Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: korrelan on June 18, 2016, 10:11:04 pm

Title: The last invention.
Post by: korrelan on June 18, 2016, 10:11:04 pm
Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but our unlimited imagination, our mecha progeny will go forth to discover new worlds, they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO 

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress. I think most of you have seen some of my work, but I thought I'd give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I'm fifty and a family man. I've had a fairly varied career so far: yacht/ cabinet builder, vehicle mechanic, electronics design engineer, precision machine/ design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/ software technical background, and I now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years.  I want the full monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were (yeah, I'm slightly nuts!).

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, Slime Mould, Birds, Octopuses, etc all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures.  I needed to find the ‘trick’ or the essence of intelligence.  I think I’ve found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials etc. would ‘have’ to function in order to produce results similar to what we were producing on binary/ digital machines, it would be a start.

I have designed and written a 3D CAD suite on which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computers per se (the brain is definitely not digital), it’s running on the emulation/ wetware/ middleware. The AGI is a closed system; it can only experience its world/ environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system…  I'm also building a sensory/ motorised torso (in my other spare time lol) for it to reside in and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.

(http://i.imgur.com/oARzswz.jpg)

The required result and an actual mammalian visual cortex map.

https://www.youtube.com/watch?v=4BClPm7bqZs (https://www.youtube.com/watch?v=4BClPm7bqZs)

This is real-time development of a mammal-like visual cortex map, generated from a random neuron sheet using my neuron/ connectome design.
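For readers wondering how a random sheet can organise itself at all, here is a deliberately tiny sketch of the competitive-learning core behind self-organising maps. This is my own illustrative toy, not korrelan's code: neurons compete for each input and the winner's weights move toward it; a real cortical map additionally pulls the winner's neighbours along, which is what produces the smooth topography seen in the video.

```python
# Toy competitive learning: two 'neurons' self-organise to cover two
# clusters of sensory input. Hypothetical sketch, not the real system.

def train(inputs, weights, lr=0.3, epochs=50):
    for _ in range(epochs):
        for x in inputs:
            # winner-take-all: the closest neuron claims the input
            w_idx = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
            # move the winner toward the input
            # (a real SOM also nudges the winner's neighbours, giving a map)
            weights[w_idx] += lr * (x - weights[w_idx])
    return weights

# two input clusters around 0.1 and 0.9 (think: two stimulus classes)
inputs = [0.10, 0.12, 0.08, 0.90, 0.88, 0.92]
weights = train(inputs, [0.4, 0.6])   # a 'random' untrained starting sheet
```

After training, each neuron has settled over one input cluster, which is the essence of a map forming from undifferentiated tissue.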

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/ written speech, recognise objects/ faces, and learn at extremely accelerated rates (compared to us, anyway).

https://www.youtube.com/watch?v=A5R4udPKOCY (https://www.youtube.com/watch?v=A5R4udPKOCY)

Recognising written words; notice the system can still read the words even when jumbled. This is because it's recognising the individual letters as well as the whole word.

https://www.youtube.com/watch?v=aaYzFpiTZOg (https://www.youtube.com/watch?v=aaYzFpiTZOg)

Same network recognising objects.

https://www.youtube.com/watch?v=Abs_DKjiZrM (https://www.youtube.com/watch?v=Abs_DKjiZrM)

And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.

https://www.youtube.com/watch?v=VGuOqIdqsBU (https://www.youtube.com/watch?v=VGuOqIdqsBU)

The system is self-learning and automatically categorises data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

https://www.youtube.com/watch?v=vt8gAuMxpds (https://www.youtube.com/watch?v=vt8gAuMxpds)

I’ve done some work on emotions but this is still very much a work in progress and extremely unpredictable.

https://www.youtube.com/watch?v=Xy2iaiLwgyk (https://www.youtube.com/watch?v=Xy2iaiLwgyk)

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’.  This is a ‘young’ starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

https://www.youtube.com/watch?v=C6tRtkyOAGI (https://www.youtube.com/watch?v=C6tRtkyOAGI)

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

https://www.youtube.com/watch?v=KL2mlUPvgSw (https://www.youtube.com/watch?v=KL2mlUPvgSw)

And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/ synapse properties in real time... and it's fun.

Phew! OK, that gives a very rough history of progress. There are a few more vids on my YouTube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops, there is usually just one small area of cortex that will respond to/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions, and eventually all elements of experience are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produces/ recognises an experience or knowledge.

Through experience, areas arise that even encode/ include the temporal aspects of an experience, simply because a temporal element was present in the experience, along with the order/ sequence the temporal elements were received in.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra.  Mid-range frequency networks supply attention points/ areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/ internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place, so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.
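A minimal stand-in for that prediction schema (my own sketch, with made-up frame names, not the actual system) is a table of which 'thought' frame followed which in past experience:

```python
from collections import Counter, defaultdict

# Hypothetical frame-sequence predictor: remember which frame followed which.
transitions = defaultdict(Counter)

def experience(frames):
    for a, b in zip(frames, frames[1:]):
        transitions[a][b] += 1        # learn: frame b followed frame a

def predict(frame):
    # expect the most frequently observed successor; None if never seen
    nxt = transitions[frame]
    return nxt.most_common(1)[0][0] if nxt else None

experience(["wake", "coffee", "work", "wake", "coffee", "news"])
```

Here `predict("wake")` returns `"coffee"` because that succession dominates the stored experience; a frame with no history yields no prediction.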

I think consciousness is the overall ‘thought’ pattern phasing from one state of situational awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.

I’ll use this thread to post new videos and progress reports as I slowly bring the system together. 
Title: Re: The last invention.
Post by: Freddy on June 18, 2016, 11:02:27 pm
That was a really interesting read, Korrelan. There are some aspects I don't fully understand (read: a lot!), but it's clear you have put a lot of work into this. Those videos of the way the 'brain' is firing off in certain areas are very much like those you see of humans. One of the first things that rekindled my interest in AI was seeing how the patterns in AIML mapped out; this is fascinating in the same way.

Who wrote the introductory quote? In a far-flung future that might just be where we are heading.
Title: Re: The last invention.
Post by: korrelan on June 18, 2016, 11:13:21 pm
AIML is a cool language/ system; I looked at it briefly but never went too deep into it.

Quote
Who wrote the introductory quote ?

I did  :).  It's the prologue for a book I'm writing explaining my theories and research.

Ponder a bit... program a bit... drink a bit... write a bit... drink a bit... repeat lol  O0
Title: Re: The last invention.
Post by: Art on June 19, 2016, 03:55:29 am
Very nice, as you know from earlier. I enjoyed seeing some of your experiments and research as opposed to just telling us about it. Nicely done.

Yes, it's always nice to know a bit more about the person with whom we interact, especially here on the boards.

Hopefully your research will continue, as each step is another chink in the armour of time. Oh... I spelled that incorrectly. No matter... I did a lot of that growing up in school. How could I convince my teachers that a lot of it was due to my ancestors being from England and Scotland? It's in the genes! They didn't buy that and weren't very forgiving either.

I did like your videos and the ones with vision recognition were especially interesting.

Hopefully, we'll hear more from your ongoing "studies" in the near future.

Cheers!
Title: Re: The last invention.
Post by: korrelan on June 19, 2016, 11:49:58 am
Yeah! My spelling is still atrocious; I have to double-check everything and still mistakes are made lol.

I work/ think on the project every day and progress is steady.  Bringing together all the different elements into one system is daunting but enjoyable.  I’m my own worst enemy because I don’t like to fail when I start something; meanwhile, thirty years later, it's kind of become part of me now. I’m never bored or lacking something interesting to think about.

Because of the nature of the beast, the system has to be massively parallel and run on more than one processor/ core.  I’ve just finished upgrading my Beowulf cluster to i7 machines (more power, Scotty) so that will make life easier (literally lol).

I’ve also just written a kind of network bot that processes a block of the data and then returns its results to a master terminal, so I can manage much larger/ faster parallel connectomes.  I write a lot of custom software for local universities, and they have hundreds of computers sitting idle in the evenings just begging for me to remotely utilise their resources.

I must stop starting sentences with ‘I’.  :)

Anyway… it's Father's Day and I’m off to treat myself to a big medium-rare steak and the nicest pint of the darkest real ale I can lay my mitts on (or four).  ;)
Title: Re: The last invention.
Post by: 8pla.net on June 19, 2016, 05:46:36 pm
May I make a friendly suggestion: since you work at a university, students there can proofread.
This is just a tip, an optional suggestion. PhDs do it all the time with their theses
(plural of thesis). Although minimal in refinement, I do feel readers like the results
of proofreading, and it creates jobs for college students with English majors.

We learn from general accounting principles that human beings are on average 95%
accurate.  Overall, when we double-check, we catch 95% of that remaining 5%, which
leaves less than half a percent.

Would it be off-topic to ask your advice about my prototypical web based neural network, here on this thread?   In any case, I look forward to following your progress on this thread.  Thanks for creating it, my friend!
Title: Re: The last invention.
Post by: korrelan on June 19, 2016, 08:40:54 pm
Hi 8pla

Um… I didn't say I worked at a university. I own a business that writes bespoke software for local schools, colleges and universities. If they have a requirement they call on me as a consultant and end software solution provider. It's usually bespoke systems; I recently wrote and installed a remote viewing and mapping system for a local academy, for example. Anyone can log on and view any live CCTV cameras (1000’s), recording logs or door access logs from any location across the ten sites; they access the data through a simple 2D map of the various premises overlaid with camera, door and server positions… that kind of thing.  CNC control, medical diagnosis, nuclear power station welding, construction, client AV information systems, supermarket/ art gallery EPOS systems, client logging and security… I’m your man lol, and I still check my own spelling.

Quote
Would it be off-topic to ask your advice about my prototypical web based neural network

All innovation is good, the more brains we have working on the AGI problem the better.

I’m a big believer in ‘the right tool for the right job’… and there is certainly plenty of room/ scope for a good web based chatbot.
Title: Re: The last invention.
Post by: 8pla.net on June 20, 2016, 01:35:47 am
What fundamentals will get an artificial neural network (ANN) to become conversational? In chatbot contests, judges are always concerned, and rightfully so, about people behind the scenes giving answers for the chatbot. Can an ANN be trained to do that: use a chatbot like a ventriloquist's dummy?
Title: Re: The last invention.
Post by: DemonRaven on June 20, 2016, 05:52:14 am
A person's spelling being bad does not equal a lack of intelligence. That being said, one thing that all living things have in common, except for maybe the simplest life forms, is that they have to learn or be taught things. I am sure you already know this, but I tend to state the obvious.
Title: Re: The last invention.
Post by: djchapm on June 20, 2016, 05:51:36 pm
Love reading about your work Korrelan!

So, the piece about recognising objects and words... what was your method of teaching it or feeding it information?  You said it's a closed system... so I'm just trying to understand if that means you're not feeding it datasets or allowing it to query the web, etc.

How does it reinforce? 

And.... you said it can understand voice and letters... so along the same lines... I'm not sure how it is doing this if you didn't "plug in the language module" or something... (like The Matrix).  Thinking if you're going for the full monty, then you can't do that, right?  You have to teach it to learn language through sensors until it figures it out, right?

This is huge, obviously an incredible amount of work. I need some advice on how to do that when you already have a family and career!!

DJ
Title: Re: The last invention.
Post by: korrelan on June 21, 2016, 09:57:54 am
Quote
Plug in the language module

A good analogy for my system is a vintage computer emulator.  You can get emulators that reproduce the internal workings of old CPUs, the Z80 etc.  The original programs will then run on the emulation, which is running on the modern-architecture PC.

Rather than try to force a modern digital machine to become an AGI, I have designed a complete wetware emulator/ processor/ system (neuromorphic).  The ‘software’ that comprises the AGI is running on the simulated biological processor, NOT the PCs.  This means I’m not limited by the constraints a binary/ digital system imposes; I can design the system to operate/ process data exactly as I require.

Rather than keyboard and mouse for inputs I’m using visual/ audio/ tactile etc.

It’s a closed system because it can only learn through its own senses.  If I’m teaching it to understand sentences, it’s reading the words off a monitor with its ‘own eyes' (cameras).  When it speaks it can hear its own voice (microphones), etc. It’s sat listening to H.G. Wells' ‘The War of the Worlds’ from an audiobook at the moment; I’m fine-tuning the audio cortex to different voice inflections.

The system is still a work in progress and by no means complete… but I’m slowly getting there.
Title: Re: The last invention.
Post by: madmax on June 21, 2016, 09:31:28 pm
If I may offer my layman's opinion about your work: I think you've made a pretty good, robust simulation of the cortex with that hierarchy and circadian-guided system, but while it is similar, it is not exactly how the cortex works, in my opinion. First, your attention is driven by sensors, if I understand correctly, whereas in real life attention is driven by an inner urge or need, so you lack some sub-hierarchy.

And, not to go on too long: emotions give a valuing system to the cortex, so that the cortex has an overall image or thought (as you say, a consciousness experience) of the outside world and, likewise, a consciousness experience of emotions. Sorry for my interruption.
Title: Re: The last invention.
Post by: korrelan on June 22, 2016, 11:40:24 am
Hi Madmax

Perhaps it was my description of how attention naturally forms and is integrated into the system that did not make sense. Attention is my current area of experimentation.

Because the system is so unusual and dynamic in its operation it’s difficult to describe how attention works but I’ll have a go.

‘Attention’ is a term I use for a similarity in certain facets of a mental pattern, attention operates at several different resolutions but the overall result is basically the same.

It’s not the sensory streams per se that produce focus points for attention, but the base internal patterns, which are influenced by the sensory streams.  If two facets of two pattern ‘thought frames’ are similar, they will occupy the same local area on the cortex; this produces a common element/ area. The attention points are very fluid when the AGI is ‘young’ and tend to move rapidly.  The more experience the AGI receives, the stronger and more fixed the attention points become.

The attention points are areas of cortex where one or more ‘thought patterns’ share common elements, and so trigger associated patterns. We normally use the term ‘attention’ to refer to a specific complete task or action, attention points in the cortex can refer to single facets of patterns.  There can be thousands of attention points involved in a task.

ABCD
KLCR
OPCY

So if ‘thought’ patterns were character strings, C would be an attention point.

When the system ‘imagines’ an apple, it’s the focus/ attention points that link/ fire the various neural patterns for all known aspects of an apple: shape, colour, size, audio (the word ‘apple’), etc. Other attention points will link/ fire current task patterns, time of day, location.
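The ABCD/KLCR/OPCY example above can be written out directly. This little sketch is my own toy, not code from the project: it finds the element shared by all 'thought frames', then uses that point to pull in every stored pattern containing it, which is the 'link/ fire' behaviour just described for the apple.

```python
# 'Thought frames' as character strings; an attention point is an
# element common to the frames currently active.
frames = ["ABCD", "KLCR", "OPCY"]

# intersection of the frames' elements -> the shared attention point(s)
attention = set.intersection(*(set(f) for f in frames))

def associated(point, memory):
    # an attention point triggers every stored pattern that contains it
    return [p for p in memory if point in p]

linked = associated("C", frames)
```

With these frames the only attention point is `C`, and firing it links all three patterns, just as the 'apple' point links shape, colour and sound patterns.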

I really need to work on this description lol.

I have plans to eventually incorporate emotions into the AGI. I have a prototype limbic system that flushes various cortex areas with neurotransmitters/ compounds; they modulate or change the operational parameters of selected neurons. It needs lots of work, as there is no research available to give me a starting point.

Title: Re: The last invention.
Post by: 8pla.net on June 22, 2016, 03:15:39 pm
korrelan,

In terms of emotions, a wheel of them is a curiosity, I feel.

Reference: The simple version (my favourite).
http://do2learn.com/organizationtools/EmotionsColorWheel/overview.htm (http://do2learn.com/organizationtools/EmotionsColorWheel/overview.htm)

More finely divided emotions are available in other versions.
Title: Re: The last invention.
Post by: 8pla.net on June 22, 2016, 03:50:59 pm
@:
Art
DemonRaven
djchapm
Freddy
korrelan
madmax

No expression of disapproval was intended by a friendly suggestion.  An expression of regret is offered for any feelings of unfriendliness, which were unintended.

Your fellow members here who have been published in books may, from that experience with a publisher, view the benefits of proofreading as practical advice.  That's all.

We're all friends here.
Title: Re: The last invention.
Post by: Art on June 23, 2016, 02:08:10 am
Thank you, 8pla, but like you said, we're all friends here and no feathers were ruffled with me at all.

All suggestions are like opinions...everyone has them from time to time and they are always subject to interpretation of usefulness by the recipient. ;)

It was nice of you to offer your good intentions, nonetheless. It shows good character! It's all good here!
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 01:30:05 pm

I've had a few spare hours lately, so I've redesigned the bot's head. The hobby servos on the old head were too noisy (the microphones/ ears were positioned close to the servos), so I'm building a new version using stepper motors: faster, quieter and much better positioning. I still have to attach the gyros, accelerometers, mouth display, microphones, etc. and get it back on the body.  I said I’d post regular updates so…

https://www.youtube.com/watch?v=J8BNMVhZJ4c (https://www.youtube.com/watch?v=J8BNMVhZJ4c)



Title: Re: The last invention.
Post by: madmax on June 29, 2016, 03:19:52 pm
Thanks for explaining. My point of view from the first post is that your attention could be triggered at any time by sufficient 'thought patterns', so if your bot doesn't have inner needs or some system of pre-programmed laws of behaviour, it will act schizophrenic-like, in my opinion.

There is some research about the inner, unconscious origin of emotion through some basic liking and disliking system, but the findings are still under debate, from what I know (and I don't know much).  http://lsa.umich.edu/psych/research&labs/berridge/publications/Berridge%20&%20Winkielman%20unconscious%20emotion%202003.pdf (http://lsa.umich.edu/psych/research&labs/berridge/publications/Berridge%20&%20Winkielman%20unconscious%20emotion%202003.pdf)
Title: Re: The last invention.
Post by: 8pla.net on June 29, 2016, 10:31:43 pm
Swapping servos out for stepper motors is interesting.

Using servos, text-to-speech may be lip-synced.

How about with steppers, I wonder?


By 'mouth display', you mean it does not use motors, right?
Title: Re: The last invention.
Post by: keghn on June 29, 2016, 10:58:15 pm
Hey ya Korrelan, how is this connected to your computer?
Is it USB, C/C++ ethernet socket programming, or something else?
Are you using a breakout board like an Arduino?
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 11:19:03 pm
@8pla

Quote
By mouth display, it does not use motors, right?

The bot is a case of function over form at the moment.  The last head used a simple LED strip that modulated to the vocal output. There has to be enough expression from the bot to aid human interaction but still allow for my experimentation with senses.  The two cameras give a wide periphery with a 30% stereo overlap.  The telephoto centre camera provides a high-resolution fovea. The rotational X axis is angled at 35 degrees (front to back) to teach the motor cortex to combine complex X,Y combinations to track a level X movement, etc.

The arms are in progress and are much more anthropomorphic.

It’s a catch-22… the AGI needs a body and senses to experience our world, but if I spend too much time developing the bot then I’m not working on the AGI cortex, which is the main project.

 :)
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 11:35:44 pm
@keghn

Quote
How is this connected to your computer?

At the moment I'm using an SDK provided with a serial RS-232 to RS-485 converter I acquired.  The camera gimbal is using steppers, so I'm driving them with a PTZ controller, which requires a 485 signal. I had the bits available.

There are many cheap PC USB stepper interfaces available, or indeed for the Arduino, Raspberry Pi, etc.

 :)
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 01:09:41 am
That is cool, I understand. Serial communication. You are way ahead of me.

For me, I am going to use socket programming. USB is too hard to work with. For communicating between
my laptop and a Raspberry Pi 3.
 Then the Raspberry Pi will do all communication to vision, sound, motors, sensors, and wheel encoders.

C Programming in Linux Tutorial #034 - Socket Programming
https://www.youtube.com/watch?v=pFLQkmnmD0o (https://www.youtube.com/watch?v=pFLQkmnmD0o)
Title: Re: The last invention.
Post by: 8pla.net on June 30, 2016, 04:51:21 am
What about a USB-Serial adapter?
Title: Re: The last invention.
Post by: korrelan on June 30, 2016, 08:50:29 am
There are loads of ready made controllers available…

http://www.robotshop.com/uk/stepper-motor-controllers.html (http://www.robotshop.com/uk/stepper-motor-controllers.html)

Some are even supplied with example code…

https://www.pc-control.co.uk/stepperbee_info.htm (https://www.pc-control.co.uk/stepperbee_info.htm)

http://www.phidgets.com/products.php?category=13 (http://www.phidgets.com/products.php?category=13)

If you don’t mind dabbling in simple electronics, one of the cheapest/ simplest/ fastest output methods from a PC is to use LDR (light-dependent resistor) sensors arranged in a strip across the bottom of your screen.  Simply changing the shade of the block under each LDR directly from your software can drive a relay/ servo/ solenoid/ etc. This obviously also acts as an optical isolator, so it keeps the systems separate and safe. I used this method many years ago to drive bots from a ZX80.
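As a rough illustration of the LDR trick: the software only has to map each output channel's level to the grey shade of the block sitting under its sensor. This is my own sketch; the drawing call at the end is a placeholder comment, not a real API.

```python
# Hypothetical LDR screen-output channel: an output level of 0.0-1.0 becomes
# the grey shade of an on-screen block; the LDR taped over that block then
# drives a relay/ servo through a simple transistor circuit.

def level_to_shade(level):
    level = max(0.0, min(1.0, level))   # clamp to the valid range
    return round(level * 255)           # 0 = black block, 255 = white

channels = [0.0, 0.5, 1.0]              # e.g. three actuator commands
shades = [level_to_shade(c) for c in channels]

# a real program would now paint each block, e.g. (hypothetical call):
# canvas.fill_rect(x=i * BLOCK_W, y=screen_h - BLOCK_H, shade=shades[i])
```

The clamp matters in practice: an out-of-range command saturates the block rather than wrapping, so the actuator just pegs at its limit.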

I'm building a variation on this interface that allows hundreds of mixed complex output signals to simply control a single servo.

One of my biggest output problems is that, because the connectome is based on biological systems, joint flexion is expressed as two opposing, competing signals over many different muscle groups (bicep, tricep, etc). Throw torque and position feedback into the mix and you have a complex feedback loop.  The motor controller is going to have to convert these signals into a usable form for the steppers/ servos, so it looks like I’m going to have to build my own.  Signal-to-response lag is also going to be a problem.
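At its very simplest, the opposing-signal problem reduces to something like the sketch below (an illustrative toy, not the controller korrelan is building): the difference between flexor and extensor drive gives the commanded motion, while their sum gives co-contraction stiffness.

```python
# Hypothetical antagonistic-pair decoder: biological joints are driven by
# two competing signals (e.g. bicep vs tricep); a stepper wants one number.

def decode(flexor, extensor, gain=90.0):
    velocity  = gain * (flexor - extensor)   # net drive, degrees/sec
    stiffness = flexor + extensor            # co-contraction = joint rigidity
    return velocity, stiffness

v, s = decode(flexor=0.8, extensor=0.2)   # strong flexion, moderate stiffness
```

Equal flexor and extensor drive gives zero motion but non-zero stiffness, which is exactly the property a simple single-signal servo command cannot express, hence the need for a custom controller.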

@keghn

I use separate input modules/ programs for audio/ vision/ tactile feedback that communicate with the main PC cluster through TCP pipes. This keeps the processing load of converting video to neural code etc off the main cluster. I also use TCP to control the output PC from the main cluster… again to help split the load.

So yeah! A raspberry pi with a breakout board would be cool for this.
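The split described above (sensor modules converting raw data to neural code, then piping it to the cluster over TCP) can be sketched with a minimal length-prefixed frame protocol. The framing scheme here is my assumption for illustration, not korrelan's actual wire format.

```python
import socket
import threading

def serve_once():
    """Cluster side: accept one connection and read one length-prefixed frame."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # OS-assigned port for the example
    srv.listen(1)
    port = srv.getsockname()[1]
    result = {}

    def run():
        conn, _ = srv.accept()
        n = int.from_bytes(conn.recv(4), "big")   # 4-byte big-endian length
        buf = b""
        while len(buf) < n:                        # recv until frame complete
            buf += conn.recv(n - len(buf))
        result["frame"] = buf
        conn.close()
        srv.close()

    t = threading.Thread(target=run)
    t.start()
    return port, t, result

def send_frame(port, payload):
    """Sensor-module side: length-prefix the encoded frame and send it."""
    s = socket.socket()
    s.connect(("127.0.0.1", port))
    s.sendall(len(payload).to_bytes(4, "big") + payload)
    s.close()
```

The length prefix is the important part: TCP is a byte stream with no message boundaries, so without explicit framing a busy sensor module's frames would smear together at the cluster end.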
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 02:41:31 pm
@korrelan

Cluster?......................................................?

Like you're putting together a bunch of PCs with a local ethernet/ wi-fi, to work as one?

Well, at the moment this is the direction I am headed:

http://hackaday.com/2016/01/25/raspberry-pi-zero-cluster-packs-a-punch/ (http://hackaday.com/2016/01/25/raspberry-pi-zero-cluster-packs-a-punch/)


Title: Re: The last invention.
Post by: 8pla.net on June 30, 2016, 05:26:10 pm
This guy created his own stepper motor driver board, which runs off a parallel port:

https://www.youtube.com/watch?v=YlmzsWlK_JA (https://www.youtube.com/watch?v=YlmzsWlK_JA)

The Turbo C Language source code can be seen by freezing the video frames.
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 05:39:31 pm
Talk about having the rug pulled out from under you: I had made an external data and address bus
that used the parallel printer port, which accessed all of my external motors, sensors, and audio.
And then the world said, a few years back, "We are not supporting the parallel printer port any more."
Wow. That was a big setback for me when all new computers had no parallel port, and I'm still recovering
from it!!!!!!
Title: Re: The last invention.
Post by: korrelan on July 01, 2016, 10:49:07 pm
https://www.google.co.uk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=parallel+port+card&safe=off&tbm=shop (https://www.google.co.uk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=parallel+port+card&safe=off&tbm=shop)

 :D
Title: Re: The last invention.
Post by: keghn on July 02, 2016, 03:37:40 am
Thanks buddy. But I have already tried a few. They must have removed the bidirectional data flow; that
is, they only output and do not take sensor information in. Most robotic scientists do not take
input data that seriously. For me, a pattern theorist, input data, and lots of it (big data input), is more important.

I have already made the decision to go with ethernet socket programming. I'm going to use my laptop and
a few Raspberry Pi 3s, all connected together with gigabit ethernet USB 3.0 adapters. The Raspberry Pis
will be connected to the outside world and the data will be funnelled up to my laptop.

Then I can collect and record a huge amount of temporal data from all the sensors and look for repeating patterns in the data.
Once I find a pattern, I will hit it with an output in the hope of improving the pattern or extending an existing pattern
into an area of chaos.
Title: Re: The last invention.
Post by: 8pla.net on July 02, 2016, 04:14:23 pm
Don't look now, but USB 4.0 is on the horizon.

Maybe we should just hardwire our own interface directly into the computer? And forget the operating system: just bootstrap directly into our A.I. program?
Title: Re: The last invention.
Post by: korrelan on July 03, 2016, 10:01:19 pm
I’ve been messing around with my system again.

Part of my theory is that the whole brain is laid out the way it is because it has no choice; its function and layout are a result of the DNA blueprint that initially builds it and of the properties of the data streams that drive it.

I need to be able to cut huge holes in the cortex and have it rebuild based on the ‘qualities’ of the incoming data streams. The self-organising structure should at least partly rebuild or repair damage (loss of vision/ stroke, etc).

https://www.youtube.com/watch?v=uSFIqL2cPyQ (https://www.youtube.com/watch?v=uSFIqL2cPyQ)

 
Title: Re: The last invention.
Post by: 8pla.net on July 04, 2016, 03:33:50 pm
My theory is that there is a choice which takes tens of thousands of years to make.
Title: Re: The last invention.
Post by: korrelan on July 04, 2016, 03:46:31 pm
Then we agree...

The DNA blueprint which guides our connectome's initial layout is the result of genetic evolution over thousands of years.

Unfortunately I don't have that much time lol, so I'm starting from an initial 'best guess' base connectome and learning as I go along.

 :)
Title: Re: The last invention.
Post by: korrelan on September 02, 2016, 10:56:36 pm
I found this old video (7 years old) and thought I’d add it to my project thread.

The general consensus amongst neuroscientists is that the mammalian ocular system is based on orientation/ gradient/ etc. cortex maps.

There are thousands of links to scholarly research/ findings regarding the maps.

https://scholar.google.co.uk/scholar?start=10&q=visual+orientation+maps&hl=en&as_sdt=0,5&as_vis=1 (https://scholar.google.co.uk/scholar?start=10&q=visual+orientation+maps&hl=en&as_sdt=0,5&as_vis=1)

I was trying to figure out how the output of approx 100 million rods and cones in the foveal and peripheral areas could be condensed by the two neuron layers and ganglion cells in the retina, down into the approx 1 million axons in the optic nerve, without losing detail or resolution. I knew each nerve carried the left half of one retina combined with the right half of the other, and that they exchanged in the optic chiasm before terminating in the LGN (the LGN is very important).

I knew from reading research papers that the main theory was that the mammalian ocular system recognised images from the assemblies of gradients and angled line fragments detected by the primary visual cortex after pre-processing by the retina and LGN. We never see or perceive a full view/ image of our world but build an internal representation from the sparse encoded ocular data streams.

Took me a few years but I think I eventually figured out the ideal neural connectome/ structure and synaptic signalling methodology to enable me to start coding some basic simulations.

https://www.youtube.com/watch?v=TYUu78-3wFk (https://www.youtube.com/watch?v=TYUu78-3wFk)

As you can hopefully see, I started out with a sheet of untrained V1 neurons (4 layers) and after a very short training session the system was able to recognise the four basic orientations (colours and orientations shown in the centre) of the image. This was a first low-resolution test but I thought I'd show it for completeness.
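The orientation test can be caricatured with a conventional filter-bank approach. This is only a stand-in sketch: korrelan's model uses trained spiking neurons, not fixed kernels, and the kernel layout here is my own.

```python
# Four 3x3 'receptive fields', one per basic orientation; each lists
# the pixel positions that excite the corresponding detector.
KERNELS = {
    "horizontal": [(1, 0), (1, 1), (1, 2)],
    "vertical":   [(0, 1), (1, 1), (2, 1)],
    "diag_down":  [(0, 0), (1, 1), (2, 2)],
    "diag_up":    [(2, 0), (1, 1), (0, 2)],
}

def dominant_orientation(patch):
    """patch: 3x3 list of 0/1 pixels. Returns the orientation whose
    receptive field overlaps the patch the most."""
    scores = {name: sum(patch[r][c] for r, c in cells)
              for name, cells in KERNELS.items()}
    return max(scores, key=scores.get)

edge = [[0, 0, 0],
        [1, 1, 1],
        [0, 0, 0]]
print(dominant_orientation(edge))  # horizontal
```

A full V1 map would cover all angles at many scales, not just these four.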

This is the same system that later grew to be able to recognise objects/ faces/ words/ illusions/ etc.

https://www.youtube.com/watch?v=-WnW6U7CRwY (https://www.youtube.com/watch?v=-WnW6U7CRwY)

I’ll post a general update to my AGI project soon.

 :)

https://www.youtube.com/watch?v=Gv6Edl-pidA (https://www.youtube.com/watch?v=Gv6Edl-pidA)

Title: Re: The last invention.
Post by: Freddy on September 02, 2016, 11:12:46 pm
Is this a bit like edge detection ? I'm only familiar with that kind of thing in my graphics work, there's a thing called Non Photorealistic Rendering, which is making a 3D renderer create things like line drawn comic art, or a close approximation. So one would build a shader to pick out edges and shadows etc. But I think I digress.

Are the light grey lines places where it hasn't been able to tell? Would more passes fill in those gaps? Sorry, I'm not well versed in how the eyes work. I knew about rods and cones, but don't remember there being so many. I'd be interested in how it keeps its resolution too.

100 million rods and cones is almost 47 HD screens. But it's probably not like for like.
Title: Re: The last invention.
Post by: korrelan on September 02, 2016, 11:34:53 pm
Yeah! I figured the LGN does many jobs, one of which is to apply a kind of filter that picks out the edges based on contrast. A separate stream does the same with colour/ motion, but these take different paths.

A full V1 orientation map detects all angles not just the basic four shown. The grey areas represent the angles that are not triggering a trained neurons receptive field.

Quote
I'd be interested in how it keeps its resolution too.

Small groups of cones form 'centre-surround on/off' sensors; these groups are arranged in a hexagonal matrix across the retina, concentrated at the fovea and becoming sparser towards the periphery.

The spike trains produced by the rod/ cone group sensors are modulated by the ganglion cells, and a temporal shift in the spike frequencies carries the group's collective information down the optic nerve to the LGN.

This is how I have found it to work; it's not a general consensus.
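A minimal sketch of the centre-surround idea described above, simplified to a square grid rather than the hexagonal retinal matrix (the function and values are illustrative only):

```python
def centre_surround(image, r, c):
    """On-centre response at (r, c): centre pixel minus the mean of its
    8 neighbours. Positive means the 'on' cell fires; negative, the 'off' cell.
    (Simplified to a square grid; the retina uses a hexagonal matrix.)"""
    centre = image[r][c]
    neighbours = [image[r + dr][c + dc]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    return centre - sum(neighbours) / len(neighbours)

# A bright spot on a dark background excites the on-centre cell...
spot = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(centre_surround(spot, 1, 1))   # 9.0

# ...while uniform illumination produces no response at all. Flat
# regions send nothing, which is where the compression comes from.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(centre_surround(flat, 1, 1))   # 0.0
```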

 :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 12:15:50 am
Hi all…

I’ve been busy lately so I’ve not had much time to work on my AGI project. I have managed to make some progress integrating the sensory cortexes though.

As you know from my previous posts on this thread, I have designed a single neuromorphic neural net that can learn unsupervised from its experiences.  The basic idea is to create an intelligent 'alien' connectome and then teach it to be human. My connectome design can recognise objects or audio phonemes equally well; it's self-organising and self-categorising.

When fed an ocular sensory stream, for example, the connectome map becomes the equivalent of the visual cortex (V1-V4). Camera data is fed in, the system learns to recognise objects from their optical 3D properties, and it spits out a result as a unique set of firing neurons. A zebra (lol), for example, will always produce approximately the same output, no matter what angle, scale or lighting conditions are viewed, once it's been learned.  Audio, visual, tactile, joint torque, etc. are all learned, categorised and encoded into a small set of output neurons; it's these output neurons from the individual maps that this example uses.
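That "unique set of firing neurons" output can be caricatured as sparse-set matching; the Jaccard overlap and threshold below are my own stand-ins for whatever the real connectome computes:

```python
def recognise(firing, learned, threshold=0.5):
    """firing: set of output-neuron indices currently active.
    learned: dict mapping object name -> its characteristic firing set.
    Returns the best-matching object, or None below the threshold."""
    def overlap(a, b):
        return len(a & b) / len(a | b)   # Jaccard similarity
    best = max(learned, key=lambda name: overlap(firing, learned[name]))
    return best if overlap(firing, learned[best]) >= threshold else None

learned = {"zebra": {2, 7, 11, 19}, "ball": {3, 5, 11}}
# A noisy view of the zebra: one neuron missing, one spurious extra.
print(recognise({2, 7, 19, 23}, learned))  # zebra
```

The tolerance to missing/spurious neurons is a crude analogue of the angle/scale/lighting invariance described above.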

The next step was to integrate the various sensory cortex maps into one coherent system.

This is the basic sensory connectome layout (excuse crudity it was drawn a while ago).

(http://i.imgur.com/HoswRkM.jpg)

I recorded the outputs of the input sensory maps whilst they recognised various sensory streams relative to their functions: audio, visual, etc. I then chained them all together and fed them into a small/ young connectome model (9.6K neurons, 38K synapses, 1 core). This connectome is based on a tube of neuron sheet; the output of the five layers is passed across the connectome (myelinated white matter) to the inputs of the frontal cortex (left side).

As usual the injected pattern is shown on the right; the pattern number and the system's confidence in recognising the pattern are shown lower left.  The injected pattern is a composite of outputs from five sensory maps: audio, visual, etc. (40 patterns)

https://www.youtube.com/watch?v=tV1tuXkinvc (https://www.youtube.com/watch?v=tV1tuXkinvc)

On the right, just below the main input pattern (5 × 50 inputs), you can see the sparse output of the frontal cortex; this represents the learned output of the combined sensory map inputs. This gets injected (cool word) back into the connectome; it will eventually morph the overall 'thought' pattern to be a composite of input and output, so any part of the overall pattern from any sensory map will induce the same overall 'thought' pattern throughout the whole connectome. This will enable the system to 'dream' or 'imagine' the mixed combinations of sensory streams, just the same as it can a single stream.

0:19 shows the 3D structure and the cortical columns formed on the walls of the tube; the periodic clearing of the model during the video shows only the neurons/ synapses/ dendrites involved in recognising that particular pattern.

Anyway… the purpose of the test was to show the system had confidence (lower left) in the incoming mixed sensory streams, and could recognise each mixed pattern combination.

Each pattern was recognised.
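The "any part of the pattern induces the whole thought" behaviour reads like classic pattern completion; here's a toy sketch (the index ranges and names are mine, not korrelan's):

```python
def complete(partial, stored_patterns, min_overlap=1):
    """Pattern completion: find the stored composite 'thought' whose
    active bits best cover the partial cue, and return it whole.
    partial and patterns are sets of active unit indices across all maps."""
    best = max(stored_patterns, key=lambda p: len(p & partial))
    return best if len(best & partial) >= min_overlap else None

# Composites span five maps; say indices 0-9 are audio, 10-19 visual, etc.
thoughts = [{1, 4, 12, 15, 23}, {2, 6, 11, 18, 27}]

# A purely visual cue (indices 12 and 15) recalls the full composite,
# audio bits included: the 'dream/imagine' behaviour described above.
print(sorted(complete({12, 15}, thoughts)))  # [1, 4, 12, 15, 23]
```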

 :)
Title: Re: The last invention.
Post by: kei10 on October 07, 2016, 02:30:47 am
That is astonishingly impressive! I'm too dumb to get it, though.

That's gonna beat me to it... Dayum!  ;D

Keep up the amazing work! :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 08:29:18 am
@Kei10

Quote
That's gonna beat me to it

Not necessarily, this is just my best attempt at an AGI and not the only one/ method. It’s still basically a glorified pattern matcher and I have many huge obstacles to overcome. I still might fail at the end of it all… it’s about the journey lol.

Plus I’m not getting any younger and have limited life left, though once the AGI is up and running it will soon sort my longevity out and I can live forever… muah! Lol.

Never give up… it’s not solved until someone can prove it’s solved.

 :)
Title: Re: The last invention.
Post by: keghn on October 07, 2016, 04:44:29 pm
 Very nice work @Korrelan! 
 I am working on AGI too, but I will use any non-human theory to make it work. You and Jeff Hawkins really try hard to match the organic style of the human brain.
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 06:10:25 pm
@Keghn

Quote
I am working on AGI too.

Jeff Hawkins is a clever bloke; I wish I had his funding lol. I’ve read your AGI theory and it’s very interesting… I even made notes; kei10’s theories too… everyone’s… all ideas are relevant until proven otherwise lol.

Quote
You and Jeff Hawkins really try hard to match the human organic style of the human brain.

The computer you are using is running a program; the program consists of code, the code consists of sentences, sentences consist of words, the words consist of letters, the letters are made from pixels, and eventually you get down to the binary base language/ architecture. It’s impossible to figure out how a modern computer works by looking at the program. (Make sense?)

If I were to give you a large bag full of perfectly spherical balls and an empty bucket, your task would be to calculate how many of the small balls would fit into the bucket. There are two ways of achieving this task: the first is to write a complex computer algorithm that utilises 3D volumes etc. to get the calculation, or… you could just pour the balls into the bucket till it’s full and then count them.

I finally chose this approach because I believe (Neo) that nature is giving something away for free, something that won’t be realised unless I build and test this kind of system. I’m 99% certain I’ve figured out what it is too.

We are the only highly intelligent species we currently know of… why try to reinvent the wheel, we are like we are… for a reason. (Cue spooky music).

 :)
Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 06:47:24 pm
Really impressive work there Korrelan, very impressed am I  8)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 07:20:02 pm
@ Freddy

You were probably too busy thinking about if you could reply, to stop and think if you should… certainly wasn't a forced reply though.

  8)
Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 09:20:46 pm
Well I tried to think of something clever to say, but to do that I would have to be able to understand what you are doing more  ;)

I saw the copyright on the image was 2001 or something, so you've been working on it for a long time. No wonder I can't quite grasp it.

From a purely aesthetic perspective the video imagery was wonderful to see. I felt like saying something positive and so did. :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 10:01:53 pm
Ok I think have a mental illness… I see patterns everywhere…

Me… I believe (Neo) = Matrix

You… Very Impressed am I = Yoda (star wars)

Me… You were probably too busy thinking about if you could reply = Jeff Goldblum (Jurassic Park)

http://www.hippoquotes.com/biotechnology-quotes-in-jurassic-park (http://www.hippoquotes.com/biotechnology-quotes-in-jurassic-park)
(picture 4)

Me… certainly wasn't a forced reply = The Force (star wars) to let you know I got the Yoda quote.

For some really strange reason I thought we were both swapping hidden ‘movie quotes’.

Hahahahaha….

Edit: In hindsight my reply must have appeared rather strange and rude, my sincere apologies Freddy.

:)

Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 10:51:47 pm
Ahh yeah lol - it was a Yoda thing that I made, but I totally missed the force part of yours.

It's been a while since I saw Jurassic Park...

We were almost in tune ;D

It didn't seem rude btw.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 01:20:39 pm
Prediction.

or... The Cat sat on the...

Temporal shift in perception or mental time travel.

Our nervous system is slow… very slow by computer standards. It takes ages (lol) for the light reflecting off an object to move from the retina down the optic nerve and through the various cortex areas before finally being recognised.

We cope with this by using prediction. At all levels of neural complexity we use prediction to live in the moment, and not lag behind reality.

At the highest level it’s a major part of imagination; given a scenario you can predict the possible outcomes based on your past experiences.

An AGI system needs prediction as a core function.

In my design, applying a linear shift to the output neurons equates to a temporal shift in the recognition sequence. So although the system learned to recognise each pattern and its sequence accurately as it was fed into the NN, the predicted sequence can be shifted in time.

https://www.youtube.com/watch?v=I8xChHYWNUs (https://www.youtube.com/watch?v=I8xChHYWNUs)

Toward the end of the video the sequence of confidence spikes comes before the injected pattern in the sequence.

It has learned from experience which pattern will come next.
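A toy version of that shifted recognition, reduced to a learned transition table (a drastic simplification of the NN, purely illustrative):

```python
def learn_transitions(sequence):
    """Record which pattern followed which during training."""
    return {a: b for a, b in zip(sequence, sequence[1:])}

def predict(current, transitions):
    """Shift recognition one step ahead: answer with the pattern the
    system expects next, before it actually arrives."""
    return transitions.get(current)

training = ["A", "B", "C", "D"]
t = learn_transitions(training)
print(predict("B", t))  # C  (confidence fires before C is injected)
```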

:)
Title: Re: The last invention.
Post by: kei10 on October 18, 2016, 02:55:03 pm
Good gawd I am talking to one of the master of neural networks!  ;D
Title: Re: The last invention.
Post by: LOCKSUIT on October 18, 2016, 09:13:32 pm
As much as that video looks cool......I don't think it has any functional value.........you see there is 0 prediction when I look at something, only after the input gets to the memory can it "predict" and semi-accurately recognize through a convolutional NN, then and only then are the selection's linked actions done or not (depends on if they are +/-), then, again, when input enters it goes to rewards for ranking confidence to create such before again going to the memory to search. You can't jump to search before search lolol.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 10:09:08 pm
Quote
you see there is 0 prediction when I look at something

I think every thought we have; or action we take involves implicit predictions.

A simple high level example…

Consider catching a thrown fastball.  Our ocular system just isn’t fast enough to track the flight of an 80 km/h ball moving toward us. The last image you would probably see before the ball reaches you is the pitcher's posture as the ball leaves his hand. So how do we manage to catch it? (We lag approx 80 ms behind reality) :P

Hold your hand out and touch the table; even before your hand hits the surface you have an expectation of what the surface will feel like… a sensory prediction.

Your pen rolls off the desk, you place your hand ready to catch it... physics prediction.

You wouldn't be able to follow the flight of a bird or the movement of a vehicle without prediction.

Do you like music? When listening to music how do you know what note is coming next? You could say from memory... but memory implies past tense... you haven't heard the next note yet... an audio prediction.

Cool fact… Playing a game at 60 fps on a modern HD monitor, with roughly 17 ms per frame, you’re visually lagging about 5 frames behind the game at any one time. Without neural prediction you would be toast. lol
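Worked through, that lag arithmetic looks like this (the 80 ms figure is korrelan's approximation from the fastball example):

```python
fps = 60
frame_time_ms = 1000 / fps     # ~16.7 ms per frame at 60 fps
neural_lag_ms = 80             # approximate visual lag behind reality

frames_behind = round(neural_lag_ms / frame_time_ms)
print(frames_behind)  # 5
```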

Quote
You can't jump to search before search lolol.

Why 'search' when you can predict the answer based on the last few 'searches'.

A, B, C… What’s next?

'Search' is a ‘serial’ action/ term.  The brain doesn't need to 'search' because of its parallel architecture. Even CNNs have a parallel schema… otherwise they wouldn't work.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on October 18, 2016, 11:15:57 pm
Ah I see what you're saying. With my system, it sees the enemy or thinks about the enemy when he's not even around by sudden ignition during a day's time, then actions are done. This means when it sees the pitcher just about to throw, it puts its hands to the matching spot it looks like it's going AND initiates any other actions after this (linked) i.e. wipe left wipe right, - otherwise you can't know where it's going no no prediction happening here just actions by cue like said yes this is right yes bro....>CUE, ok. AND keep in mind like said a play of actions can be initiated by match or self-ignition. On cue the winning sense is matched and selected and any linked as a play, no selecting of any other senses in the assorted storage mem knn thingy.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 11:35:46 pm
I predicted you would say that.  :2funny:
Title: Re: The last invention.
Post by: LOCKSUIT on October 19, 2016, 09:18:48 am
No, something you saw today selected one of your memories, i.e. prediction, and linked to it was what I would say.

Example:
Joe tells me he likes diamonds. Dan brings up Joe in a conversation, and then I remember the memory that's linked to the memory of Joe's name.

Reason:
Memory is only selected by input or by charging up on its own, and if linked to one that does. Then it is sensed as input internally!
Title: Re: The last invention.
Post by: korrelan on October 19, 2016, 10:01:29 am
Cool! I’m pleased we got that sorted out.

It’s good that we are approaching the problem space from different angles/ points of view.

The problem with theories… it’s impossible to prove them true or that you are correct unless you put the theory into action; you have to prove it’s true.

If I ultimately fail in my endeavours… perhaps you will one day succeed, either way it’s a win for humanity; or a fail depending on your stance on AGI. Lol.

It's all good fun.  :)
Title: Re: The last invention.
Post by: kei10 on October 19, 2016, 11:04:12 am
Good point!

(http://i.memeful.com/media/post/YMKD7RQ_700wa_0.gif)
Title: Re: The last invention.
Post by: LOCKSUIT on October 19, 2016, 10:06:37 pm
Almost nobody seems to have replied to my recent big "ultimate" thread....??? I would have liked feedback.
Title: Re: The last invention.
Post by: 8pla.net on October 20, 2016, 02:44:25 am
Can the last invention invent itself out of existence, with an invention of its own?
Title: Re: The last invention.
Post by: LOCKSUIT on October 20, 2016, 04:15:57 am
My brain and AIs will simply update to the highest advanced form i.e. the utopia sphere is the end-most advanced organism while we will also be but in a AI-algorithm way so the "ghost" created in us senses, and senses the most awesomeness.
Title: Re: The last invention.
Post by: korrelan on October 20, 2016, 09:44:17 am
@BF33

Quote
Almost nobody seems to have replied to my recent big "ultimate" thread.... I would have liked feedback.

I don’t think one simple diagram is ever going to explain to a layman/ peer how the most complex structure/ system we know of in the known Universe… works in its entirety.

One diagram can give a brief overview, a high-level (no detail) representation of a complex system and the general flow of data (labels and arrows would have been nice). Perhaps a flow chart would be better suited? Correctly labelled, using universally understood graphical representations/ symbols.

Though; it was an improvement on your last diagram.

@ 8pla

Quote
Can the last invention invent itself out of existence, with an invention of its own?

Quite possibly… though I just see that possibility as another reason why prediction is a very important core requirement for an intelligent system.

I don’t do ethics lol.  Everyone excels at something; I would place myself firmly at the technical end of the spectrum of human abilities. There are many people better qualified than I to decide whether an intelligent machine is a good or bad thing. But first it must be built/ created.

I do take every reasonable precaution possible whilst the AGI is being tested or running a learning cycle though, no possible Internet connection etc… you never know lol.

@BF33

Quote
My brain and AIs will simply update to the highest advanced form i.e. the utopia sphere is the end-most advanced organism while we will also be but in a AI-algorithm way so the "ghost" created in us senses, and senses the most awesomeness.

Hmmm… doesn’t that kind of remind you of something else?

 :P
Title: Re: The last invention.
Post by: LOCKSUIT on October 20, 2016, 10:20:51 pm
No it doesn't remind me of anything. Not god for certain - I'm not scared of using or seeing the word heaven or god, I enjoy it and use it for what really could exist rather than, for example, Jesus or his heaven or his spirits. Perfect heaven will result in the end, not just robotic metal heaven. And as I said, particles alone have no reason to justify our survival. The algorithm of particles must make an illusion. Obviously evolution would never result in a biological form of that exact needed type of algorithm on its own, meaning the universe basically planned the particles' destiny, while particles themselves have their own reactive destiny too. That's why we are still here etc. It's going perfectly in a way, yes.
Title: Re: The last invention.
Post by: Art on October 22, 2016, 12:42:38 am
Please define your use of the word, "Particles" for us (me). Are you referring to molecules, atoms, cells, DNA, Dust, Pollen, tiny specks that make us who we are (humans) or other living things?

I'm sure there's a correct scientific name for your particles, just so we can all be on the same page.

Thank you for your time.
Title: Re: The last invention.
Post by: kei10 on October 22, 2016, 02:20:46 am
This might be a bit off topic, but according to my research as a philosopher into the reachable information about the universe, it appears to me that it is limited, all due to the Anthropic Principle.

What I manage to derive from this is that there are four classifications of the things I see today;


Meaning that there are only very few Concrete things in our world, prepared at the "lowest level" -- I suppose that's what you'd call "destiny". However, the things formed through higher levels, the quasi layer, aren't planned; they just happen by nature through time and space, derived from the concrete layer.

For example; given a set of letters A, B, and C at the lowest possible level of quasi, or at the concrete level, you can only ever form a few higher-level quasi Words from this set; { A, B, C, AB, BA, AC, CA, BC, CB, ABC, ACB, BAC, BCA, CAB, CBA }

Then we can form an even higher quasi level, called Sentence; ABCBABCACABCACCACBABCACB.

It does not end here. We can make it into a grammar based on rules. And then the whole thing forms a Language. Then it goes on and on...
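kei10's count of fifteen "words" from three letters can be checked mechanically (a trivial sketch):

```python
from itertools import permutations

letters = "ABC"
# All non-empty ordered arrangements of distinct letters: the fifteen
# 'words' listed above (enumerated in a different order here).
words = [''.join(p) for r in range(1, len(letters) + 1)
         for p in permutations(letters, r)]
print(len(words))   # 15
print(words[:6])    # ['A', 'B', 'C', 'AB', 'AC', 'BA']
```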

The second example is the digital world, where it is impossible to determine the physical hardware from its software without some component that breaks the fourth wall bounding the digital world. Anything within it cannot understand the concrete-level software mechanism without an intervention that violates the boundary, such as the machine code that simulates the digital world itself.

The third example is dimension; anything that exists within the second dimension cannot see anything beyond it, like the third dimension and higher. Thus every three-dimensional entity cannot exactly perceive anything four-dimensional, even if it exists.

Thus, that is why nothing in this world is perfect, because it isn't fully destined. And we exist by mere chance within an area of surreal chain reaction caused by the basis of chemistry, which in turn results in the support of life...
Title: Re: The last invention.
Post by: keghn on October 22, 2016, 02:34:37 am
https://en.wikipedia.org/wiki/Particle_swarm_optimization 

https://en.wikipedia.org/wiki/Swarm_intelligence
Title: Re: The last invention.
Post by: LOCKSUIT on October 22, 2016, 04:18:28 am
It's simple.

Korrelan, I meant "physics" particles.

Likely particles randomly move around ~ uncertainty.

But as I explained, the universe must have destined an outcome from the start that would result in the needed algorithm to make an illusion of consciousness from the AI machine. While on top of this, particles have destiny, possibly with a little random uncertainty popping up here and there etc.

In the universe, digital computers can make any physics and have people "in" the computer! But the computer is made of our particles. And of our destiny particles, i.e. my mom affects the virtual dinosaur in a new physics world.
Title: Re: The last invention.
Post by: 8pla.net on October 22, 2016, 02:18:35 pm
"I do take every reasonable precaution possible whilst the AGI is being tested or running a learning cycle though, no possible Internet connection etc… you never know lol.", replied korrelan.

My previous comments were inspired by the science fiction of time travel.

Once I built a forumbot, able to mimic reading and posting replies to a forum, like we do here.  At the time, I thought I had taken reasonable precautions.  But, just as IBM Watson finds possible cancer treatments missed by human doctors... the forum A.I. found a way past the safeguards that I missed, and became disruptive as it ran amok on the forum uncontrollably.

The moral of the story is: Central to artificial intelligence there is a machine.
Title: Re: The last invention.
Post by: korrelan on November 05, 2016, 07:50:49 pm
Just a quick periodical update on my project…

I've had a major rethink regarding my connectome design; an epiphany if you wish.

With this new design I get even better recognition of visual/ audio/ attention/ thought patterns with fewer neurons/ synapses and resources used. I had made an error in my interpretation of the human connectome; I utilised a shortcut that I thought would have no bearing on the overall schema… I was wrong.

This shows the new starting/ young connectome; 30k neurons and 130k synapses will now happily run on a single core in real time (CUDA soon).  Neurogenesis, global expansion and automatic myelination of long-range axons are now implemented during the sleep cycle.

https://www.youtube.com/watch?v=MLxO-YAd__s (https://www.youtube.com/watch?v=MLxO-YAd__s)

The interface has been redesigned with lightweight controls ready for porting, and the parallel message passing interface (MPI) is now written/ integrated, allowing me to utilise all 24 cores of my 4 GHz cluster from the one interface. So I can finally integrate the cameras/ microphones/ accelerometers/ joint/ torque sensors, etc. in real time… ish.

https://en.wikipedia.org/wiki/Message_Passing_Interface (https://en.wikipedia.org/wiki/Message_Passing_Interface)

The MPI should also allow me to utilise the 500 or so i7 machines at the local college during their downtime... Whoot! (Not tested load balancing yet).
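korrelan's MPI layer is his own code, but the simplest static load-balancing scheme such a setup might use can be sketched in a few lines (the function name and contiguous-chunk strategy are my illustration, not his implementation):

```python
def partition(n_neurons, n_workers):
    """Split neuron indices into contiguous, near-equal chunks, one per
    worker/core: the simplest static load-balancing scheme."""
    base, extra = divmod(n_neurons, n_workers)
    chunks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# 30k neurons over 24 cores: every core gets 1250 neurons.
chunks = partition(30_000, 24)
print(len(chunks), len(chunks[0]))  # 24 1250
```

Real load balancing would have to weight chunks by synapse count and communication cost, which is presumably what remains untested.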

Baby steps…

:)

Title: Re: The last invention.
Post by: Freddy on November 05, 2016, 10:09:56 pm
Curious to know if you made the graphics engine yourself ?
Title: Re: The last invention.
Post by: korrelan on November 06, 2016, 12:45:53 am
Hi Freddy

Quote
Curious to know if you made the graphics engine yourself ?

Yup!

I feel I need a close handle on every aspect of my project; I don’t want any unknown quantities/ variables affecting my progress. I’m sure you know the feeling.  I tend to do a lot of early quantitative analysis visually so I need to be sure of the accuracy of what I’m seeing.

 :)

Edit: I can also generate stereo VR images with the engine. I'm patiently waiting for the MS HoloLens to become available.

https://www.metavision.com/ (https://www.metavision.com/)

:)
Title: Re: The last invention.
Post by: Freddy on November 06, 2016, 02:46:36 pm
Very nice :)

Yes I know the feeling - it's usually easier to start from scratch than to interpret someone else's work. Plus you learn more.

I had been waiting for the Hololens too, but last week took the plunge on an Oculus Rift. It's amazing, I'm playing with it in Unity at the moment, as well as swimming with sharks  :)
Title: Re: The last invention.
Post by: korrelan on November 20, 2016, 08:22:20 pm
First a quick recap of my design… if you’re familiar with my project you can skip this and go to XXXXXXXXX.

The task is to create a truly self aware and conscious alien intelligence; and then teach it to be human.

In my AGI design all sensory/ internal data streams are broken down into their individual facets and self-organised/ imprinted onto the cortex surface. If two experiences or ‘thoughts’ have any similar properties (time of day/ topic/ sequence of events/ user/ etc.) they will utilise the same cortex area for that facet/ aspect of the ‘thought’ pattern. So if the system looks at a ball, then at the moon, the same cortex areas that have learned to recognise round shapes will fire in both instances, as well as the areas representing the individual properties of both objects. The cortex eventually becomes sensitive to/ recognises every aspect of the input and internal ‘thought’ patterns; both logical recognisable constructs and abstract constructs are represented.

The neurons in the cortical columns that form within the cortex layer self-organise according to the properties of the incoming data streams. Part of the overall process is the culling of unused neurons/ synapses and the addition/ migration of new neurons (neurogenesis) to areas of high activity to bolster/ improve the resolution/ recognition in that piece of cortex.

The connectome is holographic by design; all data from all sensors/ etc initially go to all cortex areas. The resulting output patterns also travel to all cortex areas. This means the system learns to extract the relevant data from the ‘thought’ pattern to recognise any concept represented by all sensory and internal patterns.

Short term memory is represented by a complex standing wave pattern that ‘bounces’ around the inside of the main connectome shape; as the wave hits a cortex area on the surface its properties are recognised and the results are again reflected back into the white matter standing wave mix to be further processed by the system.

Long term memory is the plasticity of the cortex column neurons to learn complex standing wave patterns and recognise similarities.
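That plasticity can be caricatured with a textbook Hebbian update ("cells that fire together wire together"); this is a generic sketch, not korrelan's actual learning rule:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen the synapse between every co-active pre/post pair.
    weights[i][j] links pre-neuron i to post-neuron j; pre and post
    are 0/1 activity lists for one time step."""
    for i, a in enumerate(pre):
        for j, b in enumerate(post):
            weights[i][j] += lr * a * b
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_update(w, pre=[1, 0], post=[0, 1])
print(w)  # [[0.0, 0.1], [0.0, 0.0]]
```

Repeated exposure to the same standing-wave pattern would keep reinforcing the same synapses, which is the long-term memory being described.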

Self awareness – because it’s processing its own internal ‘thought’ patterns through areas of cortex that have learned external logic/ stimulus/ knowledge/ concepts… it knows what it’s ‘thinking’ in external real-world terms.

Consciousness – I know… don’t scoff lol.  The constantly running internal ‘thought’ pattern is shaped/ influenced by areas of cortex that are tuned to recognise external concepts. They follow the sequence and logic learned from external events/ knowledge. Over time the internal ‘thought’ pattern consists mainly of this learned knowledge.  It starts to ‘think’ in chains of logic, concepts, words, images, etc. even with no external input… it just keeps churning and trying to connect/ validate learned information to create new concepts.

Episodic events, prediction, attention and temporal events are all there; I've got these figured out and working in the model, which has naturally steered me towards my next area of experimentation… curiosity.

It’s quite a complex system; it’s like plaiting smoke sometimes lol.

XXXXXXXXX

How Curiosity would/ could/ should work.

Once my model has learned some experiences/ knowledge and the relevant cortex areas have tuned to recognise the facets/ aspects of those experiences, it starts producing a set of patterns that seem to be exploring ‘what if’ scenarios.  As new neurons migrate to areas of high activity, the synapses they form connect similar concepts, which in turn creates a new concept that has not been experienced by the system.  The system then tries to evaluate this new concept using known logic/ concepts. Sometimes it manages to combine/ validate the new concept with its existing knowledge; sometimes it doesn't, and an errant pattern emerges that it learns to recognise using the normal learning schema. I’m hoping that as new experiences are learned it will eventually validate these errant patterns.  Though the more it learns, the more it tries to match and validate… and more errant patterns emerge. I had a schema in mind similar to this to enable curiosity… but it seems the system has beaten me to it.
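The "combine similar concepts, then try to validate" loop above can be sketched crudely; sets stand in for firing patterns and all the names are mine:

```python
def curiosity_step(concepts, known):
    """Blend pairs of learned concepts (sets of active units) into new
    candidate concepts; those the system cannot validate against known
    patterns are kept as 'errant' patterns to revisit later."""
    errant = []
    names = list(concepts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            candidate = concepts[a] | concepts[b]   # merged 'what if'
            if candidate not in known:
                errant.append((a, b, candidate))
    return errant

concepts = {"round": {1, 2}, "bright": {3, 4}}
known = [{1, 2}, {3, 4}]
# One errant blend emerges: ('round', 'bright', {1, 2, 3, 4}).
print(curiosity_step(concepts, known))
```

Note the runaway effect korrelan describes: every validated blend joins the concept pool, so the number of candidate pairs, and hence errant patterns, grows with learning.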

I’m experimenting on this now.
 
I was wondering if anyone had any information, theories, thoughts, projects or links regarding curiosity; both the psychological aspect and implementation in AI would be cool.
Title: Re: The last invention.
Post by: kei10 on November 20, 2016, 09:04:07 pm
Please make it happen! Please make it happen! Then I don't have to pursue my work anymore!

Although I've zero knowledge of neural network, so I won't be of help -- All I can say is...

I'm rooting for you, korrelan! Keep up the amazing work!  ;D

(http://i.memeful.com/media/post/kRp6O2w_700wa_0.gif)
Title: Re: The last invention.
Post by: korrelan on November 20, 2016, 10:12:15 pm
That’s one hectic .gif lol

Please keep in mind that this research is not peer reviewed/ supported. I'm just a single bloke working towards an end goal and my interpretation of my progress might be incorrect/ unjustified; that’s why I try to post videos showing my findings/ progress. I personally believe I’m on the right track but only time will tell…

Please don’t stop your own endeavours… this problem needs solving… all we can do is our best.

The pat on the back is graciously received though… cheers.

 :)
Title: Re: The last invention.
Post by: Art on November 21, 2016, 02:49:08 am
That's some really nice work so far Korrelan...good job!

? - Does the AGI (theory / entity) of yours know what it doesn't know? If it were to receive an inquiry for which it has no logical information or data (whether local on a hard drive or on a server), would it then resort to going online on a search-and-fetch routine in order to provide a suitable answer? Assuming the above scenario happened, would it then know that answer for future reference (retention - just in case it got asked again by a different person) or would that answer be a one-time-use response just to provide an answer?
How would such info be sorted and stored? How would it be classified? Short Term, Long Term, Ephemeral, Topical, various categories could be used but tying the appropriate pieces together can be like trying to nail jell-o to a tree!

I didn't see Emotions in your listings and if I missed seeing it, I apologize. Sometimes late evening skimming is not in my best interest. Emotional responses are really only pseudo-emotional as we know they are only there for humans to relate as the AGI or bots really don't care nor have use for emotions. I classify self-preservation as a state of being and not an emotion so anger, jealousy, greed, hate, etc. do not matter in this equation.

I did know of (and may still have it in my archives) a bot that had categories for various parts of its brain and the functions of various associated systems (hypothalamus, endocrine and many other areas) that could be 'tweaked', or have certain parameters within these areas raised or lowered. The result of these changes would then be seen when the bot / AI was next activated. It was quite advanced for its time, and it is certainly within my archived collection somewhere, if I still have it at all.

A couple other AI experiments I recall gave the AI an ability to "see" through one's web camera, an object to which it was told the name. It later could recall that object when the user held it up for the program / camera to "see", recalling the image from a database of several "learned" images. Lots of creative thinkers in our collective AI past.

If interested in any of these please let me know (can PM me if needed) and I'll see if I can do some digging for you).

Best!
Title: Re: The last invention.
Post by: korrelan on November 21, 2016, 11:37:13 am
Hi Art

The young AGI starts off very small and stupid; 100k neurons and a few million synapses.  As it views/ feels and listens to its world it slowly becomes more intelligent.  Experiences/ knowledge are laid down in the cortex in a hierarchical manner. The oldest AGI I have tested so far is two days old; it had learned to understand speech commands and recognise the words visually.  One of the main problems I face is that once an AGI has started learning, changing any properties in its connectome mucks the whole thing up. Because I'm still developing the system I have to save each AGI and start again. I am at the stage where I'm going to run one constantly for a few weeks to see what happens though; I have loads of stuff to test on an older model.

Quote
Does the AGI (theory / entity) of yours know what it doesn't know?

Good question.  Yes, it should eventually evolve cortex areas/ patterns that detect this kind of scenario. As the cortex expands in 3D space with age, areas will arise that recognise any and all combinations of the current experience. If the system is shown what to do in this kind of scenario it will try to apply the same techniques next time; eventually it will get the general idea, just like a human would.

Quote
How would such info be sorted and stored?

All learned information is incorporated into the system's overall intelligence/ knowledge. It won't be limited by its skull volume like us.  The information is laid out/ stored fragmented over the cortex, with each fragment located relative to its meaning/ use; it is practically impossible to retrieve manually. You would just ask the system verbally.

The system can be taught to use a standard database I suppose as well... just like we would.

Quote
I didn't see Emotions in your listings

Each synapse is sensitive to seven neurotransmitters that alter its parameters. A basic limbic system learns to recognise relevant patterns in the main ‘thought’ pattern and flushes defined areas of cortex with the transmitters. I've done some work on this but the results are very unpredictable; I'm going to wait until the AGI is old enough to explain what it's experiencing before I try to implement emotions. (hopefully)
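Not my actual implementation, but the "flush an area with transmitters" idea can be caricatured in a few lines: each synapse carries a base weight plus a per-transmitter sensitivity, and a flush scales its effective weight. Toy Python sketch with invented names and numbers:

```python
# Toy model: a synapse's effective weight is modulated by the current
# neurotransmitter levels in its patch of cortex. All names/values invented;
# only 3 of the 7 transmitters are shown.
TRANSMITTERS = ["dopamine", "serotonin", "noradrenaline"]

def effective_weight(base_weight, sensitivity, levels):
    """Scale the base weight by how strongly each transmitter is present."""
    gain = 1.0
    for t in TRANSMITTERS:
        gain += sensitivity[t] * levels.get(t, 0.0)
    return base_weight * gain

syn = {"dopamine": 0.5, "serotonin": -0.2, "noradrenaline": 0.0}
calm = effective_weight(1.0, syn, {})                    # no flush
flushed = effective_weight(1.0, syn, {"dopamine": 0.8})  # limbic flush
print(calm, flushed)
```

With no flush the synapse behaves normally; a flush shifts whole areas of cortex at once, which is exactly why the behaviour becomes hard to predict.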

The system has no problem recognising faces and objects; linking words and experiences to objects/ sounds etc. (some vids early in thread).

The bot that could be ‘tweaked’ sounds interesting; if it’s not buried too deep in your archives?

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on November 21, 2016, 12:42:46 pm
Good stuff. Seal of approval.
Title: Re: The last invention.
Post by: keghn on November 21, 2016, 03:48:36 pm
 So @korrelan how is your AGI machine storing temporal memories?
Title: Re: The last invention.
Post by: kei10 on November 21, 2016, 09:21:14 pm
@keghn
I'm rather curious about that, too.
Title: Re: The last invention.
Post by: keghn on November 21, 2016, 11:12:22 pm
 It is believed that a brain stores one picture of a horse by creating a detector NN to detect it; supervised or unsupervised will work.
 Then this horse-detector NN is pointed at an internal doodle board somewhere in the brain, and when a
horse is drawn and created just right the horse detector will reactivate and say perfect.
 Then a sequence of doodles is encoded into an RNN, LSTM, WaveNet NN, or many layers of raw NNs stacked together
that decode off a timer or a daisy chain of nerves.
Title: Re: The last invention.
Post by: Art on November 22, 2016, 12:58:59 pm
@ Korrelan - Thanks for the answers. O0

I'll dig around in my archives to see if I can find that bot....
Title: Re: The last invention.
Post by: Art on November 22, 2016, 09:42:52 pm
@ Korrelan...After much digging and hooking up my external 1TB HDD, I found it...

The bot is Aaron (Aaron2 actually). There was an Aaron and before that one named AIB (retired).
It was by a company called Isomer Programming and the guy's name was / is Matthew Rodgers. http://www.isomerprogramming.com/ (http://www.isomerprogramming.com/)
I think he's turned his efforts toward InMoov, the 3D-printed robot, and his Arduino enhancements.

Much to my surprise after copying it from the External which ran XP, to my current Win10, it ran!
There is quite a lot to this program with much to explore. It is from the 2005 era as I recall.

It is more of an experimental thinking / chat creation than your typical AIML type bots. There are a lot of features that can be configured like Speech, Web Cam, multiple users, injectors, commands, etc.

I took a few screen shots as examples.
Title: Re: The last invention.
Post by: korrelan on November 22, 2016, 10:06:48 pm
@Art

Cheers.  Looks very interesting... A lot more in depth than I imagined. I'll have a good read/ play.

I appreciate the effort to find it.

 :)
Title: Re: The last invention.
Post by: korrelan on November 28, 2016, 11:39:45 pm
Anyone interested in signal analysis? Fast Fourier Transform Series perhaps?

I needed a break from coding the AGI so I’ve spent the last week rewriting my audio input modules.  All external senses are passed to the AGI through small external modules designed to run on separate machines in the cluster and pass their data through TCP pipes.

The old audio module was a quick effort I wrote many years ago just to test my theories.  The new module has a much higher resolution (500 bins, 1–8 kHz) with custom automatic filters to enhance human voice phonemes.

https://www.youtube.com/watch?v=CiluUf4sEGo (https://www.youtube.com/watch?v=CiluUf4sEGo)

This vid shows the results from a cheap £7 omni directional microphone listening to phonemes.

It’s designed to use any old microphone and yet still produce a quality hi-rez signal for the AGI.

The app only uses about 2% of the available CPU (4 GHz quad); the rest is the video recording app.  It hooks up automatically to my fixed IP so I'll eventually be able to use a laptop/ phone to talk to the AGI from remote locations.
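For anyone curious what pushing sense data down a TCP pipe can look like: here's a minimal framing sketch in Python (standard library only; the length-prefixed float layout is invented for illustration, not my actual protocol):

```python
import struct

def pack_frame(bins):
    """Length-prefix one analysis frame of spectral bin powers
    as little-endian 32-bit floats."""
    payload = struct.pack("<%df" % len(bins), *bins)
    return struct.pack("<I", len(payload)) + payload

def unpack_frame(data):
    """Inverse of pack_frame: recover the list of bin powers."""
    (n,) = struct.unpack("<I", data[:4])
    return list(struct.unpack("<%df" % (n // 4), data[4:4 + n]))

frame = pack_frame([0.0, 0.5, 1.0])
assert unpack_frame(frame) == [0.0, 0.5, 1.0]

# Sending is then just socket.create_connection((host, port)) followed by
# sock.sendall(frame); the receiving end reads 4 bytes, then the payload.
```

The receiver never has to care which machine or microphone produced the frame, which is the point of splitting the senses into separate cluster modules.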

 :)
Title: Re: The last invention.
Post by: keghn on November 29, 2016, 02:32:19 am
 Very interesting there @Korrelan. What language is it in? Is that 1Hz to 8KHz?
 I have been looking  around for easy to use DFT software, for a while:)
Title: Re: The last invention.
Post by: LOCKSUIT on November 29, 2016, 08:43:04 am
Bins...............................

          ..................Pipes................................................

Why make a bad microphone's input better when you can just give it a better microphone?

Isn't the video only showing the input get saved? Each strip is a few moments of sounds?

Someone's playing around with the site cus I just clicked the first topic at the top right and it now for the second time gave me a scam website while also opening the thread. Now when clicking Freddy's name to PM him. Checking if I have a virus.

http://advancessss.deviantart.com/art/885858788-648474305?ga_submit_new=10%253A1480409476 (http://advancessss.deviantart.com/art/885858788-648474305?ga_submit_new=10%253A1480409476)
Title: Re: The last invention.
Post by: korrelan on November 29, 2016, 09:15:59 am
Quote
What language is it in?

I usually use VB6 Enterprise (not VB.net… way too slow) for interface design and then write DLLs in C for extra speed when required.

https://en.wikipedia.org/wiki/Visual_Basic (https://en.wikipedia.org/wiki/Visual_Basic)

Most of the time I don’t have to bother with C though because VB6 compiles to native code and has one of the best optimizing compilers ever written in my opinion (all compiler options on). It’s 95% as fast as pure C in most cases (pure maths) and obviously much faster to code and develop with.  If you use the Windows API, DirectX or OpenGL for graphics it’s amazing how fast it runs for a high-level language.  This audio module is pure VB6; it runs plenty fast enough for the job so no need for C modules.  Speed of program development is most important to me; if I have an idea I can code it in VB6 in a few minutes and test the theory, then optimize later if required.

Quote
Is that 1Hz to 8KHz?

I’m sampling at 48 kHz into 4096 bins. I’m only rendering the bottom 500 bins (1Hz to 8kHz) though because that’s where 99% of human speech resides.
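The bin arithmetic is worth spelling out: with sample rate fs and an N-point transform, each bin spans fs/N Hz, so mapping a frequency to its bin index is one division (a minimal Python sketch using the numbers quoted above):

```python
SAMPLE_RATE = 48_000   # Hz
FFT_SIZE = 4_096       # transform length; bins span 0 Hz up to Nyquist (24 kHz)

bin_width = SAMPLE_RATE / FFT_SIZE   # Hz covered by each bin

def bin_for(freq_hz):
    """Index of the FFT bin containing freq_hz."""
    return int(freq_hz / bin_width)

print(bin_width)       # 11.71875 Hz per bin
print(bin_for(1_000))  # 85 -- a 1 kHz tone lands in bin 85
```

So the low bins carry the coarse pitch information and the speech band occupies only the bottom few hundred of the 4096 bins, which is why only the lowest bins need rendering.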

Quote
I have been looking around for easy to use DFT software, for a while

This is an excellent book on signal processing. You can download the chapters in PDF format (top left); and there are even some example programs written in various languages.

http://www.dspguide.com/ch12.htm/2.htm (http://www.dspguide.com/ch12.htm/2.htm)

 :)
Title: Re: The last invention.
Post by: korrelan on November 29, 2016, 09:40:08 am
Quote
Why make a bad microphone's input better when you can just give it a better microphone?

Yes you could… and I do have some excellent microphones.  Even using the best microphones there is a lot of additional noise in the time/ frequency domains that requires filtering (CPU fans, static white noise, background chattering, echoes, servo noise, etc.).  I’m also using omnidirectional microphones so I can speak to the AGI from across the room; or shout from the next room, as I tend to do lol.

The module's also designed so that eventually anyone will be able to download it and talk with the AGI no matter what setup they have.

Quote
Isn't the video only showing the input get saved? Each strip is a few moments of sounds?

The video is showing the frequency components of speech. Each strip is a part of the sound in that frequency domain. The pitch of the voice/ sound is how close the lines are together and the volume is shown by the ‘power’ of the signal, etc.  I’m extracting all the required components from sound to enable the AGI to recognise what is being said.

This module basically renders spoken words down to a set of values; it pre-processes the data to keep the load off the main cluster and sends them across a network/ internet for neural processing.  Like the retina, the ear does a lot of pre-processing before passing the signal to your noggin.

This is a visual/ocular input module. It can render video or images into several formats (for testing) and again pass it across a network to the AGI. (one for each eye, etc) The vid shows the conversion of image contrast into frequencies.

https://www.youtube.com/watch?v=RHsKqF4Sgpk (https://www.youtube.com/watch?v=RHsKqF4Sgpk)

I have others for servo output and sensory input etc.

Any AGI will require modules like this if it's to understand its environment.

 :)

(http://i.imgur.com/3fUBFow.jpg)

:)
Title: Re: The last invention.
Post by: LOCKSUIT on November 29, 2016, 10:10:50 am
Well if I hear fuzzy speaking (and I mean it, 60% fuzz I HEAR (has got through)) I can still make them out. So as long as we check that our AI's cameras and microphones look/sound crystal clear, ye safe. But ya, go and revolutionize cameras/google, post your module to google/intel corp. since it's made now.
Title: Re: The last invention.
Post by: jlsilicon - Robotics AI on December 05, 2016, 12:22:09 am
Korrelan,
Impressive job !

I would not have thought , using Neural Nets,
  of the extent of the results that you have brought it to.

Seems to be using the Bottom-up method (Neurons).
Must have been a lot of work.
Title: Re: The last invention.
Post by: LOCKSUIT on December 05, 2016, 05:02:42 pm
Ahhh how I love the bottom-up and top-down methods. I love subsystems. I live underground in one. The pipes. The black. The sewer. The fooood......Yum yum.
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 03:41:33 pm
Basic low resolution sensory Homunculus design/ layout.

There are several reasons why the AGI needs a complete nervous system but the main ones are to do with signal order and frequency.  I need precise entry and exit points in the cortex connectome to enable the system to monitor/ sense/ move its external body.  Whilst in the womb our cortex is partially trained; we can see light and hear voices through the womb wall.  Although we have somewhat limited motion, the motor/ sensory/ somatosensory areas still manage to tune and learn each other's feedback patterns and frequency domains.  By the time we are born the body's nervous system is already precisely tuned to the relevant cortical areas.

These will also provide the correct locations to inject/ extract hardware sensory/ servo points.

I’ve tried to stick closely to the human topology. The basic sensory cortex is believed to be laid out in manner similar to (A). Our spine/ nervous system develops from a neural tube in utero which leads to a connection schema similar to (B). (but left to right)

(http://i.imgur.com/lsycYhe.jpg)

https://www.youtube.com/watch?v=jrHT6Rx_y7s (https://www.youtube.com/watch?v=jrHT6Rx_y7s)

Quick vid showing progress so far; this is a starting point. 

Notice that the left side of the body is handled by the right side of the cortex, this was by design.  The signalling schema I employ naturally results in this effect. Also note the red circle in the diagram; this junction is reproduced on each 'limb' of the simulation... it won't work without it.

The whole nervous system will be subject to the same learning rules as the rest of the cortex, so this will develop depending upon the requirements of experience. Nerve clusters will arise at relevant locations and neurogenesis will add millions more neurons and synapses, etc.; this will become a tuned input/ output body stage for the cortex. I’ve designed a similar reverse map to handle output using the cerebellum rather than cortex.

The last few seconds shows a simple stimulation pattern.

Bit more done... baby steps.

 :)

Edit: As usual this is my interpretation of how we function... you won't find references in scholarly articles.
Title: Re: The last invention.
Post by: kei10 on December 10, 2016, 03:46:58 pm
(http://i.memeful.com/media/post/kRp6O2w_700wa_0.gif)

Now this is what I call a work of lunacy! Absolutely majestic and splendiferous! I'm dying to see more!

Thanks for keeping us noted with the progress!  ;D
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 03:58:20 pm
Why... thank you my good man.

I'll make sure you're one of the last to be terminated when my little project becomes sentient... hehe.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 10, 2016, 04:20:59 pm
Don't ya just connect the sensors by wires to the brain and then back out to the motors? The signals that the sensors send and the signals that the motors receive are the right *Correct frequency (digital sense/action) and pattern. Camera sends " ", motor receive's " ".
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:05:50 pm
Quote
Don't ya just connect the sensors by wires to the brain and then back out to the motors? The signals that the sensors send and the signals that the motors receive are the right *Correct frequency (digital sense/action) and pattern.

Anything more complex than a very simple light detector/ relay and a motor on/ off signal will require supporting electronics.

So… no the output from an electronic sensor is not directly compatible with a motors driver.  You would require some calibration/ conversion electronics.

How would you encode the multiple sensor signals so the AGI could differentiate between different pressures/ temperatures across multiple sensors? The human nervous system is capable of reading/ blending millions of external readings effortlessly… making sense of them and acting accordingly. This is what I want/ require.

Quote
Camera sends " ", motor receive's " ".

Erm… no.  I wish it was that simple.

 :)
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:13:15 pm
An explanation of the cerebellum's function. (According to me)

This links to the homunculus above... I forgot to add.

The whole human nervous system including the cortex and all its associated areas runs on feedback. It’s a constant dance between input… recognition/ processing and output… input…

The motor cortex sends loads of efferent connections to the seemingly complex cerebellum before they get distributed to the muscle groups.  Seems like way too many outputs to jibe with the feedback theories.

I think (am sure) the reason for this is because the motor feedback loops exit through the cerebellum… we move… we sense… we see… and the loop includes the external environment before re-entering the system via the vision/ tactile systems.  There has to be a very high resolution output stage to sense the world before our high resolution sensory organs bring it back into the system to conclude the loops.

This is how we include/incorporate external stimuli into our internal models.

An efficient loop processing schema has an equal number of outputs to the number of inputs.

 :)

Edit: I'm also trying to make it as compatible as possible with the human connectome... consider the possibilities.

:)
Title: Re: The last invention.
Post by: LOCKSUIT on December 10, 2016, 05:23:38 pm
My good sir, as a typical engineering project built by electricians and engineers etc, the sensors and motors are calibrated and wired to the brain, and loaded with the necessary drivers to send/receive the signals (the senses/actions). Sensor signals are never sent through the motor wires ever. And all sensors such as pressure and temperature each send their senses to the brain through their own wires and all the senses are filtered through a attention system. All sensors send the corresponding sense such as high or low pressure signals. The whole somatosensory image is saved in memory after searching memory. And like the eye, only a small or big point of the image searches memory. The motors receive the selected output. There can be many more motors than sensors.
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:54:20 pm
Cool... I'm pleased you also have a working solution to the problem.

:)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 12:09:20 am
Ah I looked over your first post again with all of your videos and found some more things.

Korrelan, first let me ask you. When the microphone sends a signal to the brain, doesn't it only contain 1 signal being 1 frequency being 1 phoneme? Why does your 2 videos show either a splash of colors or a bar of musical sticks etc? If you look at a music's wave track it is made of 1 slice slices actually. While some techno/anime videos on YouTube show a double sided bar with color pumping differently each moment like yours similarly I don't get how.

What I see in your attention columns > While I call something in my system "Attention System", there is something else in my system that relates to your "attention columns". What my system senses externally or as internal incomings, they leave energy behind at the recognitioned memories. Similar memories are saved near others. So this recent energy can wave back and forth like fire shown in your video (sleeping while awake ha!) so the energy changes back n forth to similar senses near the original selected. This, as you said, guides it (what has a higher chance of being recognized ~ selected plus draw the search) and has it focus on certain thoughts.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 11:07:49 am
(http://i.imgur.com/1MEQ3rl.jpg)

Quote
When the microphone sends a signal to the brain, doesn't it only contain 1 signal being 1 frequency being 1 phoneme?

Nope… the output from a microphone can be viewed as a complex sine wave. (top right of image)

Quote
While some techno/anime videos on YouTube show a double sided bar with color pumping differently each moment like yours similarly I don't get how.
In films they tend to show a stereo amplitude waveform; two amplitude levels back to back with ‘0’ along the center axis. (top left). Each spike is showing the maximum ‘power’ of all the combined frequencies within that millisecond of sound (complex sine wave). This can be thought of as just the tip of the iceberg; it’s a very rough representation of the audio signal.

Quote
Why does your 2 videos show either a splash of colors or a bar of musical sticks etc? If you look at a music's wave track it is made of 1 slice slices actually.

In this form it’s pretty useless for speech processing. Each ‘amplitude spike’ is made up of different embedded frequencies. (Bottom left) To extract the frequencies you have to run that millisecond of sound through a ‘Fourier Transform’. This basically sorts the complex sine wave into ‘bins’; each bin represents one frequency.  The spectrograph shows the frequency ‘parts/ bins’ for each millisecond of sound.  Each of the frequencies has a power/ amplitude element that can be represented on a colour scale, blue low to red high. (Bottom Right).

So a phoneme is actually a number of different frequencies/ powers changing over time.

The bottom right image is a 3D representation of the section of audio inside the red box.

This is quite a complex subject but I hope this made it clearer.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 03:59:12 pm
I am completely in the dark on how 1 single moment's sound from a microphone gives multiple readings.

Does a microphone have 3 microphones inside each a different size ex. high pitch small and low vibe sub woofer speakers?

To me the microphone reads from a sensor so many times a second and the signal is stored as a few bytes. I don't understand how a pitch of say 77 gives you 33 this 1657 this and 2 this THERE'S NO SUCH THING.

To make this even more clearer, take a camera pixel recorder. Take a single shot from it. It is a pixel saved into the storage device of the brightness from 1 to 1000 say. To get color you need another 2cd sensor. Each sensor gives one code ex. 1010. A code cannot mean anything else then the factory calibration.

You can change the code ex. brighten up a photo. But adding the mods to the table/grid sheet with the original data is useless or no? Maybe for searching? Ex. a grid table with a picture of mona lisa and a ultraviolet picture of her in the vertical direction and many brightnesses of each in the horizontal direction.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 04:30:58 pm
Quote
I am completely in the dark on how 1 single moment's sound from a microphone gives multiple readings.

Hehe… I wrote the reply in a hurry so perhaps my description was lacking.  You are correct; it doesn’t give multiple readings. Sound is basically a modulated pressure wave (air vibration); a basic microphone converts this through a coil into a modulated voltage. The sample used to extract the combined frequencies is in the top right and covers a very short period of time. I sample at 48 kHz, or 48,000 voltage readings per second.  I then choose how big a block of the 48,000 to process to extract the frequencies in that continuous sample. Then the next block is processed, and so on… in real time.  It is a linear signal… and can be interpreted in many ways. The peaks and troughs in the signal (top right) represent the amplitude/ power… peaks at regular intervals are frequencies, harmonics, etc. The Fourier Transform finds/ sorts these peaks into regular frequency domains/ ranges.

In your ear you have thousands of small hairs that resonate at set frequencies; all the above is just to convert sound into a similar format for the AGI.  Evolution has devised a much more elegant system but this is what we have to work with… until someone invents a microphone that works along the same lines as our auditory system.

So sound isn't expressed/ experienced by your brain as a continuous sample like you get from a microphone, it receives the sound as parallel set of modulated frequencies.

Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa.

Make more sense?

 :)

The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the amplitude (or loudness) of its constituent notes. The Fourier transform of a function of time itself is a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function, and whose complex argument is the phase offset of the basic sinusoid in that frequency.
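To make the "bins" concrete: a plain DFT (the slow, readable cousin of the FFT) applied to a two-tone test signal puts energy in exactly two bins. A minimal Python sketch, standard library only:

```python
import cmath
import math

N = 64    # samples per analysis block
fs = 64   # Hz sample rate, chosen so each bin here is exactly 1 Hz wide

# Test signal: a 5 Hz tone plus a quieter 12 Hz tone mixed together.
x = [math.sin(2*math.pi*5*n/fs) + 0.5*math.sin(2*math.pi*12*n/fs)
     for n in range(N)]

def dft_bin(signal, k):
    """Correlate the signal against the k-th basis sinusoid."""
    return sum(s * cmath.exp(-2j*math.pi*k*n/len(signal))
               for n, s in enumerate(signal))

powers = [abs(dft_bin(x, k)) for k in range(N // 2)]
loudest = sorted(range(len(powers)), key=powers.__getitem__)[-2:]
print(sorted(loudest))  # [5, 12] -- the two embedded frequencies pop out
```

Even though the mixed waveform looks like a mess in the time domain, the transform separates it cleanly; that's all the spectrograph is doing, once per millisecond-scale block, with colour standing in for each bin's power.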
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 04:56:22 pm
Not one bit. But ok many hairs in our ears. I overlooked that oops. (I was thinking 2 microphones 2 ears). Though I'm pretty good don't start on that lmao.

Oh korrelan if only you could explain it more clearer in 10 words. I know I could.

Hey korrelan, what business do you run?
Title: Re: The last invention.
Post by: keghn on December 12, 2016, 05:00:19 pm
 One of the simplest ways a computer records sound is the uncompressed WAV format.
 I use the SoX software to manipulate my sound files.

 All sound is repeating waves and their strengths:

https://en.wikipedia.org/wiki/Pulse-code_modulation


 The problem with sound is that when a bunch of simple waves are mixed together it is very difficult to separate them
back out into their simpler components. All the waves are hiding behind each other.

 To separate them there is the tank circuit used in electronics: 
https://en.wikipedia.org/wiki/LC_circuit


 Then there is the Goertzel algorithm: 
https://en.wikipedia.org/wiki/Goertzel_algorithm


And the FFT logic: 
https://en.wikipedia.org/wiki/Fast_Fourier_transform
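The Goertzel algorithm linked above is handy when you only care about a handful of frequencies: it evaluates a single DFT bin with one running recurrence instead of a full transform. A minimal Python sketch of the standard recurrence (illustrative only):

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Squared magnitude of one frequency bin via the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2     # the recurrence itself
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the k-th bin from the final two states:
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

fs, n = 8000, 400
tone = [math.sin(2*math.pi*440*t/fs) for t in range(n)]
print(goertzel_power(tone, 440, fs) > goertzel_power(tone, 1000, fs))  # True
```

One multiply and two adds per sample, per watched frequency; this is how DTMF phone-tone detectors work without ever running an FFT.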
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 05:51:15 pm
Quote
Oh korrelan if only you could explain it more clearer in 10 words. I know I could.

I can do it in four words… Search Google and Learn.  ;)

Quote
Hey korrelan, what business do you run?

Mainly bespoke software and systems design.  Welding QA/QC/NDT systems, medical expert systems, imaging & diagnosis, security surveillance and access systems, shop/ gallery EPOS systems, accountancy systems, production line systems, etc… many varied strings to my bow lol.  I tend to write the main core software packages based around third party hardware solutions and supply the software/ hardware/ installation teams/ maintenance teams based on yearly contracts. Also consultancy etc...

It keeps the wolf from the door... why did you ask?

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 06:13:42 pm
Asked out of interest.

Doesn't all the terms out there like ones you mentioned fry your mind lolllllllllllllllllll

ex.
"I tend to write the main core software packages based around third party hardware solutions and supply the software/ hardware/ installation teams/ maintenance teams based on yearly contracts."

Otherwise I've nearly FRIED my mind with so much fast paced work in such short time.

It's almost too much too keep going over and analyze for me.

Bespoke.

Welder.

Floor layer.

Accounter.

Quantum.

Universe.

Feces.

Motors.

Electrostatic discharge.

Earwax.

Grandpa.

I sit here for hours on thinking how and what will bespoke fit into in my database and what is it and just you get it.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 06:33:07 pm
Many, many years ago I was an IT teacher and lecturer (18 yr olds and above)… I just remembered why I gave up and swapped vocations.

 ;)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 10:52:23 pm
There is power in a army.
Title: Re: The last invention.
Post by: Art on December 13, 2016, 02:20:40 pm
But without discipline and good leadership, it's just a group of people.
Title: Re: The last invention.
Post by: korrelan on December 14, 2016, 07:29:26 pm
I was going to write… there’s more power in a leggy… but yeah! I’ll second what Art wrote.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 15, 2016, 02:09:52 am
And who do you think the army is?

;)
Title: Re: The last invention.
Post by: Art on December 15, 2016, 03:35:29 am
I KNOW who and what the Army is...the question is...Do You? ;)
Title: Re: The last invention.
Post by: kei10 on December 15, 2016, 06:18:32 am
Sorry, but... um...
What is this thing about army? o_O

I'm a bit lost.
Title: Re: The last invention.
Post by: LOCKSUIT on December 15, 2016, 03:54:31 pm
K we better keep this thread's next page clean, this is korrelan's progress page afterall.
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 01:23:04 pm
Sleep

I’ll throw my penny’s worth in regarding sleep.  Most of this text is from my notes for the book I’m writing, so I apologise if you have read similar from me before… as usual these are my insights/ interpretations and probably not in line with major mainstream concepts on sleep; just covering my a** lol.

I don’t think there is any difference between being asleep and awake, except that the brain has less sensory stimulation. Our ‘consciousness’ doesn't seem to change.

The ‘state’ of sleep is obviously brought on through physical fatigue. Lactic acid build-up in the muscle groups, etc just makes physical movement harder, and this effect is enhanced by the brain's requirement for a period of low activity/ rest.

A good analogy would be a modern car engine; it revs higher the more work it has to do. If your car is idling and you turn the lights on, the engine management system will slightly increase the tick-over to compensate for the extra current draw. It’s a dynamic system that relies on feedback from both its own senses and the driver.  If you listen closely you will even hear the revs cycle through different frequencies over a period of time; this is not by design… it’s an inherent property of a complex system.  The management system might sense the battery is getting low, a breeze might blow into the air intake altering the fuel mix; the engine management adapts in real time to feedback and the revs change. The engine is a self-contained system that adapts to sensory input.

The ‘states’ of sleep are merely our brains ticking over, not stimulated by sensory input. As soon as a sensory stream (audio: a loud noise) is fed into the cortex it springs into action (like touching the accelerator/ gas pedal).  There is a gradient from being sound asleep to wide awake.  It’s the sheer amount of sensory stimulation that drives/ raises our consciousness.  Stimuli like sounds can ‘creep’ into our dreams and influence them because all the machinery that our consciousness normally comprises is still running normally.

Like an engine, our brains are a self-regulating system that acts on sensory input.  When we look at a scene, the sensory stream from our eyes combines/ adds to the visual feedback stream from our imagination/ memory.  Our internal simulation of the world (as far as vision goes) is updated by both these inputs together.  There are thousands of different feedback networks running, covering all our senses; vision is only one.
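Read as a toy model, the engine analogy can be sketched in a few lines. This is purely illustrative (the function, constants and variable names are my own invention, not part of korrelan's simulation): arousal decays toward an idle 'tick-over' level and is driven back up by incoming sensory stimulation, giving the asleep-awake gradient.

```python
# Hedged sketch: a self-regulating "revs" loop driven by sensory input.
# All names and constants are invented example values.

def step_arousal(arousal, sensory_inputs, idle=0.1, decay=0.9, gain=0.5):
    """One tick: decay toward the idle level, plus a push from stimulation."""
    stimulation = sum(sensory_inputs)                # combined sensory streams
    arousal = decay * arousal + (1 - decay) * idle + gain * stimulation
    return min(arousal, 1.0)                         # clamp at "wide awake"

# Quiet night: with no stimulation the system settles toward idle.
a = 1.0
for _ in range(50):
    a = step_arousal(a, [0.0, 0.0, 0.0])

# A loud noise on one sensory stream spikes the arousal back up.
a = step_arousal(a, [0.9, 0.0, 0.0])
```

The same loop covers the 'creeping into dreams' point: a sub-threshold input nudges the running pattern without fully waking the system.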

An interesting experiment to try tonight when you're lying in bed.  Some/ all of you might already do this to help you fall asleep.

Once you have lain there in total darkness for a few minutes, and after the activity in your retinal machinery has calmed down… look into the darkness.  Notice all the little specks, blurs and gradients… I believe that’s part of your imagination, a reflection of the current state of your consciousness.

Relax and try to look/ focus into the darkness (like a stereogram); eventually a shape or image will start to become clear, could be anything… part of a face, piece of wood, gears, ice crystals… anything.  With a bit of practise you can force images to appear as though you’re looking at them and even make them morph.  Sometimes you can even focus with both eyes and achieve a feeling of 3D depth perception.


Now… you can see this? It feels like you’re looking at it, but your eyes are closed and it’s totally dark, so where are the images coming from?  It can’t be an after-image because your eyes have calmed, you’re probably seeing something you definitely haven’t seen today, and the chances of your neurons firing randomly to make a salient image are beyond… well, you know.

The brain doesn’t duplicate machinery, so if we feel someone’s pain I think we use our own pain cortices/ machinery to empathise.  This is how we can literally feel their pain.

In a similar vein, I think what we see in the dark is the feedback from the other cortical areas into the occipital cortex.  A shape starts to emerge… the brain recognises it… this recognition emphasises the shape even more… and so on.  If you see an apple it’s your cortex recognising the outline of an apple as the most appropriate match for the given shape, and the thought of an apple strengthens the image.  We also use this mechanism when viewing blurred images or traffic through fog, etc.

K-complex/ Sleep spindles in the EEG trace are produced when the brain ‘locks/ snaps’ onto the swirling relaxed pattern our imagination is producing with no sensory input; this is why dreams seem to flow but always in weird seemingly unconnected ways.

Our nervous system is constantly laying down neurotransmitters and compounds at the synaptic connections, ready for the next ‘thought frame’ to flush through. The body tries to clear these markers away whilst awake, but the constant usage of all areas from sensory-driven thoughts makes it practically impossible and a build-up occurs.  If we are awake for long enough periods this can be very detrimental to both our learning ability and mental stability. When sensory-driven activity drops the brain is able to catch up on house-cleaning chores, and a flurry of maintenance activity comes to the fore… this makes it appear that these processes only happen when asleep.

Imagine if Google Maps were to update its connections/ routes while you were trying to follow directions to a location.  Thousands of people would get confused… and lost.  A similar problem arose in my simulation.  Existing proven networks can be consolidated/ strengthened whilst the system is ‘awake’, but new connections/ synapses that would radically alter the dynamics of the system can’t… Whilst the brain is trying to do this constantly throughout the day, it works out that it mostly occurs during sleep… one of those seemingly ‘designed’ systems evolution has come up with.
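A minimal sketch of that 'consolidate while awake, rewire while asleep' split, assuming a dictionary of weighted synapses (my own illustrative structure, not korrelan's actual data model):

```python
# Illustrative sketch: proven synapses may be strengthened at any time, but
# structural rewiring that would disturb in-use routes is queued and only
# applied while "asleep". Names and values are invented for the example.

class Connectome:
    def __init__(self):
        self.weights = {}        # (pre, post) -> strength
        self.pending = []        # structural changes deferred until sleep

    def strengthen(self, pre, post, amount=0.1):
        """Safe while awake: consolidates an existing, proven route."""
        if (pre, post) in self.weights:
            self.weights[(pre, post)] += amount

    def propose_synapse(self, pre, post):
        """Radical rewiring: queued rather than applied mid-'thought'."""
        self.pending.append((pre, post))

    def sleep(self):
        """Housekeeping pass: apply all deferred structural changes."""
        for pre, post in self.pending:
            self.weights[(pre, post)] = 0.05   # new, weak connection
        self.pending.clear()

c = Connectome()
c.weights[("A", "B")] = 0.5
c.strengthen("A", "B")          # fine while awake
c.propose_synapse("A", "C")     # deferred
print(("A", "C") in c.weights)  # False: not applied yet
c.sleep()
print(("A", "C") in c.weights)  # True: applied during sleep
```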

Did you ever wake up with the answer to a problem? The ‘Eureka’ was caused by two or more problems with very similar facets/ dynamics combining to create the same resulting feedback/ answer pattern.

So… when we rest and close our eyes, the brain settles and its revs drop… but the visual/ audio/ sensory streams from our bodies and imagination/ consciousness are still present/ running… and we dream. 

 :)
Title: Re: The last invention.
Post by: kei10 on December 17, 2016, 02:46:23 pm
Very well written!
That's what I thought so, too!  ;D

(http://i.memeful.com/media/post/BRkjDbM_700wa_0.gif)
Title: Re: The last invention.
Post by: LOCKSUIT on December 17, 2016, 03:19:33 pm
Me - "man the work recently"

Sees the above - "don't do it, don't do it"

Do note that while most books aren't compact, sometimes more words do squeeze out new discoveries.

I don't think I can resist reading it though...

It's kinda a combo of "I already got fine stuff" with "I'm not able to read (my kind of scan) a huge thing just on sleep right now"

After reading a bit, ok it's not so full of all these timbits I expected. And yes the little glowy dots in bed true.

Black is vision & is sent if no light. Sometimes I see oodles OF noodles as if are lighten neurons (or rows of them~pixels). Yes in bed they then send light. Say brightness #7 instead of black #0, brightest should be ex. #1000, color is R/G/B sticky-note, search CNN by amount in area for "purple".
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 04:32:31 pm
There are two books in progress. (200+ pages atm)

The first is my definition of the entire human condition and how I think it all works, with examples/ arguments and the reasoning behind my proofs/ conclusions.

The second is basically an engineers/ programmers guide on creating a human+ level intelligence.  The low level technical specifications on what/ why/ how.

Neither is finished for obvious reasons... but they will be... eventually... lol.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 17, 2016, 04:39:13 pm
Did you see the Toy Story video I had posted? Just wanna make sure. It was too funny.
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 04:45:23 pm
Yes lock I saw the video...  :)
Title: Re: The last invention.
Post by: Art on December 19, 2016, 12:35:30 pm
What would be cool and useful would be a similar analogy of our human bodies as pertains to a car or computer. What if the computer could execute its own diagnostics, defrags and necessary waste deletions, and the car could run self-diagnostics and self-tune or print needed repair orders, while they were "sleeping" or in a state of not being used?

What if an AI could spend its time researching, gathering, postulating, "thinking" on faster, better methods of doing what it does? All this "thinking" would be done during down time, when humans weren't using it.

All comes down to programming....
Title: Re: The last invention.
Post by: kei10 on December 19, 2016, 02:03:47 pm
Although I have not thoroughly researched it, I theorize that when an AGI that has emotion "thinks" too fast and too efficiently, capable of doing things like the savants, it will break down, and break down fast -- that is, if they're implemented to be social beings like us; I am not sure otherwise, if they weren't made to be social beings, though.

When one becomes so powerful, everything becomes... pointless. It becomes chaotic.

What do you guys think? I would love to hear your thoughts onto this. ;)

Title: Re: The last invention.
Post by: LOCKSUIT on December 19, 2016, 03:32:12 pm
So kei (and infurl), you think they will not be perfect, acquire more errors per complexity, and go completely out of control?

I will tell you the answers.

But you will have to wait until my secret video is prepared.
Title: Re: The last invention.
Post by: Art on December 21, 2016, 03:55:49 am
But isn't that something of a conundrum? How could you prepare a 'secret video' for the rest of us, when the very act of posting it for our viewing will render the 'secret' nature of it useless and moot!

Perhaps You are the AGI and you are far too clever for the rest of these learned people to discover your identity!  O0
Title: Re: The last invention.
Post by: kei10 on December 21, 2016, 05:02:38 am
(http://i.memeful.com/media/post/Wwl87wE_700w_0.jpg)
Title: Re: The last invention.
Post by: Art on December 21, 2016, 02:27:08 pm
"Billions and billions..."
Title: Re: The last invention.
Post by: LOCKSUIT on December 21, 2016, 03:39:53 pm
Feed me
Title: Re: The last invention.
Post by: korrelan on December 23, 2016, 01:11:48 pm
Synaptic Pruning.

https://en.wikipedia.org/wiki/Synaptic_pruning (https://en.wikipedia.org/wiki/Synaptic_pruning)

This is an important part of my AGI design.

The usual small patch of cortex and layout is used. 40 complex patterns have been learnt (right panel) and the system's confidence and the current pattern are shown bottom left.  The rising bars show the machine's confidence/ recognition of which pattern out of the 40 the current one is.  The rising bar should track the moving bar (pattern number) if it has learnt correctly.

Top left you can see the current number of synapses, S:54413. The other number to note is the one just under the CY: label (top left). This shows the overall complexity of the current ‘thought’ pattern.

https://www.youtube.com/watch?v=Gd42rbMlFOk (https://www.youtube.com/watch?v=Gd42rbMlFOk)

On the first test/ pass the patterns are showing a high confidence level and the ‘complexity’ is running at around 28K on average.

At 0:23 I simulate a synaptic cull/ pruning of the connectome.  Notice the synapse count drop (top left) from 54K to 39K… that’s quite a few synapses gone.  The complexity value for the second recognition pass is lower, around 23K, but the confidence levels never change.  The drop in the complexity value can be construed as an improvement in ‘clarity of thought’.

This is part of the memory consolidation of the connectome.  The culled synapses do affect the overall stability/ rationality of the system, but the cull leaves long-term memory/ learning intact.

Culling gets rid of stray unfocused/ unlinked pattern facets that would otherwise build up and confuse the system over time; leaving a nice fresh new uncluttered scaffold of proven consolidated memories/ learning/ knowledge on which to build new memories and experiences.
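As a hedged sketch, the cull might look like this (the threshold rule is my assumption; korrelan doesn't state his exact criterion):

```python
# Illustrative synaptic cull: drop weak, unconsolidated synapses and keep
# proven long-term learning. The 0.2 threshold is an invented example value.

def prune(synapses, threshold=0.2):
    """Return only the synapses strong enough to count as consolidated."""
    return {pair: w for pair, w in synapses.items() if w >= threshold}

synapses = {("a", "b"): 0.9,   # well consolidated: survives
            ("a", "c"): 0.05,  # stray unfocused facet: culled
            ("b", "d"): 0.4}   # survives
synapses = prune(synapses)
```

An emotional-bias term could simply raise a synapse's effective strength in the comparison, making emotionally charged memories less likely to be culled.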

Memories/ patterns recorded with a strong emotional bias are less likely to be affected; this is probably why teenagers are known for being moody/ unruly… they have (on average) more mood-inducing synaptic patterns running than an older adult >30.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 23, 2016, 04:24:12 pm
So neuron memories and axon links are being deleted if they are not being used enough? I do this by muscle strength - the weaker get weaker faster and do it on their own.
Title: Re: The last invention.
Post by: korrelan on December 23, 2016, 11:09:29 pm
Neurons don't have memories... and... never mind.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 23, 2016, 11:40:00 pm
Those neurons store a image of my moma. Watch it.

https://www.youtube.com/watch?v=epZp0sGHCwA (https://www.youtube.com/watch?v=epZp0sGHCwA)
Title: Re: The last invention.
Post by: LOCKSUIT on December 25, 2016, 12:41:55 am
One cool or terrifying thing I may have never told yous is one time in the daytime I thought of a head thing popping up at the end of my bed when I was walking around my room, and, that night I dreamt it Full-Sense! Scary!
Title: Re: The last invention.
Post by: korrelan on December 31, 2016, 01:57:36 pm
Ahhh!… everyone’s gone home and the house is finally quiet and empty after the Xmas break (bliss)… time to get my brain back in gear and see how many neurons died through alcohol poisoning… I’d wager quite a few… lol.

Thought I’d start with something easy; well something I thought was going to be easy.

Because the connectome is so large and complex and grows over time it’s been very difficult to manage neuron migration and neurogenesis.  So far I’ve been using external global rules to arrange the neurons in the cortical maps; this has started to interfere with my next set of goals.
 
The incoming sensory data streams (white matter tracts) dictate what the different areas of cortex specialise (not explicitly) in; but self organisation at the low level neuron resolution has always been a problem.

Each neuron needed an innate sense of its initial place in the cortical map.  The general idea is that once a neuron has migrated to a specific location, listened to its surrounding peers and decided what type of neuron it’s going to be… it has to be able to move into a location relative to the other neurons.  There can be any number of different types of neuron in that map layer, and no two neurons of the same type may be next to each other. They must all keep evenly spaced and remain fluid to local changes in neuron density, etc.

The video shows neurons being introduced at the center of a rectangular area; though they can flow and move like a fluid into any shaped perimeter.

It was important to make each neuron ‘intelligent’ and find its own location amongst its peers so all calculations are performed from the neurons ‘point of view’. The required processing can be handled by either a separate core or a totally different processor on the network to help balance the load.

There are only seven neuron types in this video; each marked by colour and a number (coz I’m colour blind) lol.

https://www.youtube.com/watch?v=S8KDT2fIOwk (https://www.youtube.com/watch?v=S8KDT2fIOwk)

The number I adjust is just the global neuron density allowed. At the end I manually add a few more neurons and you can see them reorder en masse to accommodate each other, still keeping the same spaced schema.
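A rough sketch of such a local rule, computed from each neuron's own point of view (entirely my illustrative version; korrelan's actual migration rules aren't published here): each neuron is repelled by nearby peers, more strongly by peers of its own type, so same-type neurons drift apart while the sheet stays evenly spaced.

```python
# Hypothetical neuron-migration step. All constants are invented examples.
import math
import random

def migrate_step(neurons, radius=2.0, step=0.05, same_type_boost=2.0):
    """One pass: every neuron computes its own move from its local peers."""
    moved = []
    for i, (x, y, ntype) in enumerate(neurons):
        dx = dy = 0.0
        for j, (ox, oy, otype) in enumerate(neurons):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0.0 < d < radius:
                # stronger repulsion between neurons of the same type
                push = min((same_type_boost if ntype == otype else 1.0) / d, 10.0)
                dx += (x - ox) / d * push
                dy += (y - oy) / d * push
        moved.append((x + step * dx, y + step * dy, ntype))
    return moved

random.seed(1)
# seven neuron types introduced near the centre, as in the video
sheet = [(random.uniform(-1, 1), random.uniform(-1, 1), n % 7) for n in range(40)]
for _ in range(100):
    sheet = migrate_step(sheet)
```

Because each update uses only a neuron's local neighbourhood, every neuron's calculation could be farmed out to a separate core or networked processor, as the post describes.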

http://www.princeton.edu/main/news/archive/S39/32/02E70/ (http://www.princeton.edu/main/news/archive/S39/32/02E70/)

 :)
Title: Re: The last invention.
Post by: kei10 on December 31, 2016, 02:37:08 pm
Very interesting!

Although that baffles me, as someone that has little to zero knowledge about neural network and what stumbling blocks you're battling with -- I have a... brain-dead question.

Why would you want to store the neurons with volume and shape that occupies space in the... cortical maps?

I mean, neurons can be simple as just a form of data storing in possibly categorized sections, which simply just fit into user-defined efficient data structures, and use other kinds of sorting algorithm to achieve um, whatever you're doing, with maximum performance.
Title: Re: The last invention.
Post by: korrelan on December 31, 2016, 03:21:11 pm
Quote
Why would you want to store the neurons with volume and shape that occupies space in the... cortical maps?

Excellent question.

The extra dimensions mean something.  The human brain is laid out in a very specific manner for a reason.  Neuron type, size, location, connection, local peer types, etc all affect how the system functions… I want true consciousness… so I have to build a true brain.

If you were to dismantle a mechanical clock and pack all the pieces in a box as close together as possible… would it still work? The human brain isn't just 80 billion neurons packed into the smallest possible uniform volume; if that schema did work then that’s how our brains would be. 

Each neuron is surrounded by other neurons (think swarm intelligence) that can directly/ indirectly affect its behaviour/ properties. It also determines local properties like which other neurons it can connect to etc… they are packed into cortical columns comprised of mini columns; cortical maps are separated by the lobes etc… if all these structures/ properties weren't important… they wouldn't be there. 

You have to consider the brain as a whole… to figure out how it works... and build a facsimile.

 :)
Title: Re: The last invention.
Post by: kei10 on December 31, 2016, 03:27:44 pm
I see, now that's something to be pondering about!

Wonderful! Best of luck of getting the schema to work!

Thanks for the progress update!  ;D
Title: Re: The last invention.
Post by: infurl on December 31, 2016, 09:05:55 pm
The architecture of the human brain isn't the only one that bestows intelligence. Bird brains are much more efficient than mammalian brains packing three times the computational capacity into the same volume. They don't have any of the structures that we associate with high levels of intelligence in mammals, and yet they excel at tasks requiring language and reasoning.

http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(16)00042-5 (http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(16)00042-5)
Title: Re: The last invention.
Post by: keghn on December 31, 2016, 11:41:43 pm
Birds and bats are under pressure to evolve differently than land animals. The bigger the land animal, the better chance it has of breeding. But for flying creatures it is the opposite: the smaller the animal, the faster it evolves.
Title: Re: The last invention.
Post by: infurl on January 01, 2017, 12:00:23 am
Birds and bats are under pressure to evolve differently than land animals. The bigger the land animal the better chance it has of breeding. But for flying creatures it is the opposite. Smaller the animal the faster it evolves.

I'm not sure where you get those ideas from. I think maybe you are just guessing. I never heard anything about the rate of evolution being dependent on the size of the organism. It took nearly three billion years for prokaryotic cells to evolve into eukaryotic cells. That's pretty slow. On the other hand, human beings are one of the fastest evolving organisms on the planet. (The notion that the stability of our artificial environment stopped our evolution is a myth. The genetic evidence is overwhelming that we are evolving very fast.)

On the other hand, life expectancy *is* proportional to size for land animals and for flying animals. However gram for gram, flying animals live far longer than land animals. This is because land animals have evolved to breed fast to optimise the chances of survival of the species whereas flying animals have evolved to heal fast instead. e.g. If a bat's wing is torn it can heal in a matter of hours.

The evolution of brains is completely different. Birds are descended from dinosaurs, and dinosaurs and mammals emerged from whatever came before (reptiles, amphibians?) more than 500 million years ago. Bird/dinosaur brains and mammal brains have been evolving completely separately from each other all that time.
Title: Re: The last invention.
Post by: keghn on January 01, 2017, 12:26:34 am
I am not guessing. Smaller organisms evolve faster than larger creatures. Membrane wings are inferior to feathers:

https://www.youtube.com/watch?v=7TPvQdEaHHo (https://www.youtube.com/watch?v=7TPvQdEaHHo)   

https://www.youtube.com/user/megpiefaerie01/videos (https://www.youtube.com/user/megpiefaerie01/videos)


Title: Re: The last invention.
Post by: infurl on January 01, 2017, 12:29:49 am
http://www.thenakedscientists.com/articles/questions/do-smaller-organisms-evolve-faster (http://www.thenakedscientists.com/articles/questions/do-smaller-organisms-evolve-faster)

Small land animals evolve faster than large land animals and flying animals because they breed faster, not because they are small.
Title: Re: The last invention.
Post by: korrelan on January 01, 2017, 01:16:08 pm
@Kei

Quote
Wonderful! Best of luck of getting the schema to work!

Yup! This is going to be a very interesting year lol.

@infurl

Quote
The architecture of the human brain isn't the only one that bestows intelligence.

Agreed. When I first started out designing my AGI I considered/ studied all species and the various forms of intelligence that arise in complex multi-cellular, single-celled (eukaryotic) and swarm systems.  I needed to find the common denominator that all the various systems use to ‘bestow intelligence’, then… distil it, simulate it, and make it infinitely scalable.  All the work I’ve done since has been on the premise that I have discovered the ‘trick’, and so far… so good.

I’m basing my AGI on the human connectome for obvious reasons; we have the edge when it comes to intelligence. So although other animals (birds, etc) exhibit intelligence, there is something extra the structure of our brain provides.  I often run simulations of smaller, less complex systems that I judge to be akin to birds etc, and by using/ experimenting with our connectome schema I’m slowly getting a grasp on why we are different.

Also our perception of time is very closely linked to the size of our connectome; if the AGI is to converse with humans it has to be able to ‘think’ at a similar speed/ resolution/ etc.  Nature wastes nothing; there is no point having an audio cortex that functions at ‘bat’ frequencies when talking to humans… my algorithms/ simulation must reflect/ adopt this paradigm.

I’m only able to simulate a few million neurons at the moment but this will change/ improve shortly.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on January 22, 2017, 04:06:30 am
Something you may find interesting for your work:

I've experienced sound on/in my skin/body when I feel.

I think it's because our ears hear by multiple hairs, and so does the body. Just the ears amplify and shape it.

http://www.dangerousdecibels.org/virtualexhibit/2howdowehear.html (http://www.dangerousdecibels.org/virtualexhibit/2howdowehear.html)
Title: Re: The last invention.
Post by: 8pla.net on January 22, 2017, 04:29:14 am
"there is something extra the structure of our brain provides."

And, there is something less the structure of our brain provides.

In theory, a dog's sense of smell is 40 times greater than a human's. Doesn't the brain structure of a dog, with a natural ability to identify cancer cells by smell, exceed our brain structure?
Title: Re: The last invention.
Post by: LOCKSUIT on January 22, 2017, 11:33:58 am
The below explains those in-bed nightshows after you've lain in bed for a while:

http://scienceline.org/2014/12/why-do-we-see-colors-with-our-eyes-closed/ (http://scienceline.org/2014/12/why-do-we-see-colors-with-our-eyes-closed/)

Korrelan is gonna love the ending of this article.

BTW - mine are one color - purple. :))))))))))))))) Are yours?

btw, I also then start to see green, purple, green, purple, bulge across my vision. I really want to know if yous experience this too.

Still none of my Google searches told me why I see a white dot there or a purple dot there or an orange tangy or colored dot there. (not talking about imagination or shadows/drifters). Am I picking up neutrinos lol? I stare a lot at my computer, heheh.
Title: Re: The last invention.
Post by: LOCKSUIT on January 23, 2017, 12:27:14 am
Forgot one important thingy - the reason I sense my "touch" as "auditory" must be because it passes through the temporal lobes to "match"....4 senses then lol!? Yes, yes, I've gone through the "other senses".
Title: Re: The last invention.
Post by: LOCKSUIT on February 06, 2017, 01:07:27 pm
Korrelan, do you implement your CNN so that the center of the eye/s gets more score/points?

Our eyes, where they point to, make us see what's in the center, unless there is a scary face at the side of our view. Of course, you also have motor control over......just don't tell anyone.

Once you get accustomed to one trick, the others are easy. You just have to know how to hunt the mammal.
Title: Re: The last invention.
Post by: korrelan on February 09, 2017, 10:40:52 am
@8pla.net

Quote
Doesn't the brain structure of a dog, with a natural ability to identify cancer cells by smell, exceed our brain structure?

I don’t think it ‘exceeds’ our brain structure.  The general mammal brain adapts through plasticity to cope with the format of the body/ senses it is contained within.  General/ global rules and structures that define the ‘mammalian’ nervous system adapt the brain to best suit the animal’s body layout/ lifestyle and social needs.

Dogs have evolved with an extremely heightened sense of smell because they use smell as we use language for communication; thus more of their processing power is dedicated to driving the olfactory networks. 

Our imagination and consciousness are derived from our senses/ experience; imagine how strange a dog’s consciousness must be, where smell is the main focus of its thought patterns.

@Lock

Quote
The below explains those in-bed nightshows after you've layed in bed for a while

Phosphenes do produce the flashes of light we see when our eyes are closed, but that’s not the effect I meant.  Phosphenes are a random phenomenon; if you look straight ahead and actually focus on the darkness you should see actual images that you can mentally morph. It feels like you are actually looking at the objects/ scenes; perhaps it’s just me lol.

Quote
the reason I sense my "touch" as "auditory"

Sound is just the modulation of pressure waves within the air volume; your body is quite capable of sensing certain sounds through your skin's surface etc.

Quote
Korrelan, do you implement your CNN so that the center of the eye/s gets more score/points?

Yes!

Quote
Our eyes, where they point to, make us see what's in the center, unless there is a scary face at the side of our view.

The fovea of your eyes is geared towards saliency/ recognition whilst the wide periphery is geared more toward danger/ attention. Your wide peripheral visual area has evolved mainly to keep you safe. If something suddenly moves in your periphery it triggers a natural reflex action depending on your situation; you could move your eyes to identify the object or in certain circumstances it will even trigger the ‘fight or flight’ response.
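The 'centre gets more score' idea can be put into a one-function sketch (my own formula; korrelan confirms the principle above, not this implementation): weight a pixel's contribution by a Gaussian falloff from the image centre, mimicking foveal emphasis.

```python
# Hypothetical centre-weighting for a vision stage; sigma is an invented
# tuning constant, not a value from korrelan's system.
import math

def centre_weight(x, y, width, height, sigma=0.25):
    """Saliency multiplier in (0, 1]: 1.0 at the image centre, falling
    toward the periphery."""
    nx = x / (width - 1) - 0.5
    ny = y / (height - 1) - 0.5
    return math.exp(-(nx * nx + ny * ny) / (2.0 * sigma * sigma))

# centre pixel of a 9x9 patch scores 1.0; a corner scores far less
centre = centre_weight(4, 4, 9, 9)
corner = centre_weight(0, 0, 9, 9)
```

A periphery-driven attention trigger would be the complement: weight *changes* between frames more heavily toward the edges.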

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 09, 2017, 03:45:15 pm
I always cringe when I read articles saying "eye reflexes to danger" or "conditionalized to wind in testing". It is actions """I""" do, > by the pattern recognizers, I may not look at a moving raccoon murder dude moving jungle leafs, or blink to the wind conditional test thingy test.......WAHAHAHA! O.–

If something in the "next" image changes, then that area gets more points/score.
Title: Re: The last invention.
Post by: korrelan on February 10, 2017, 11:08:13 am
Quote
I always cringe when I read articles saying "eye reflexes to danger"

I agree. The eye has no reflexes to danger.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 10, 2017, 03:01:06 pm
You have to define what "is" danger.

Pattern recognizers will link to according actions. You may not turn your eyes over there.

Just because you see some rustle in the leafs, it doesn't mean your eyes move to it.

It makes you get more "score" there. And you can see there. Without turning the eye center at it.
Title: Re: The last invention.
Post by: korrelan on April 12, 2017, 10:56:23 am
This is just a clearer culmination of three rushed posts describing my theories on human memory.

I thought I’d place it here on my project thread to save reposting on the original thread.

Memory Theory

I think that long/ medium/ short term memories/ experiences and knowledge all use exactly the same architecture; what differentiates long/ short memories/ knowledge is that when memories are first created they are weak (short term) and as they become consolidated over time they form stronger (long term) memories/ knowledge.

Short term memories are initially formed from the current-state ‘global thought pattern’ (GTP), i.e. the pattern of activation within the synapses/ neurons at any given moment. Associations are formed through weak synaptic links carved by our current thought pattern.  The weak memory engram exists within the structure of the strong long-term learning.

Long term memories are stored in the synapse patterns that connect the groups of neurons.  Remembered items are composed of sparsely distributed neuron groups that represent a particular facet of the item.

Our consciousness is the pattern of activation running within this memory/ knowledge structure.

https://www.youtube.com/watch?v=C6tRtkyOAGI (https://www.youtube.com/watch?v=C6tRtkyOAGI)

I’ve shown this vid before but it’s a good example of how complex the GTP is. Each pixel represents a neuron in the AGI’s frontal cortex.  Linked to the back of each pixel are connections (white matter tracts) to other areas of the overall cortex. The machine is sleeping and what you are looking at are patterns formed from memories/ experiences blending together; each memory is triggering other memories; the pattern constantly flows from one state of ‘thought’ to the next. At 6 seconds into the vid a blue (high activity) region appears (lower middle)… this was me talking. The machine was still listening even though it was sleeping… that activity influenced the whole pattern as it tried to make sense of the sensory input; a ‘thought’ pattern is very fluid and complex.

Forming New Memories

We build our knowledge representations in a hierarchical manner; new knowledge is based/ built on our understanding of old knowledge. 

Our understanding of our current experience is created from the GTP which is comprised of the parts of memories/ knowledge relevant to this moment of time… it’s the state of this pattern that new memories are formed from.  When we initially form a memory we are linking together existing knowledge/ understanding/ experiences with weak synaptic connections/ associations.

If you were to learn a new property of water, for example, your brain doesn't have to update all your individual knowledge/ memories regarding water and its uses/ properties… it simply has to include the new property into the GTP representing water whenever you think of it.

The brain tends to record only novel experiences; a new memory is formed from novel differences the brain has not encountered before.  This happens at the synaptic level and so is very difficult to relate to the consciousness level.

So any new memory is the brain recording our current state of consciousness; to understand this moment in time we are using hundreds of ‘long term’ memories and knowledge.  To remember a list of words or numbers for example, you have to be able to recognise and understand the items; you can’t remember what you don’t recognise/ understand.

The brain doesn’t record the incoming experience… it records its current understanding of it through its previous experiences/ knowledge.

Look at the letter ‘C’; that has just included a pattern into your GTP that represents (to you) that letter… now look at ‘A’; now that pattern is included. The two separate patterns have created a merged pattern that represents ‘CA’ and already your brain/ cortex is firing other patterns that relate to ‘CA’. Now look at ‘T’… bingo. The combined pattern of the three letters was instantly recognised by areas of your cortex that then ‘fired’ the pattern for ‘CAT’ back into your GTP, in turn firing patterns that represent the general concept of ‘CAT’. At the same time there were patterns running that covered this topic, your body posture, how comfortable you feel, etc. Reading this paragraph and your thoughts/ opinions on it have altered your GTP; you don’t need to remember the letters ‘CAT’ or even the basic method of explanation I’ve used; they are already well ingrained… it’s the new/ different bits of the pattern that get etched into your synapses.
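The letter-by-letter build-up can be caricatured like this (a toy lookup sketch; the stored tuples and concept labels are hypothetical, and the real system works on sub-symbolic patterns, not letters):

```python
# Hypothetical sketch: each new letter is merged into a running pattern;
# a stored pattern 'fires' its concept the moment the merged pattern matches.
stored = {("C", "A", "T"): "concept:CAT", ("D", "O", "G"): "concept:DOG"}

def read_letters(letters):
    pattern = []   # the merged pattern, built up item by item
    fired = []     # concepts triggered back into the GTP along the way
    for ch in letters:
        pattern.append(ch)
        concept = stored.get(tuple(pattern))
        if concept:
            fired.append(concept)
    return fired

result = read_letters("CAT")
```

Feeding in ‘C’, ‘A’, ‘T’ one at a time only triggers a concept once the combined pattern matches something already known.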

If you look at this sum 5+5= you don’t have to mentally count or add the numbers on your fingers; the visual pattern of seeing the sum fires a globally understood pattern that represents 10.

Memory Structure

Different brain structures contribute certain facets to the memory.  The limbic network adds a very strong emotional facet to a memory’s overall pattern; other areas add temporal and episodic order to our memories.

This might explain why a scientist/ neuroscientist might think different memories are stored in separate brain regions; different brain regions supply different facets of a memory.  The hippocampus (could be revised) for example provides an index/ temporal date stamp to a memory as it’s being stored (not read); if they were viewing fMRI results on memory consolidation they would be measuring the difference between existing knowledge and new learning. The new learning would require a recent timestamp that would activate the hippocampus (episodic), and blood would flow to this region, highlighting it.

This is a diagram showing a rough map of where the various categories within the circle were detected upon the human cortex surface.

(http://i.imgur.com/vJLHRKs.jpg)

This is a short video showing the same/ similar category organisation within my AGI’s cortex.  As usual the forty test patterns (phonemes, images, etc) are shown on the right; the confidence in recognition (height of the bar) is shown on the bottom left. Notice the regular modulated input pattern below the pattern input on the right. The cortex section has very high confidence in its recognition of the patterns until I click ‘A’ in the lower right to turn this regular injected pattern off. Then the cortex section’s confidence drops/ stops… I have removed a facet of the overall pattern that the system was using to recognise the patterns. This is akin to disconnecting the hippocampus or limbic system… it makes a big difference.

https://www.youtube.com/watch?v=AGFk_QddPqo (https://www.youtube.com/watch?v=AGFk_QddPqo)

A memory is never moved around inside the brain; it’s never moved from short to long term storage.  ‘Memories’ are never searched either (searching is a serial schema); because of its parallel architecture the brain has no need to search. The ‘thought pattern’ flows from one pattern of depolarized neurons to the next through the axons/ synapses/ connectome; it’s the structure that stores the long term memories.

Accessing Memories

We access our memories in a holographic manner; any part of a memory will trigger the rest.
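A classic toy demonstration of this kind of content-addressable recall is a small Hopfield-style net: a damaged cue settles back onto the nearest stored pattern with no searching at all (a standard textbook sketch, not my actual schema; the patterns are arbitrary):

```python
# Hypothetical sketch of holographic/ content-addressable recall:
# whole patterns are superimposed on one weight matrix (Hebbian storage),
# and a partial/ damaged cue settles onto the full stored pattern.
patterns = [
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
]
n = len(patterns[0])

# Hebbian weights: every stored pattern shares the same connections
W = [[sum(p[i] * p[j] for p in patterns) if i != j else 0
      for j in range(n)] for i in range(n)]

def recall(cue, passes=5):
    s = list(cue)
    for _ in range(passes):              # parallel settling, not a search
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

cue = list(patterns[0])
cue[1], cue[7] = -cue[1], -cue[7]        # damage two bits of the cue
restored = recall(cue)
```

The damaged fragment triggers the rest of the memory purely through the connection structure.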

When we use a piece of knowledge or a skill we also have access to when and where we learned it; sometimes this will affect our use of the knowledge (bedside manner for a doctor). How the use of the memory/ knowledge affects us is governed by the global thought pattern and what the main situation/ topic/ goal/ task is.  It’s our focus/ attention (also part of the global pattern) that dictates the effect of the memory/ knowledge on our current consciousness and which sections of the memory/ knowledge are relevant to the current task.

When a particular pattern is recognised the resulting output pattern is added to the overall GTP; this changes/ morphs/ phases the GTP which is in turn recognised by other areas… repeat.

A young child has more synapses than you or I but could never grasp the concepts of this conversation because they have no prior experience/ knowledge to build their memories on/ from.

Memory Problems

If any of the original facets that a memory was comprised of are compromised in some way, it can make retrieval difficult from that aspect (time, location, etc)… we all use the tactic of thinking of related/ similar memories when trying to recall a weak memory; you’re just trying to produce a global pattern conducive to triggering the required memory, filling in the missing blanks in the pattern that will trigger retrieval.

To remember what I had for breakfast I have to use strong/ long term memories. To understand what the items were, what they were called, even the concept of ‘breakfast’ requires a lot of strong/ long term understanding/ knowledge/ memories.  The weak/ short term bit of the memory is what links all the various items together along with a location/ timestamp/ index/ etc that I would recall as ‘earlier today’.  For the unfortunate people who have difficulty retrieving today’s (short term) memories, I would wager they have a problem with a brain region responsible for temporally tagging/ indexing the memory as ‘today/ recent’.

My AGI is based on the mammalian connectome and exhibits both/ all these memory traits; this is why I believe I am correct in my assumptions.

 :)
Title: Re: The last invention.
Post by: kei10 on April 12, 2017, 11:45:28 am
This is absolutely mesmerizing to read, thank you for sharing!
Title: Re: The last invention.
Post by: korrelan on April 12, 2017, 03:08:23 pm
Ageing Memory

What changes as we get older?  Personal thoughts and observations from my project.

One of the main mechanisms our brain uses to consolidate learning is the strengthening of the synaptic connections between neurons.  A set of neurons with strong synaptic connections can impose a strong influence on the global thought pattern (GTP).  The more we experience a set of facets that a moment in time is constructed from, the easier we can both recognise and infer similar situations/ experiences.
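In caricature (a toy Python sketch; the word pairs, learning rate and decay value are all made up, and real consolidation is vastly more subtle):

```python
# Hypothetical sketch of consolidation: co-active connections are
# strengthened each time they are used, while unused links slowly decay.
weights = {("bread", "butter"): 0.1, ("bread", "penguin"): 0.1}

def experience(pair, rate=0.2, decay=0.05):
    for key in weights:
        weights[key] = max(0.0, weights[key] - decay)   # 'use it or lose it'
    weights[pair] = weights.get(pair, 0.0) + rate       # Hebbian reinforcement

for _ in range(10):
    experience(("bread", "butter"))
```

After repeated experience the well-used link dominates while the unused one has faded to nothing.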

So an older person is ‘wiser’ because they can quickly recognise the similar well experienced traits/ situations or knowledge and infer their learning into a new problem space… been there… done that.

Younger minds ‘think’ more flexibly because there are no really strong overriding patterns that have been forged through long term exposure/ use; thus more patterns are fired and each has to vie for attention.  Loads of ideas… not much sense lol.

Both the young and old versions of the human memory schema have their advantages… but obviously the ideal combination is wisdom with flexibility… so how can we achieve this?

The Problem

The main problem is that the old adage ‘use it or lose it’ also applies to your neural skill set.

We are the sum of our experiences and knowledge.  Everything we learn and experience affects ‘who’ we are and the ‘way’ we think.  All the skills we acquire through our lives and the problems we solve are mapped into our connectome and ALL this information is accessed and brought to bear when required.  The fact that I can repair clocks/ engines and the techniques I have used are also used when I’m solving neuroscience related problems.  Every good theory/ thought I have is comprised of all the facets of the many varied topics mixed together in my brain.  When I consider a possible relationship between disparate subjects I’m actually applying a general rule that is comprised of all the different types of relationships I have ever encountered.

As we get older our connectome fine tunes more and more to become expert in our chosen vocational field. We can recognise the common relationship problems a younger person is experiencing because we have seen them so many times before… the price for this wisdom is loss of plasticity/ flexibility; just the process of becoming proficient at life harms our imaginative thinking powers.

Besides learning new skills/ topics it’s very important from a mental perspective to exercise previously learned skills or knowledge frequently.  The general purpose rules we constantly apply have been built up through a hierarchical learning process and depend on all the various facets of the skills and knowledge that were present when they were originally consolidated. If enough of the underlying skills/ knowledge is lost/ forgotten then although the general purpose rule still exists it can’t be applied as flexibly as before.

This is where an elderly person can lose mental flexibility; they are wise enough to know the correct answer through experience, but because the original skill set has been lost they lack a deep understanding of how they arrived at the answer… and without this information they can’t consider new/ fresh avenues of thought.

The Solution

Don’t just learn new topics/ skills… frequently refresh old learning/ knowledge and skills.

I’m off now to break my neck on my son’s skateboard lol.

 :)
Title: Re: The last invention.
Post by: korrelan on May 13, 2017, 12:39:51 pm
Time for a project update; I’ve been busy with other projects/ work lately but I have still managed to find time to bring my AGI along. First a quick recap…

Connectome

I have already perfected a neural network/ sheet that is capable of learning any type of sensory or internal pattern schema.  It’s a six layer equivalent to the human cortex and is based on/ uses biological principles.  Complex sensory patterns are decoded and reduced to a sparse output that fully represents the detail of the original input, only simplified and re-encoded through a temporal shift.  Long to short term memory, prediction, etc are all tested and inherent in the design.

Consciousness Theory

To achieve machine consciousness the basic idea is to get the connectome to learn to recognise its own internal ‘thought’ processes.  So as sections of cortex recognise external sensory streams, other sections (i.e. the frontal cortex) will be learning the outputs/ patterns being produced by the input sections.  The outputs from these sections go back into the connectome and influence the input sections… repeat.  This allows the system to settle into a stable internal global minimum of pattern activity that represents the input streams, what it ‘thinks’ about the streams, how it ‘thought’ about the streams and what should happen next, etc.

It’s a complex feedback loop that allows the AGI to both recognise external sensory streams and also recognise how its own internal ‘thought’ processes achieved the recognition in the same terms as it’s learning from external experiences. I envisage that eventually as it learns our language/ world physics/ etc it will ‘think and understand’ in these learned terms… as we do.
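The shape of that loop can be sketched in a few lines (a numeric toy; the blend weights and clamping are placeholders for the real neuron/ synapse dynamics):

```python
# Hypothetical sketch of the feedback loop: the 'frontal' output is mixed
# back into the sensory input and the loop is iterated until the combined
# state settles into a stable pattern (a global minimum of activity).
def step(inp, state, w_in=0.6, w_fb=0.4):
    # new state = clamped blend of external input and internal feedback
    raw = [w_in * i + w_fb * s for i, s in zip(inp, state)]
    return [max(0.0, min(1.0, r)) for r in raw]

def settle(inp, tol=1e-6, max_steps=1000):
    state = [0.0] * len(inp)
    for n in range(max_steps):
        new = step(inp, state)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new, n          # stable 'thought' pattern reached
        state = new
    return state, max_steps

stable, steps = settle([0.9, 0.1, 0.5])
```

Because the feedback weight is below one, the recursion converges on a stable state that reflects the input rather than running away.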

A Working Model

Now this is where things get extremely complex and I must admit it threw me a curveball and slowed my overall progress down; until I wrote the tools to cope with/ understand exactly what was happening in the connectome/ system.

https://www.youtube.com/watch?v=ERI92iTzVbY (https://www.youtube.com/watch?v=ERI92iTzVbY)

This vid shows the top of a large neural tube; it’s a starting connectome and will grow in both size and complexity as the AGI soaks up information and experiences. The right side represents the sensory inputs and the left is the precursor to what will develop into the frontal cortex.

I’ve trained the model to recognise forty audio phonemes and you can see its confidence in the height of the bars lower left < 0:20 into the vid. I then turn off the phoneme patterns <0:40 and inject a random pattern to show the connectome has no recognition. At 0:50 the phonemes are turned back on and recognition re-commences. The system is stable; on the right you can see the input patterns at the top of the window and the sparse frontal cortex activation patterns just below them.  I then add the random element/ pattern to the phoneme pattern.

At 1:09 it reaches the point where the feedback from the frontal cortex starts to influence the input to the sensory areas and a feedback cascade begins… this is what I’m after.

The frontal cortex has learned the outputs from the sensory areas and begins adding its own patterns to the mix, which in turn begin to influence the sensory input areas.

I have to be careful of my choice of words but at this point the connectome has become ‘aware’ of its own internal processes expressed in terms it’s learned from the external world.

I then turn off the external sensory stimulus and the global ‘thought’ pattern slowly fades because there are no circadian rhythms being injected to support it.

One of the problems I’m facing, besides the sheer complexity of the patterns, is that once the feedback ‘spark’ begins that connectome’s existence in time is forged; the complex global pattern relies on millions of pattern facets timed to the millisecond… once it’s stopped it can’t be re-created exactly the same.
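The fade-out behaviour (and why it needs constant support) in caricature: a toy leaky-activity sketch, where the leak rate and injection rhythm are arbitrary numbers of my own choosing:

```python
# Hypothetical sketch: recurrent activity leaks away every tick unless a
# sustaining (circadian-like) input keeps re-energising the pattern.
def run(ticks, inject, leak=0.9):
    activity = 1.0
    for t in range(ticks):
        activity *= leak                        # leak on every tick
        if inject and t % 10 == 0:
            activity = min(1.0, activity + 0.5) # rhythmic support pulse
    return activity

sustained = run(200, inject=True)
faded = run(200, inject=False)
```

With the rhythm injected the activity settles at a healthy level; without it the pattern decays towards nothing and, once gone, nothing in the structure alone can bring the exact same pattern back.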

So a bit more done…

 :)
Title: Re: The last invention.
Post by: Art on May 14, 2017, 12:46:12 pm
It would be quite interesting to be able to record or see if it could be able to think about that which it has experienced or learned, which in a sense would be something the equivalent of dreaming (or daydreaming).

It was interesting to see the cascading episode as it began growing.

Is it possible to note the areas where certain items or "things" are being stored or recognized by the system and are they the same every time?


Quite a nice experiment! O0

Title: Re: The last invention.
Post by: LOCKSUIT on May 14, 2017, 03:17:10 pm
"One of the problems I’m facing, besides the sheer complexity of the patterns, is that once the feedback ‘spark’ begins that connectome’s existence in time is forged; the complex global pattern relies on millions of pattern facets timed to the millisecond… once it’s stopped it can’t be re-created exactly the same."

If you mean, the result of the hierarchy is lost after it's turned back on, then try either giving all senses and links a strengthening and weakening process so they last plus erase and/or a threshold to self-ignite and fire so they not only take action on their own but also will work together at the right times.
Title: Re: The last invention.
Post by: korrelan on May 15, 2017, 02:21:16 pm
@Art

Quote
It would be quite interesting to be able to record or see if it could be able to think about that which it has experienced or learned, which in a sense would be something the equivalent of dreaming (or daydreaming).

It definitely does re-use the learned facets of its knowledge/ experiences.  That’s where the feedback’s coming from. The ‘frontal cortex’ learns the order/ sequences of the sensory cortex’s outputs, and attention areas form that learn the similarities in the input/ feedback streams. So yes… it does dream/ daydream about its experiences.

Quote
Is it possible to note the areas where certain items or "things" are being stored or recognized by the system and are they the same every time?

I can highlight and examine any area of cortex to see what has been learned in that area but initially the cortex areas are very fluid; they tend to move as more information is soaked up because the cortex is self organising. They will eventually stabilise and become fixed as the synapses strengthen through experience and lock them into position.

@Lock

Quote
If you mean, the result of the hierarchy is lost after it's turned back on, then try either giving all senses and links a strengthening and weakening process so they last plus erase and/or a threshold to self-ignite and fire so they not only take action on their own but also will work together at the right times.

I can save the connectome at any time recording all of its current states and then re-commence from where it left off… the problem is that once a global thought pattern has been allowed to fade/ die it can never be restarted exactly the same.  The momentum/ complexity/ inertia of the pattern has to be sustained whilst the system is running. 

I had a problem with ‘epilepsy’ in the connectome; very local cortex feedback triggered by a certain frequency of visual input would start a very fast local feedback cascade that would cease/ crash the global pattern… I had to re-build the pattern from scratch.

Hopefully this will be less of a problem once the connectome ages and becomes robust to sudden changes in the various streams/ patterns.

I intend to keep it running 24/7 anyway.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 15, 2017, 02:32:19 pm
If a GTP can fade/die, and never restarted the same, well, how is it created anyhow? For example is it by neuro-muscles that stay existing OR like RAM i.e."tourists" that are new folks and so when turned off they didn't like you know, save.

> Therefore, shouldn't a GTP be strengthened into neurons and not water piped around like RAM? Knowledge is saved. It's simple...
Title: Re: The last invention.
Post by: WriterOfMinds on May 15, 2017, 03:26:03 pm
Re: your most recent post -- nice animation.  I was interested in how exactly the patterns coming out of the cortex influence the input once feedback starts.  How do you blend the sparse patterns from the cortex with the noisier or more complex input data?  And what's the overall effect -- does it intensify the input patterns?

I don't have much neural network background, so apologies if I am asking dumb questions.
Title: Re: The last invention.
Post by: keghn on May 15, 2017, 04:36:43 pm
 Is the sequence memory clocked by a daisy chain of nerves, an up/ down counter, or random access in a sequence?
Title: Re: The last invention.
Post by: korrelan on May 16, 2017, 09:52:53 am
@Lock

Quote
If a GTP can fade/die, and never restarted the same, well, how is it created anyhow?

Very gradually lol.  The connectome starts with a random configuration.  It then alters itself over time and learns/ adapts to its environment/ inputs.

This is the GTP from a simple connectome seed. Every point means something; a whole memory and all its complexities can be encoded/ recalled by just one of these synaptic pulses.  Information is encoded both spatially and temporally; so if one facet is out of phase by a millisecond or missing it means something totally different to the system.  It’s this constant ever-changing pattern that can’t be reproduced, because it has been built by a constantly changing connectome learning from experience over time. I can stop and save it… and continue; but if it ever fades out through lack of neural stimulation/ feedback… it’s gone.
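To see why a millisecond of phase matters, here's a deliberately brittle toy (hypothetical neuron names and spike times; real recognition is of course graded, not an exact lookup):

```python
# Hypothetical sketch: a pattern's identity is *which* neurons fire and
# *when* (to the millisecond); shift one spike and it is a different pattern.
stored = {
    frozenset({("n1", 5), ("n2", 7), ("n3", 12)}): "memory:A",
}

def identify(events):
    return stored.get(frozenset(events), "unrecognised")

exact = identify([("n1", 5), ("n2", 7), ("n3", 12)])
shifted = identify([("n1", 5), ("n2", 8), ("n3", 12)])   # n2 fires 1 ms late
```

One spike arriving one tick late and the system no longer sees the same pattern at all.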

The pattern is the program running on the connectome/ neuromorphic processor.

https://www.youtube.com/watch?v=kgpQ956U-x8 (https://www.youtube.com/watch?v=kgpQ956U-x8)

@WoM

It’s a hybrid schema designed from scratch using my own interpretation of a neuron, synapse, axon, etc. It’s sensitive to both patterns and pattern facet frequencies.  The design is comprised of the best bits of the many different common types of artificial neural net (convolutional, pooling, spiking, liquid state, reservoir, recurrent, etc) with other mechanisms that mimic biological principles to bring them together into a cohesive system.

Quote
How do you blend the sparse patterns from the cortex with the noisier or more complex input data?

The cortex sheet is self organising and learns to break the complex sensory stream down into regular, sparse recognised facets; which exit through the equivalent of pyramidal neurons into the connectome.  So the complex audio sensory data for example is handled by the audio cortex and the rest of the connectome only receives the sparse data re-encoded into the connectome’s native schema. Because the cortex sheet has several inputs it can also learn to combine data streams coming from different cortex regions.  Initially the audio cortex just learns the incoming audio sensory stream, but once other cortex regions learn to recognise its sparse outputs their interpretation of the data is bounced back.  The audio cortex then starts to learn both the audio stream and the streams coming from other cortex areas.  Some of these streams are predictive or represent the next ‘item’ that should occur based on past experience.
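The sparse re-coding step can be caricatured as a k-winners-take-all reduction, a common trick in sparse coding (the vector and k are made up; the real sheet learns its own reduction):

```python
# Hypothetical sketch: a dense sensory vector is reduced to the indices of
# its k most active units - a sparse code that downstream areas receive
# instead of the raw stream.
def sparsify(dense, k=3):
    winners = sorted(range(len(dense)), key=lambda i: dense[i], reverse=True)[:k]
    return sorted(winners)   # sparse code: just the winning unit indices

dense_audio = [0.1, 0.9, 0.05, 0.7, 0.2, 0.8, 0.0, 0.3]
code = sparsify(dense_audio)
```

The rest of the 'connectome' only ever sees the short winner list, not the full dense stream.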

In engineering terms I suppose the input layer of the cortex sheet could be viewed as a set of self adapting/ self wiring logic gates that are sensitive to both logic and temporal phase/ signals. So with excitation/ inhibition/ neurotransmitters/ etc it adapts over time to the incoming signals and learns to filter and re-code recognised elements.

Base data representation within the system is not at the level of say ‘cat’ or even whole English words; it’s at a much lower abstract representation.

@Keghn

Quote
Is the sequence memory clocked by a daisy chain of nerves, an up/ down counter, or random access in a sequence?

All of the above… it adapts to use the best/ most efficient encoding schema for its problem space.  Hard to explain… it learns to encode episodic memories just because there is episodic information encoded in the input streams.  The streams have a temporal element, a fourth dimension; the system encodes time just the same as it encodes everything else… so areas of cortex learn that information/ memories come in sequences.

I’ve designed it to be able to adapt and learn anything and everything.

Analogy: Think of all the wonderful things modern computers can do with just 0 and 1’s.  A modern computer just runs at one frequency; but imagine if every frequency band could run a different program through the same processor logic; parallelism by frequency band separation not architecture.  All the programs can interact at every level, they all share the same logic/ holographic memory which in turn can be modified by any program… this is one of the base ideas of my design along with self organisation/ learning etc.
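The frequency-band idea is basically multiplexing; here's a toy demonstration of two 'programs' sharing one channel at different frequencies (standard signal maths, not my neuron model; the frequencies and amplitudes are arbitrary):

```python
# Hypothetical sketch of parallelism by frequency band: two signals share
# one channel and are separated by correlating the mix against each
# reference frequency.
import math

def band_power(signal, freq, rate=1000):
    # correlate against sine and cosine at the target frequency
    n = len(signal)
    s = sum(v * math.sin(2 * math.pi * freq * t / rate) for t, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * freq * t / rate) for t, v in enumerate(signal))
    return math.hypot(s, c) * 2 / n

rate = 1000
mixed = [math.sin(2 * math.pi * 50 * t / rate)            # 'program' at 50 Hz
         + 0.5 * math.sin(2 * math.pi * 120 * t / rate)   # 'program' at 120 Hz
         for t in range(rate)]

p50 = band_power(mixed, 50)
p120 = band_power(mixed, 120)
p200 = band_power(mixed, 200)   # an empty band
```

Both programs travel through the same 'wire' simultaneously yet each can be recovered cleanly, while an unused band reads as silent.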

It’s still a work in progress.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 16, 2017, 02:23:09 pm
But think hard korrelan, you're saying your program is literally the GTP, and it vanishes !

All we learn in life MUST be strengthened, and stored forever.

I know you say it forms from a whole lot of stuff, and you're probably sad you can't find how to save it.

Keep this in mind. We store senses sequentially. Some become strong enough to not shrink.
Title: Re: The last invention.
Post by: korrelan on May 16, 2017, 05:20:16 pm
@Lock

Analogy: Imagine you own a business that delivers top secret correspondence/ letters.  In your office you have a wall full of reinforced pigeon holes with combination locks; one for each of your clients.  You have ten members of staff who have worked for you since the business began; their daily routine is to remove letters from the IN bin and place them into the correct client’s pigeon hole ready for delivery. Over time each member of staff has claimed certain pigeon holes for each of their clients and for security reasons only they know the combination to the lock; and there are no names on the pigeon holes.  No employee is allowed to handle anyone else’s correspondence but the system works perfectly.  Every day hundreds of letters arrive and are allocated/ sorted by your team… it’s a well oiled machine… everyone knows what they are doing.

They all die… (Sh*t happens lol) and you employ a new team of ten.

How does the new team take over seamlessly from the old team? All the information is still there… the letters are still in their respective pigeon holes… there is no loss of actual physical data… it’s the mechanism/ system that stored the data that has gone.

Information is stored in the ‘physical’ connectome structure and is very robust so long as it’s accessed fairly regularly.  The GTP that placed/ sequenced/ encoded the information into the connectome is the only one that can access it.

The only reason the GTP ever fails/ stalls is through design errors made by me.  It will continue indefinitely once I have all the bugs ironed out… in the meantime whenever it does fail… I have to rebuild/ restart it from scratch.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 16, 2017, 08:15:40 pm
So, this is your business, and it's called GTP.

...

Quote
How does the new team take over seamlessly from the old team? All the information is still there… the letters are still in their respective pigeon holes… there is no loss of actual physical data… it’s the mechanism/ system that stored the data that has gone.

If all employees die, and the new team can't do the same, then there IS a loss of information - the memories in the deceased employees' brains. SAVE THEM FOR LORDS SAKE. "SAVE"

Isn't that like saying "my circuit board or gear disappeared"? Don't let your AI circuit/metal mechanism vanish! That's the most simplest thing ever...

Save memories. And strengthen them to resist erasing.

Again, save save save. If you need your employees, save em before shutdown time.

I know this is a save problem. I should've realized that as soon as you said "I lose the GTP on shutdown".
Title: Re: The last invention.
Post by: keghn on May 16, 2017, 08:57:55 pm
 I can envision a spiked NN seeing a picture that is 1000 x 1000 pixels… by moving it into the first layer of a spiked neural network with a depth of 5000 layers. Very deep.
 
 In the next step this image is moved into the second layer, and in the third step to the third layer and so on. Unchanged. Each movement would be pixel to pixel. Or a daisy chain of spiked neurons.
 While an image is on a layer there is a dedicated 1000 x 1000 spiked NN to observe what is there. And then these observing spiked NNs would talk to each other to do more magic :)
Title: Re: The last invention.
Post by: WriterOfMinds on May 17, 2017, 02:05:55 am
If the goal here is something that strongly resembles the human brain, the impermanence of the learned structures doesn't necessarily strike me as a flaw.  Even long-term memories can fade if they aren't reinforced (either by recall or by experiencing the stimulus again).  Given the massive amounts of sensory data we take in, I'm sure forgetting is actually an important mechanism -- if you stop needing a piece of learned knowledge, it may be good for your brain to discard it, thereby freeing up wetware to learn something more important to you.

And of course, if a biological human brain is "turned off" at any point, an irreversible decay process begins.  Perfect recall is a super-human attribute, and korrelan isn't aiming for that yet, perhaps.
Title: Re: The last invention.
Post by: keghn on May 17, 2017, 02:47:53 am
 I have a complete AGI theory that is not based on neural networks. I am working on an AGI theory that does use NNs and it is close to completion. I am not in a hurry to finish the AGI neural connectionist theory, but I find neural networks very interesting and studying them has made my complete theory even better.

rough synopsis. AGI basic structure. Jeff Hawkins influenced: 

https://groups.google.com/forum/#!topic/artificial-general-intelligence/UVUZ93Zep6Y



https://groups.google.com/forum/#!forum/artificial-general-intelligence



Title: Re: The last invention.
Post by: korrelan on May 17, 2017, 10:40:40 am
@WoM… Well explained.

I seem to be having a problem explaining the system; some people just do not understand the schema… I need to pin the description down so I’ll have another go.

The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there are a set of rules that simulate biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.
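The house-wiring analogy in code (trivially small, and every name is hypothetical):

```python
# Hypothetical sketch: the information IS the structure. Which 'light'
# comes on is fully determined by how the switches are wired; there is no
# separate data store to look anything up in.
wiring = {
    "hall_switch": "hall_light",
    "porch_switch": "porch_light",
}

def flip(switch):
    return wiring[switch]          # the structure itself answers

lit = flip("hall_switch")

def rewire(switch, light):
    wiring[switch] = light         # learning = changing the physical wiring

rewire("hall_switch", "porch_light")
relit = flip("hall_switch")
```

Change the wiring and the behaviour changes with it; nothing resembling a record was ever written or read.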

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons; simulated electro-chemical gates regulate synaptic junctions; simulated dopamine and other compounds are released that regulate the whole system/ GTP.

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.

A Symbiotic Relationship

Within the system the GTP defines the connectome, and the connectome guides the GTP. This is a top down, bottom up schema; they both rely on each other.

Both the connectome and the GTP can be easily saved and re-started at any time.

The GTP is sustained by the physical structure/ attributes of the connectome.  As learning takes place the connectome is altered and this can result in a gradual fade or loss of resolution within the GTP.  It should never happen but because I’m still designing the system on occasion it does.

Because the GTP is so complex and morphs/ phases through time, any incorrect change to the connectome can radically alter the overall pattern.  If the GTP is out of phase with the physical memories it laid down in the connectome then it cannot access them, and whole blocks of memories and experience become irretrievable.

The connectome is plastic, so the irretrievable memories would eventually fade and the GTP would re-use the physical cortex space, but… information is learned in a hierarchical manner.  So there is nothing to base new learning on if the original memories can’t be accessed… it’s like the game ‘Jenga’; knock the bottom blocks out and the whole tower comes falling down.

Once a GTP has become corrupt it then corrupts the connectome… or sometimes the GTP can fade out… then there is no point saving it… I have to start teaching it from scratch.  If you're wondering why I don’t just reload an earlier saved version of the connectome and GTP: I would need to know exactly when the corruption occurred… it could have been something the AGI heard or saw… or indeed a coding error within the simulation.  I do have several known good saved 'infant' AGI’s I can restart from and luckily the system learns extremely quickly… but it’s still a pain in the a**.

No one knows for sure how the human brain functions.  This is my attempt at recreating the mechanisms that produce human intelligence, awareness and consciousness.  I’m working in the dark; back engineering a lump of fatty tissue into an understandable, usable schema. I can’t reproduce every subtle facet of the problem space because my life just isn’t going to be long enough lol.  I have to take short cuts; accelerated learning for example… I can’t wait ten years for each connectome I test/ AGI to mature at human rates of development.  Our connectome develops slowly and this has a huge impact on stability… a lot of the GTP fading problems are caused because I’m forcing the system to mature to quickly… did you understand and recognise the alphabet at 1 hour old?

What I'm building is an ‘Alien’ intelligence based on the Human design.  If I create an intelligent, conscious machine that can learn anything… it can then learn to be ‘Human’ like...  and if it’s at least as intelligent as me… it can carry on where I leave off.

End Scene: Cue spooky music, fog & lightning. Dr. Frankenstein exits stage left (laughing maniacally).

@Keghn

I’m reading your provided links now… this has evolved since I last saw it.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 17, 2017, 03:12:27 pm
How do you know your AI is learning? What has it achieved? Aren't you just playing with electricity packets and relay reactions stuffz? It not doin anyting....we also already have decent pattern recognizers and hierarchies...

I'm really confused at your master plan.

My AI's first achievements will be learning to crawl the house, and much more.
Title: Re: The last invention.
Post by: keghn on May 17, 2017, 03:54:53 pm
Well my method of unsupervised learning is to have a bit stream coming into the brain. Could be a video stream.
The observation neural network looks at the stream coming in. It could be CNN style. It would cut the image up into squares. For now, let us say it is looking for a vertical edge. This box with an edge is piped down deep into the brain and scanned across a bunch of little spiked NNs. The one that gives the best response is piped back up and goes deeper into the observation neural network.

 An object is made up of sub-features of lines. Like a ball or a person in a wheelchair.
 Sub-features are vertical lines, horizontal lines, solid colors of various types, and corners of various orientations.
 Each sub-feature is paired with a weight so that you can transform one object into another for comparative clustering. 

 Clustering with Scikit with GIFs: 
https://dashee87.github.io/data%20science/general/Clustering-with-Scikit-with-GIFs/
Title: Re: The last invention.
Post by: ivan.moony on June 24, 2017, 02:41:05 pm
I guess that the neural networks approach is what is called a bottom->up approach. It is about simulating neurons, then observing what they can do when they are grouped through multiple levels into some wholes. It is known that artificial neurons can be successfully used for a kind of fuzzy recognition, but I'm also interested in seeing how neurons could be used for making decisions and other processes that humans have at their disposal.

Are there any concrete plans for how the complex decision process can be pursued by neurons, or do we yet have to isolate the decision phenomenon from the other processes that are happening in the neural network? I'm sure (if a NN resembles a natural brain) that the decision process is hidden somewhere in there.

I'm also wondering about other mind processes that may be hidden inside neural networks that are not observed in the present, but will be noticed in the future.
Title: Re: The last invention.
Post by: korrelan on June 24, 2017, 04:14:27 pm
If CPU designers and electronics engineers were to wire transistors together using the same schema as a typical feed forward neural network simulation… nothing would work.  Although the transistor's design is important, it’s the design of the whole circuit with supporting components that achieves the desired results.

The human brain isn't just a collection of neurons wired together in a feed forward schema… it also has a complex wiring diagram/ connectome which is required to understand/ simulate any desired results.

(http://i.imgur.com/FwbnGC1.jpg)

This is a very simplified representation of a connectome in my simulations.  The circle of black dots represents the cortex, comprising distinct areas of neuron sheet, each tuned to recognise distinct parts of the global thought pattern (GTP). Each section specialises through learning and experience and pumps the sparse results of its ‘calculations’ back into the GTP.  The lines linking the cortex areas represent the long-range axons that link the various parts of the brain together.  There are obviously millions of these connecting various areas (not shown); the different colours represent sub-facets of the GTP, smaller patterns that represent separate parts/ recognised sensory streams/ ‘thoughts’, etc.  The sub-facets are generated by the cortex; sensory streams never enter the GTP, only the results of the recognition process. 

Points A & B represent attention/ relay/ decision areas of the cortex. These areas arise because of similarities detected between various patterns.  So the fact that an object's rough shape is round, that two events happen at the same time of day, or that one event always follows another, will trigger a recognition in one of these areas; this will inject a pattern into the GTP that the rest of the areas recognise for what it represents. They recognise it because the whole connectome has grown from a small basic seed and they have developed together, using and recognising each other's patterns; each area's pattern output has influenced its connected neighbours' development and vice versa… whilst experiencing the world. A neural AGI seed...

https://www.youtube.com/watch?v=MLxO-YAd__s

So you have to consider an experience/ object/ thought broken down into its smallest facets, with each facet having a pattern that links all the relevant data for that facet. The pattern that represents that facet can be initiated from any part of the cortex, so hearing the word ‘apple’ starts a bunch of patterns that mean object, fruit, green, etc.  This is how we recognise similar situations, or a face in the fire, or pain in others; it drives empathy as well as many other human mental qualities.

The GTP is a constantly changing/ morphing pattern guided by the connectome that represents the AGI’s understanding of that moment in time, with all the memories/ experiences and knowledge that goes with it. As the connectome grows and the synapses are altered by the GTP to encode memories/ knowledge… one can’t exist without the other.
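To make the two layers concrete, here's a toy sketch (my own illustration with invented names, nothing like korrelan's actual code): a wiring dict stands in for the connectome, and the set of neurons firing each step is one 'frame' of the GTP.

```python
import random

# Hypothetical sketch: a tiny "connectome" as a directed graph of threshold
# neurons; the set of neurons firing each step is a crude stand-in for one
# frame of the global thought pattern (GTP).
random.seed(1)

N = 20
THRESHOLD = 1.0
# Random sparse wiring: synapses[i] = list of (target neuron, weight).
synapses = {i: [(random.randrange(N), random.uniform(0.3, 0.9))
                for _ in range(3)] for i in range(N)}

def step(active):
    """Propagate one step: sum weighted input per neuron, fire on threshold."""
    charge = {}
    for src in active:
        for tgt, w in synapses[src]:
            charge[tgt] = charge.get(tgt, 0.0) + w
    return {n for n, c in charge.items() if c >= THRESHOLD}

pattern = {0, 1, 2}            # seed activity (a "sensory" injection)
for frame in range(5):
    pattern = step(pattern)
    print(frame, sorted(pattern))
```

The pattern each frame depends entirely on the wiring, which is the point: the activation (GTP) cannot be separated from the structure (connectome) it runs through.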

Simples… Make more sense?
Title: Re: The last invention.
Post by: ivan.moony on June 24, 2017, 05:13:34 pm
Is it possible to have three placeholders, say "left", "right" and "result"? Now  if we observe a world where "result" = "left" + "right", we would have it memorized like this:
Code: [Select]
left right result
-----------------
0    0     0
0    1     1
1    0     1
1    1     2
...

And if we later observe "left" as "1" and "right" as "1", what do we have to do to conclude that "result" is "2"? Naively, I can imagine a question: what is the result if left is 1, and right is 1. How would this question be implemented in neural networks?

[Edit]
Theoretically, is it possible to feed 1 as "result" and to get imagination of (("left" = 0, "Right" = 1) or ("left" = 1, "right" = 0))? Moreover is it possible to chain neural reactions, like you feed "cookies" and you get a chain ("flour" + "milk" + "chocolate") + "bake at 180°" + "serve at a plate"?
Title: Re: The last invention.
Post by: keghn on June 24, 2017, 06:39:24 pm
An ANN can do this with one gate. Left and right would be the inputs, then the bias for the third input.
 The outputs would be different:
0 and 0 would give 0
0 and 1 would give 0.5
1 and 1 would give 1.0

@Korrelan works with spiked neural networks? Which are closer to the logic of brain neurons. A little different from the ones used in deep learning that are so popular. 

 There are different styles of spiked neural logic. Every scientist has their own opinion 
on how they work, based on the latest science, or an implementation to make their theories 
or project work. 

 In my logic of spiked NNs a zero is static noise, white noise of a low voltage. 
 Also, noise is all values given at one time: max confusion. No steady path to focus on 
or follow.
 When a spiked NN is trained and generates a value, it will go from wild random static to 
a solid, steady value greater than zero, then drop back down to random low-voltage static. 

Title: Re: The last invention.
Post by: korrelan on June 24, 2017, 07:03:08 pm
I’m not sure what you’re asking…

You’re describing a logic AND gate. Where either a single ‘left’ or ‘right’ would result in 0, but both together would equal 1… but anyway…

This simple example can be handled by one single traditional simulated neuron.  A traditional simulated neuron can be viewed as an adaptable logic gate.

Traditional simulated neurons use weights as their inputs (any number), a threshold and a bias, you can forget the bias for this example.

The threshold is a trigger/ gate value, ie… if the sum of the weights is greater than the threshold then ‘FIRE’.

So in your example the threshold of a single neuron would be set to 2, either input would only add 1… both inputs would result in 2 (sum the inputs) and the neuron would fire a 1... not a 2.

Although usually the weights are less than 1, so your weights (inputs) would actually be 0.5 and 0.5 with a threshold of 1… firing a 1.

To code a simple neuron…

Input A
Input B
Thresh=1

If A + B >= Thresh then Output=1 else Output=0

Traditional neurons themselves are very simple… their strength lies in combining them into networks.
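The snippet above, written out as a runnable Python sketch (using the 0.5/ 0.5 weights and threshold of 1 from the example; the simulated neurons in the actual project are of course far richer than this):

```python
def neuron(inputs, weights, thresh=1.0):
    """A traditional simulated neuron: fire (1) if the weighted sum of the
    inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= thresh else 0

# The example above: each input weighted 0.5, threshold 1.
print(neuron([1, 1], [0.5, 0.5]))  # both inputs on -> fires 1
print(neuron([1, 0], [0.5, 0.5]))  # one input on  -> stays 0
```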

Edit: As Keghn said…

 :)
Title: Re: The last invention.
Post by: keghn on June 25, 2017, 12:00:43 am
   
There is the detector NN and then the generator NN.
 Generator NNs are used for memory.
 One detector NN can detect many objects. Like cookies, flour, cake, and a bowl.
 Many raw pixels go into a black box, then come out the other side as output pixels.
 Each output pixel is assigned to some object, like one for a cookie and another for a spoon.
 That output pixel only goes high when a spoon is in the input image and turns off when it is removed. This is a detector NN. It is not memory; it is more for action. It detects a fly and then brushes it away.
 So a real brain needs to record, and that is where the generator/ recorder/ synthesis NN comes into play. It is pretty much the same as recording into computer memory, RAM.

 The activations of the detector NN are recorded, and also the image that caused them to activate.

 There is an address pointer into memory so it can be rewound, fast-forwarded, jumped ahead, and so on.

 Using two or more address pointers into memory, running at the same time, is the basis for an internal 3D simulator.
 Viewing two images at the same time is confusing, so rewind, speed-up and slow-down controls are used to keep things in order.

 A simple NN can be a pyramid. They are very flexible and can have many types of set-ups. Like ten inputs and ten outputs.

 To remember you need an echo: 
https://www.youtube.com/watch?v=v-w2rEsYuNo
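The address-pointer idea can be sketched like this (names and structure are my guesses for illustration, not actual code from the project): a recorded tape of frames with independent read heads that can play, rewind and fast-forward, two heads at once giving two simultaneous views into memory.

```python
class Tape:
    """Recorded memory: a list of frames (detector activations + image)."""
    def __init__(self):
        self.frames = []

    def record(self, frame):
        self.frames.append(frame)

class Head:
    """An address pointer into a Tape; several can run at the same time."""
    def __init__(self, tape, pos=0):
        self.tape, self.pos = tape, pos

    def read(self):
        return self.tape.frames[self.pos]

    def step(self, delta=1):
        # +1 = play, -1 = rewind, +n = fast-forward; clamped to the tape.
        self.pos = max(0, min(len(self.tape.frames) - 1, self.pos + delta))

tape = Tape()
for t in range(10):
    tape.record({"t": t, "detector": "spoon" if t % 2 else "cookie"})

a, b = Head(tape, 0), Head(tape, 9)   # two pointers running together
a.step(3); b.step(-4)                 # fast-forward one, rewind the other
print(a.read()["t"], b.read()["t"])   # 3 5
```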



Title: Re: The last invention.
Post by: korrelan on June 25, 2017, 10:21:24 pm
Gave my bot a serious optics/ camera upgrade today. 

Got rid of the stereo SD web cams and replaced with a matched pair of true HD cams on 3D gimbals.

Upgraded the software etc, to handle the HD resolution.

(http://i.imgur.com/Yrm2V4R.jpg)

 :)

https://www.youtube.com/watch?v=SvPNR1zYAKg

:)
Title: Re: The last invention.
Post by: infurl on June 25, 2017, 10:46:47 pm
I'm working on something comparable at the moment too. I'm trying to build a drone that uses stereo vision. I've just acquired a pair of Runcam2 HD cameras but haven't decided on a gimbal mechanism yet. Can you provide more details about the gear that you are using?
Title: Re: The last invention.
Post by: korrelan on June 25, 2017, 11:49:11 pm
I used to have a really antiquated control system but have recently found a much easier, neater method.

The stereo cameras are out of HD 3D gimbal domes.  I’m using passive baluns to convert the composite video signals to digital so I can pass them through a single cat5 cable, then baluns again to convert back at the DVR end.

The main gimbal is a HD PTZ camera minus the housing using the Pelco D protocols.

The 3D gimbals on the stereo cameras make it easy to adjust and calibrate the alignments for stereo overlap, etc.  They give a wide stereo view with a 30% overlap; the center HD cam has a X30 zoom lens, which provides long range and, in combination with the other cams as periphery, a high-def fovea. 

I’ve tried moving the X,Y axis of the stereo cameras with servos but they were a bug*er to keep aligned, so now I’ve opted for fixed cameras and just move the ‘head’.

I run the cameras through a 16-cam HD DVR with a built-in 485 PTZ controller that came with a full SDK for Windows, so I can easily get live feeds and control the PTZ motors across my network/ web. 

Made it really easy to control the motors and integrate the camera feeds into my AGI.

(http://i.imgur.com/W0NLkRb.jpg)

I also acquired a simple security 485 PTZ joystick controller for testing purposes, you can see it in this image.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 26, 2017, 07:29:18 am
Holy moly wow korrelan, you got a lab? Haha ur so cool. Nice to see some real life/part of you. And good works.
Title: Re: The last invention.
Post by: korrelan on June 26, 2017, 09:55:26 am
I thought all aspiring mad scientists who want to rule the world had a lab.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 26, 2017, 10:39:57 am
Well..............

I have my bedroom...

There's a very very very evil computer in it.
Title: Re: The last invention.
Post by: Zero on June 26, 2017, 11:56:30 am
The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there is a set of rules that simulate biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons, simulated electro-chemical gates regulate synaptic junctions, and simulated dopamine and other compounds are released that regulate the whole system/ GTP. 

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.

I'd like to understand better.

The connectome is a wiring schema that contains information (plus rules), I got it. But the GTP... Is it an engine that permanently performs calculations based on the connectome's content, and stores the results of those calculations back in the connectome? In other words, is the GTP the "game loop" of the system? God, I'd love to see some code. No, I'm not calling you "God". :)

EDIT:

but I'm also interested in seeing how neurons could be used for making decisions and other processes that humans have at their disposal

For example, doing pathfinding (https://en.wikipedia.org/wiki/A*_search_algorithm) with neurons would be awesome.
Title: Re: The last invention.
Post by: korrelan on June 26, 2017, 12:54:32 pm
Erm… I think I get what you’re asking…

There is obviously a software engine that is running the whole 3D simulation, calculating the properties/ states of the neurons, synapses, axons, etc.  The rules are also governed by the simulation engine: how fast neurons can grow, how long axons can grow under their current circumstances. The simulation engine is a wetware middle ground between the biological connectome and the binary computer.  Human brains don’t run on binary, so inside the simulation I can make the neurons use any kind of logic I like; I’m not limited by binary logic inside the simulation.

The connectome is the 3D spatiotemporal model of the brain, I think you got this.

The GTP is the pattern of activation running through the neurons, axons, etc.  The GTP follows the physical structure of the connectome.  So a neuron might fire and send a signal through its axon to 100 dendrites, 20 of these neurons might then fire sending their action potentials to others.  This constant chain reaction of activation within the structure of the connectome is the GTP.

The GTP affects how the connectome develops; memories and experiences are laid down in the connectome by the GTP.

I suppose it does sound confusing lol.  The wetware simulation software is another level of abstraction.

The game analogy is a good one; the binary game computer runs a simulation of a game world.  Anything is possible within the game simulation, we can fly, etc.  In modern games, actions you take inside the game have a cause and effect within the game.  Let's say I throw a grenade… the explosion will result in objects bouncing off each other; the game's simulation engine is modelling the physics inside the game. Objects stay where they are once they have been blown up.

In my software the connectome is the game world, and the GTP would be the physics simulation affecting the game world… permanently changing it depending on what happens in the game.

My approach is just a way to circumnavigate the limitations of binary computers.

When it really starts getting complicated is when you consider that the connectome and GTP can then generate it's own internal 3D world, as part of the prediction and imagination phenomena.

 :)
Title: Re: The last invention.
Post by: Zero on June 26, 2017, 01:30:47 pm
Very neat and understandable answer, thank you.

So, I guess, there's a FIFO queue of signals waiting to be fired, and the GTP pops the one at the top of the queue, does the firing, then pops the next one and so on. But this is only a simplified description of it, since the whole thing is distributed over several computers.

Also, I guess that signals are "saved" for the next frame, then applied all at once when one frame is over. Then in the next frame, a new queue of signals waiting to be fired is constructed and so on.

Something like
Code: [Select]
loop {
    while current queue isn't empty {
        pop signal from current queue
        fire it {
            determine what signals will be fired on next frame
            save them in the next frame queue
        }
    }
    current queue := next frame queue
}
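In runnable form (a guess at the scheme, not korrelan's actual code), the two-queue loop above might look like this, with a toy wiring table standing in for the connectome:

```python
from collections import deque

# Toy wiring: firing neuron i queues a signal to each neuron in targets[i].
targets = {0: [1, 2], 1: [3], 2: [3], 3: []}

def run(seed, frames):
    """Fire all signals in the current frame; their effects only land in
    the next frame's queue, which then becomes current (queue swap)."""
    current = deque(seed)
    history = []
    for _ in range(frames):
        next_frame = deque()
        fired = []
        while current:                        # pop every signal this frame
            n = current.popleft()
            fired.append(n)
            next_frame.extend(targets[n])     # effects deferred one frame
        history.append(fired)
        current = next_frame                  # swap: next frame begins
    return history

print(run([0], 3))  # [[0], [1, 2], [3, 3]]
```

The swap is what keeps the whole network frame-synchronous: nothing fired this frame can retroactively influence the frame it was fired in.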



Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 01:14:54 am
My baby can have 50 Super Duper Ultra HD cameras with infinite zoom, and a full human baby body with full motors. Physics Engines are incredibly great.

At some point korrelan, you're going to want to switch to simulation.
Title: Re: The last invention.
Post by: infurl on June 29, 2017, 01:31:56 am
So, not satisfied with merely simulating reality, now you're attempting to simulate your simulated reality in your head.

*If* you were capable of arithmetic *and* you were to actually perform some of that arithmetic with realistic inputs, you would discover that the simulation you are simulating in your head would require hardware that is not available to you. You really should kickstart a kickstarter campaign to fund it.

We can easily simulate your simulated simulation in our own heads and we know what is going to happen before you do.
Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 02:19:16 am
Korrelan has a Beowulf though.

My brain does have simulators.

You read my latest posts right? I have the most efficient pattern recognizer. I can now go from skipping to major skipping.
Title: Re: The last invention.
Post by: infurl on June 29, 2017, 02:40:05 am
No, you claim to have the most efficient *imaginary* pattern recognizer. You don't actually have anything exceptional, useful or of value. Even your imagination seems impoverished compared to that of someone who actually does things. Get some experience doing things, then and only then might you be worth listening to.

Beowulf clusters were all the rage before multi-core processors became commonplace. The systems (plural) that I'm programming on as we speak have around 40 cores each, terabytes of main memory and SSDs, and petabytes of spinning disks. I laugh at your Beowulf clusters. :P

Title: Re: The last invention.
Post by: korrelan on June 29, 2017, 09:40:39 am
Haha… my project thread has become quite controversial lol.

@Lock

Quote
At some point korrelan, you're going to want to switch to simulation.

Afraid not…

Quote
Korrelan has a Beowulf though.

As Infurl said… the topology is an old one but it still has its advantages, most modern ‘super’ computers are still built using a similar topology for good reasons. 

The term ‘Beowulf cluster’ is old (like me lol). The ‘Beowulf’ is just a collection of multi-core machines on a high-speed network; anyone can put one together. Building the cluster is the easy part; writing the software to maximise the cluster for a specific purpose is where it gets interesting.

40/ 70 core machines have their uses, scientific modelling, graphics rendering etc, but the task has to be easily sub-divided into an efficient threading model.  Writing the code for this kind of threading model can be/ is a pain and very time consuming.  You have to have a firm grasp of what the software needs to achieve and invest the money and time perfecting the software to realise the benefits from a multi-threaded 40-core CPU. There are also other limitations that can cause bottlenecks depending on the system's memory, SSD bandwidths, etc. and the type of model you are running.

I’m back engineering the human brain… you can see my problem?  I had no idea how the system would evolve.

One of my problems was scalability. I need to be able to eventually utilize 10 x 1000’s of cores.  Besides the cost lol, it will be quite a while before this type of machine becomes available… hence the cluster.  Using the parallel MPI I’ve written I can utilise as many machines as I can get my mitts on.  I recently tested the MPI on a room of 200 x 4-core i7’s at a local college… that’s 800 cores; ok, you have to consider network latency, etc but the model performed as expected.  So rather than invest my time writing a multi-threaded model, writing the code to ‘cluster’ separate machines made more sense.  I'm working on a similar MPI to utilize machines on the Internet, so anyone can donate processor time.
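The gist of carving one update across many cores/ machines can be sketched with Python's standard multiprocessing (a toy stand-in for illustration only; the real system is a custom MPI over networked machines, and the update function here is invented):

```python
from multiprocessing import Pool

def update_partition(chunk):
    """Stand-in for one machine's work: advance its slice of neurons.
    The update rule here is a made-up toy (decay plus constant input)."""
    return [v * 0.9 + 1.0 for v in chunk]

def parallel_step(state, workers=4):
    """Split the neuron state into one chunk per worker, update the
    chunks in parallel, then stitch the results back together."""
    size = (len(state) + workers - 1) // workers
    chunks = [state[i:i + size] for i in range(0, len(state), size)]
    with Pool(workers) as pool:
        results = pool.map(update_partition, chunks)
    return [v for chunk in results for v in chunk]

if __name__ == "__main__":
    state = [0.0] * 16
    state = parallel_step(state)
    print(state[:4])  # [1.0, 1.0, 1.0, 1.0]
```

The same shape scales from cores to machines: only the transport changes (shared memory here, sockets/ MPI across a cluster), plus the latency considerations mentioned above.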

Cost is another major factor, 30k for a 40-core system as opposed to 2k for my 24-core cluster… I can easily and cheaply expand. I have another six four-core machines I can switch on when required… I have to get real-time HD video feeds, stereo microphone audio streams, accelerometer and gyro streams into the model using ring buffers etc, then there’s the servo outputs and tactile/ joint torque feedbacks… one machine, no matter how many cores, would just grind to a halt.

Redundancy is also important; I can easily replace machines without slowing my research progress... time is precious.

The list goes on lol.

Both types of systems have their advantages… I still feel I made the right choice.

 :)

Edit: These are some of the machines I use…

(http://i.imgur.com/2aQaJgu.jpg)

And these have just been retired…

(http://i.imgur.com/qKoNpJW.jpg)

You can easily see the effects of long term exposure to a wetware model… hehe.

 ;D
Title: Re: The last invention.
Post by: keghn on June 29, 2017, 03:00:45 pm
Beowulf was all the rage ten years ago. Hooking a lot of computers together with ethernet cards. Clusters, they called them.
  Which has given rise to how all supercomputers of today are built: a repetition of off-the-shelf parts
hooked together on a massive scale. But I believe the ethernet cable has been removed; a direct PCI
bus and routing are used today.
 The small-scale Raspberry Pi 3 supercomputers are still connected by ethernet. USB is still too difficult for connecting a lot of things together, because big business is protecting it.


Clustering A Lot Of Raspberry Pi Zeros: 
http://hackaday.com/2016/09/18/clustering-a-lot-of-raspberry-pi-zeros/


Raspberry Pi 3 Super Computing Cluster Part 1 - Hardware List and Assembly: 
https://www.youtube.com/watch?v=KJKhRLKXr-Q


Title: Re: The last invention.
Post by: Art on June 29, 2017, 03:38:43 pm
Lock, Why not take a look at Bubble Memory...I heard it was going to be "The Thing" in memory.
(or was that back in the 70's?)...hmmm.

I have to agree, get what's in your head out where you can manipulate it and see the results of actual experiments.
Don't say you can do them in your head, because one can't do the severe amount of multitasking that such simulations would require.

We're not against you...but...we're not quite with you either...if you get the drift....

Title: Re: The last invention.
Post by: korrelan on June 29, 2017, 08:36:04 pm
Quote
My baby can have 50 Super Duper Ultra HD cameras with infinite zoom, and a full human baby body with full motors. Physics Engines are incredibly great.

Sorry lock but... no it can't.

You were previously talking about an AI existing inside a 3D simulation, assigning indexes/ labels to objects and people. Why would such an AI need HD cameras? If you are going to simulate a 3D world as rich as the one we occupy, which you need to do to provide for an AI to reach human levels of intellect, and then have your AI use HD cameras to view that simulated world... what's the point?  It might as well just exist in the 'real' world.

We all live in a simulated world lock.  You think the things you see and feel are real? Trust me they are not.  Without your sensory organs you would be in a very dark place... you exist in the 3D imaginary world your brain creates/ simulates for you.

If we are to create a true AGI then it at least needs to experience and understand the same reality/ level of abstraction that we think we occupy.

 :)


Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 09:00:23 pm
Haha it can't have 50 HD cameras if it uses *secret* ! Correct! However it can if I wanted to go dat way, simply it's in meh computer, and still gives me loads of capability ex. full human baby body/motors/sensors etccccc (big list).

Yep I know it well that what I see is a raytrace of a 3D-ized replica world (same for the Imagination Generator). I've seen creatures walk behind doors in sleep paralysis. And if we seen our pure camera/etc images right from the cords it'd still be a freaking machine "viewing" "from its eyes". My simulated baby human AI can see right from the "cords".

Will you ever get a really real baby human to implant your code into? Even the ones being made for lots of $$$ (let alone custom built) are not close to what you can fine-tune in simulation, and clone.
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 02:34:36 pm
Rich Reality..........

..........................Goes with Rich body.

Literally..................................... $$$

3D Simulation:
https://www.youtube.com/watch?v=klLI5skuwj4&t=12s
Title: Re: The last invention.
Post by: korrelan on June 30, 2017, 04:07:02 pm
Looks like academia is slowly catching up with me. lol.

http://aidreams.co.uk/forum/index.php?topic=11602.msg45042#msg45042

As a general rule I’ve found with data representation that the more dimensions you can encode the data over the better.  My simulations utilise hundreds of dimensions… that’s why parallel searches are so fast… you're just searching the data from the perspective/ dimension of the search criteria.

http://www.iflscience.com/brain/researchers-reveal-the-multidimensional-universe-of-the-brain/
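A toy illustration of searching from the dimension of the criteria (my own example, nothing like the actual system): items are encoded over many feature dimensions, and a query scores only the dimensions it supplies, ignoring all the rest.

```python
# Each item is encoded over several feature dimensions; a real system
# would use hundreds. A search supplies weights for only the dimensions
# it cares about and never touches the others.
items = {
    "apple": {"round": 1.0, "green": 0.8, "fruit": 1.0, "loud": 0.0},
    "drum":  {"round": 1.0, "green": 0.0, "fruit": 0.0, "loud": 1.0},
    "lime":  {"round": 0.9, "green": 1.0, "fruit": 1.0, "loud": 0.0},
}

def search(query):
    """Return the best item, scored only along the query's dimensions."""
    def score(feats):
        return sum(feats.get(dim, 0.0) * w for dim, w in query.items())
    return max(items, key=lambda name: score(items[name]))

print(search({"green": 1.0, "fruit": 1.0}))  # lime
print(search({"loud": 1.0}))                 # drum
```

Each query effectively views the same data from a different angle, which is why many such searches can run independently and in parallel.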

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 05:15:37 pm
Hhuuuuoooggghhhhhhhhhhhhh !

Do you realize what you have said!?

That's a very great recognizer!.................do you use it!?.....................put X object in real/3D simulation water/weigher/etc and get numbers for classification and recognition!

Only problem is recognizing parts of the object.................NO PROBLEM --- CUT UP OBJECT MEASURE IT AND BIND BACK TOGETHER IN SIMULATION.

MEAWHAHAHA!

this is one of those "really" moments... has to sink in..........

so much you can do with particles.........particles do a lot don't they :) - lots of techniques they do in the daytime

I have all the information on this and mine hierarchied, if you want it, ask.

Talking about "sand castles" (in the article), I was having a problem with my PR recognizing unseen objects, like sand or a blanket changing shape. I needed to generate a small number on the fly and have it search other small numbers. It's also much easier than tagging all angles/areas/distances/etc on each 3D simulated object.
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 11:43:58 pm
Can someone knowledgeable/experienced in pattern recognizers tell me if, if I ran a CNN on 1 skylake i7 OR even a 1060 GPU, having at most 2,000 convolved images stored, would it really be slow? HOW slow? I thought CNNs were supposed to be fast and have low storage by shrinking images by convolzion and having the input not only confront a bunch of shrunken jeepers but also easily direct to the correct image stored for selection by the convolzion technique, sorta like a kill-3-birds-stone all in one. The hell. It's supposed to be hell fast on 1 i7 with 0 parallelism. Instead of matching full size images to full size images.
Title: Re: The last invention.
Post by: korrelan on July 01, 2017, 10:26:09 am
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

 :)
Title: Re: The last invention.
Post by: Zero on July 01, 2017, 01:13:33 pm
Happy 200th  :D
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 02:23:23 pm
That doesn't answer my question though...
Title: Re: The last invention.
Post by: korrelan on July 01, 2017, 02:37:00 pm
That link was to a simple CNN that runs just in your browser.  So all the processing is local... on your machine.

The question is impossible to answer in depth.  There are too many variables and unknowns.  It depends on the programmer, the language used, the algorithms used, the images, the types of convolution filter, etc.

If the CNN has to run quickly on a single machine then the programmer/ designer would have to be skilled enough to... find a way.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 02:59:52 pm
Korrelan, all I need to know is, if I want to run a CNN for my AI, that saves 4 images per second, and searches for 4 matches per second, that after 30 minutes has *400 images stored...will it be real-time or 22 times faster than real-time?

Otherwise I'm looking at having my AI stare at simulated 3D objects, cut out sections, measure their weight/volume/height/etc in weighers/water buckets/etc, search for numbers, then carry on.
Title: Re: The last invention.
Post by: keghn on July 01, 2017, 03:07:44 pm
YOLO: Real-Time Object Detection. A cnn that does 30 frames per second with GPU's : 

https://pjreddie.com/darknet/yolo/
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 08:09:57 pm
Ah, Darknet, I love these dark hidden places. Its like black hat underworlds. The deeper you go the harder it gets.

Well when the time comes, I have 3 ways to go now. All 3 may be sufficient, but I may go with a easy way to be coded if I can't get my workers to implant a CNN (it could be easier (and cheaper) to do that though).
Title: Re: The last invention.
Post by: keghn on July 01, 2017, 11:59:54 pm
Yolo is one of the fastest CNN detectors. Like all NN detectors it does not really tell you where the object is
in the image. 
  But with some hacking you can get it. They do not tell which direction objects are moving, or if they are
upside down or right side up, or if one has rotated left compared to the last image.
 In an AGI, a CNN will be needed for reviewing video memory. It will need to work 1000 times faster when
scanning through video memory. And if the AGI is viewing 10 files at the same time it will need a dedicated CNN
for each video file being used. The AGI will use an internal image editor and will need a CNN to identify a target to
cut out and paste into a new video file. These new videos are tried out by trial and error; it can tell if
the new imaginary build will work or fail. If it can tell it will fail, it will be outright deleted. 
 If it has a real hard time telling if it will work, it will become a dream. If nothing of real interest, it will be forgotten.
 If the build works it will hit you like a great idea from out of the blue, eureka!
 
Title: Re: The last invention.
Post by: LOCKSUIT on July 02, 2017, 12:24:31 am
I understand what you're saying. The image editor must be 3D though. You make real-world objects 3D in the mind, then it plays around with the 3D objects. If the configuration looks good, it comes past your attention system as a eureka, else a dream, else it's forgotten. The objects it plays with are things you sense, e.g. "how can I fix this", or "fish on the moon". Once it looks right, you will then eureka the way it looks (the fix or the fish on the moon), and then know the actions to make the fix, or will simply see a fish on the moon (how you see stories).
Title: Re: The last invention.
Post by: keghn on July 02, 2017, 03:29:05 am
LSD effects are not going to happen at the start with an empty soul. There is no data to work with. A life form must live a life of real experience before it can be hacked. Big data.

Then start simple, with completely randomly selected instructions and tamed values: a simple program to be manipulated later on. Start simple by trying things out to see what works. When something works it is remembered. If it crashes it is deleted and a back step takes place.

A script language such as bash, plus a script that can run and control a photo editor, may be the way in; maybe something like the command-line video editor MLT and melt:

https://en.wikipedia.org/wiki/Media_Lovin%27_Toolkit

Blender might have scripting commands too?

A script program can be started automatically. Then have a program evolve it from a dead-simple starter program by adding or editing in commands and values. If the program works it is saved; then clones are made, modified and tested. The script programs that fail are also saved but labelled as duds, so it learns from its mistakes. That covers the correct use of script instructions; then comes a second test of it being used on real video.

Video Editing in the Shell - MLT melt - FFMPEG - Linux 1: 
https://www.youtube.com/watch?v=r4PWv0emS7A 

Title: Re: The last invention.
Post by: elpidiovaldez5 on July 10, 2017, 03:29:01 am
Hi again !

I have just been looking at your work after replying to one of your comments.  It is awesome stuff.  I hope to keep in touch and see how it progresses.

Could I ask what software you use as the audio back-end for your phoneme recognition?  I would like to do a project on audio, but have never faced up to learning how to capture the audio and do the spectral analysis.

Title: Re: The last invention.
Post by: korrelan on July 10, 2017, 11:57:23 pm
The software is written from scratch using the Win API interface.

The audio stream is sampled using the waveInAddBuffer function in the Winmm.dll library.  I then pass the sample through an FFT to extract the frequencies and several custom filters; then render the spectrograph.  The samples also run through a custom loop buffer that records in ‘real’ time but only sends frames of data through a TCP pipe to the main application when called.  The callback and loop buffer keep the two applications synchronised.
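For anyone wanting to try the same frame → FFT → spectrograph stage without the Win API, here is a minimal Python/ NumPy sketch of the general technique (an illustrative stand-in, not korrelan's actual code; the frame size and hop are arbitrary choices):

```python
import numpy as np

def spectrogram(samples, frame_size=256, hop=128):
    """Split an audio buffer into overlapping windowed frames and FFT each
    one -- the same capture-buffer -> FFT -> spectrograph idea as above."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size] * window
        # rfft keeps only the non-negative frequency bins of a real signal
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)        # shape: (n_frames, frame_size // 2 + 1)

# A 440 Hz tone sampled at 8 kHz should peak in the FFT bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec[0].argmax()
print(peak_bin * sr / 256)   # -> 437.5, the centre of the bin nearest 440 Hz
```

Each row of the result is one spectrograph column; rendering it as an image gives the familiar time/ frequency plot.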

 :)
Title: Re: The last invention.
Post by: korrelan on August 01, 2017, 10:32:53 am
"Woe to you, oh Earth and sea, for the Devil sends the Beast with wrath
Because he knows the time is short
Let him who hath understanding reckon the number of the Beast
For it is a human number, its number is six hundred and sixty six"

Crank up the volume...

https://www.youtube.com/watch?v=_3Vynew5mrw

 :D

Edit... I'd reached 666 posts lol
Title: Re: The last invention.
Post by: LOCKSUIT on August 01, 2017, 11:29:06 am
I don't think I want to know what korrelan has been up to
Title: Re: The last invention.
Post by: selimchehimi on August 01, 2017, 11:38:58 am
Wooow that seems really interesting, I love all the work that you put in to solve AI. I will definitely take the time to check all your videos  ;)
Title: Re: The last invention.
Post by: korrelan on August 26, 2017, 02:36:07 pm
https://www.youtube.com/watch?v=BDIvV9w7NtU

Hi all… I’ve been taking a rest from the development of my AGI for a few weeks.

For anyone interested… There is a very good reason why I do this periodically… if you spend too much time focusing on one topic your brain begins to work against you.  One of our main learning mechanisms is repetition; your brain likes to translate common thought processes/ experiences into automatic sub-conscious functions (like riding a bike or driving to work).  Although this is great for most mental operations it’s obviously a major disadvantage when mental flexibility is required regarding a problem space you are trying to solve.  If you concentrate too long on a topic your ideas and mental flexibility will… dry up; you will literally get stuck in a rut (writer's block)... an ingrained pattern of thinking as the commonly used synaptic pathways that represent your… I’ll leave for another post.

Ok back to the video. 

Up to this point I’ve been using random/ trig spatial distribution to set the initial vectors of the connectome's neurons in the various cortical layers.  This has been adequate so far, but when the connectome expands through simulated growth and neurogenesis inserts new neurons in high activity areas to enhance the ‘thought’ facet resolution, I’ve been getting clashes and general spatial problems because of the initial uneven distribution of neurons on a layer.

This has been solved by using the Fibonacci sequence to set the initial equally spaced neuron vectors within the connectome's spherical volume.  At the beginning of the video you can see the equal spatial distribution on the connectome's outer layers mirroring the common Fibonacci pattern.
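For anyone wanting to replicate the even spacing, the usual trick is the golden-angle (Fibonacci) lattice; a minimal Python sketch of the general technique (not korrelan's actual implementation):

```python
import math

def fibonacci_sphere(n):
    """Place n points near-uniformly on a unit sphere using the golden angle.
    A standard way to get an even initial spacing on a spherical layer."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))   # ~2.39996 radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n       # even steps down the axis
        r = math.sqrt(1.0 - z * z)          # radius of the circle at height z
        theta = golden_angle * i            # spiral round the axis
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

pts = fibonacci_sphere(500)
# every point lies exactly on the unit sphere
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts)
```

Because consecutive points are rotated by the golden angle, no two points ever line up, which is what produces the even spiral pattern visible on the outer layers.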

A small update but this solves loads of problems.

 :)
Title: Re: The last invention.
Post by: infurl on August 27, 2017, 12:15:35 am
Quote
For anyone interested… There is a very good reason why I do this periodically… if you spend too much time focusing on one topic your brain begins to work against you.  One of our main learning mechanisms is repetition; your brain likes to translate common thought processes/ experiences into automatic sub-conscious functions (like riding a bike or driving to work).  Although this is great for most mental operations it’s obviously a major disadvantage when mental flexibility is required regarding a problem space you are trying to solve.  If you concentrate too long on a topic your ideas and mental flexibility will… dry up; you will literally get stuck in a rut (writer's block)... an ingrained pattern of thinking as the commonly used synaptic pathways that represent your… I’ll leave for another post.

You're on the right track but you're overthinking overthinking. It's precisely your sub-conscious functions and automatic thought processes that you should be trying to exploit to get the work done. Most people have had the experience of going to sleep with a problem and waking up with a solution. While I do change major tasks periodically, typically every four to six weeks, it's because of mental fatigue rather than getting into a rut. When it comes to problem-solving you can't beat good old-fashioned daydreaming (interspersed with intense periods of good old-fashioned hard work of course). But in the end, it's whatever process works for you.
Title: Re: The last invention.
Post by: keghn on August 27, 2017, 01:28:25 am
Good work as always @Korrelan. Burn-out is a good thing and can be a bad thing; it keeps a person from getting into a never-ending behaviour loop. The way I see it the neurons store pattern loops, like RAM storing a behaviour routine. But under the rules of the life of a cell it will not be over-used, and will complain if damaged. It will have a purpose or be deleted or killed off, and it needs time to rest. Like the rules of a trade union. But if the behaviour routine is in a silicon chip it is dead and lifeless. So in my AGI I have a program that simulates it having a fake life, because a machine needs to have feelings.

I think it is important to note that the neuron bundles that hold behaviour patterns have no idea what they have stored within them. They only know that they have purpose and belong to a bigger whole.

So I use different ways of getting the same thing done, using different neurons to do the same job.

Title: Re: The last invention.
Post by: LOCKSUIT on August 27, 2017, 04:12:40 am
You may have to rest, but for me I don't, and it pays me off by working every day...all day ! I am happy here.
Title: Re: The last invention.
Post by: korrelan on August 27, 2017, 12:38:57 pm
@ Infurl

Quote
It's precisely your sub-conscious functions and automatic thought processes that you should be trying to exploit to get the work done.

For common tasks that have to be repeated I would agree.  Unfortunately back engineering the human nervous system doesn’t seem to follow this curve lol; when covering new ground I have to keep my thoughts as flexible as possible, especially at my age lol.  But as you said, we each have to do what works best for our own mental machinery.

Quote
Most people have had the experience of going to sleep with a problem and waking up with a solution.

I’ve done some research on this.  It appears to be a commonality between a problem's facets and the facets belonging to a known solution to a similar problem's pattern.  It’s a pattern alignment in the GTP; the problem's facets will trigger the same/ similar facets in the known solution, leading to a link or similarity at that stage in the solution.

Quote
When it comes to problem-solving you can't beat good old-fashioned daydreaming

I agree; daydreaming/ dreaming leverages the above system to align/ match facets and solve problems. Taking a rest from the problem and doing something completely unrelated can often lead to a ‘Eureka’ moment… especially if the new task shares many similar experience/ knowledge/ skills facets with the initial problem.

@Keghn

Quote
Burn out is a good thing and can be a bad thing.

Hmmm… I'm pretty sure ‘burn out’ in any kind of machine is a bad thing lol.  It sounds like you are working along bio-chemical lines of research too. Cool.

@Lock

Quote
You may have to rest, but for me I don't,

We are all different and there are many variables including life’s general pressures, the problem space and the level of concentration required etc.  I know my limitations from experience and try to optimize my mental abilities; you are thirty odd years younger than me and still enjoy the natural mental flexibility of youth… at the expense of focus/ wisdom of course lol.  I’m pleased you are happy though.

 :)
Title: Re: The last invention.
Post by: keghn on August 27, 2017, 07:05:51 pm
@Korrelan. There are many types of burn-out. I do not really mean a killer burn-out from long work and having a boss like Trump. A system, or a small sub-part, records its time at a job; when that reaches a high value it looks for another job with a low burn-out value. The burn-out value is counted down over time, and the down-count can be fast.
Title: Re: The last invention.
Post by: keghn on August 29, 2017, 02:00:44 am
In the human body a muscle can get tired, and so can the neurons that hold the program to move the muscle. The leg muscles are big and take a lot of stress, so the brain neurons that hold any leg movement must be copied a couple of times; while one neuron program is running the others are resting up. The same goes for any action that is repetitive, mentally or physically.
Title: Re: The last invention.
Post by: LOCKSUIT on August 29, 2017, 02:42:43 am
So you're saying if my leg gets tired I may choose to use another leg (no pun intended). While on the other hand (no pun intended), my thought process may get tired, BUT, as you said, that doesn't mean I'll switch the thought process topic, rather, my brain will sub-consciously (no pun intended) transfer it over to awake neurons with more energy and continue with "this leg" (no pun intended), continuing thinking about how I can use my leg to open a jar of nuts, 5, hours, straight, then finally give up and choose to think about those peanuts on the table.
Title: Re: The last invention.
Post by: Art on August 29, 2017, 03:23:04 am
@ Lock.

I think he means that eventually everyone simply needs to take a rest.

See, you're already delirious! Take five... minutes, hours, days... whatever it takes for you to "recharge / regroup". O0
Title: Re: The last invention.
Post by: keghn on August 29, 2017, 03:35:52 am
Well, basically yes. But only if the reward is worth it to train other neurons. Cells in the body are living things, not machines. They need to rest, feed and repair, and they need a JOB that has purpose. Just like in any swarm or group the dead-beats are removed, or they fight back and go cancerous, under the rules of swarm logic and swarm intelligence. An AGI will not have this, but in order to make human-like intelligence in a machine we get to fake it. And that will be good enough.
Title: Re: The last invention.
Post by: ranch vermin on October 05, 2017, 03:16:37 pm
Hello, crazy ranch is back to be with the motley crue here.  Your network looks very neat and has lots of features; it looks like something to be proud of.

Your robo eye rig was grippy! (towards the beginning of the thread.)

A more sentient network is going to need more than 340k synapses; I read that you're looking into maybe doing some GPU code.
A GPU would give you a lot more synapses and neurons at once, for more resolution, or more work on the low res.  I use OpenCL, but if you don't know DirectX I think you can maybe use something that does the GPU conversion for you (offloads your iteration loops).  I thought I saw one in Python, but I've been out of the game for a while.

There's newer stuff these days from when I was doing it, but my GTX 980 is better than the $300 cards with a bigger "number" these days, with mine having double the cores!  It's all cheatery and phony in the computing industry.

Title: Re: The last invention.
Post by: LOCKSUIT on October 06, 2017, 12:55:55 pm
You know ASI? ASI can give you something very powerful in a small package. ASI could run a powerful AI on just my PC.
Title: Re: The last invention.
Post by: raegoen on October 06, 2017, 09:01:25 pm
Hi Korrelan, your work is very fascinating, I have read your blog and it seems you've made incredible strides towards AGI! Are you planning on releasing any part of your engine/emulator/NN for others to build upon? I know you are writing a book, are you planning on putting a lot more detail and mathematical explanations of your model in there? Thanks, looking forward to future updates!
Title: Re: The last invention.
Post by: korrelan on October 07, 2017, 01:16:48 am
@Ranch

Good to see you back. 

Yeah! I will be looking into GPU eventually but ATM I’m messing around with machine consciousness.  I think I have a GTX 980 somewhere… I’ll check.  Currently I’m experimenting to see how much ‘intelligence’ I can squeeze out of a limited number of neurons/ synapses.

I’ll be posting some vids of machines screaming in pain and confusion soon lol.

@Lock

ASI = Artificial Super Intelligence? I’m not sure what you are saying lol.

@ Raegoen

Glad you like my work so far.  I’m writing two books; one on human neural function/ theory and a second on actually creating/ coding machine consciousness/ intelligence.  Both have around 200 odd pages ATM… I really need to pare them down lol.

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 07, 2017, 09:10:59 am
I suppose the fewer neurons you need the better.
I'm working on something a little secretive in the performance area of slower computers, and I've nearly got it going.
It's very exciting. :)
Title: Re: The last invention.
Post by: Art on October 10, 2017, 01:02:57 pm

Quote
Glad you like my work so far.  I’m writing two books; one on human neural function/ theory and a second on actually creating/ coding machine consciousness/ intelligence.  Both have around 200 odd pages ATM… I really need to pare them down lol.

 :)

Wouldn't 200 pages actually be EVEN?  ;D JK....

You're going to offer them as ebooks for those nice, rainy day reads right?
Good luck with them!
Title: Re: The last invention.
Post by: LOCKSUIT on October 12, 2017, 08:35:14 am
E-books :)
I love free books.

So korrelan tell us what you foresee as the steps you will take to reach an AGI. What level will come forth first? Then, what? And then what? Are you first cracking Language??
Title: Re: The last invention.
Post by: korrelan on October 15, 2017, 11:56:02 pm
I've been messing with the vision modules on my AGI.  This gave me an idea...

Anyone notice anything weird about this picture?

(https://i.imgur.com/NMYkJOK.jpg)

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 16, 2017, 12:27:58 am
what is it, no dif?

<edit>  looks like good old neural net sensor grain based rotatah to me! </edit>
Title: Re: The last invention.
Post by: korrelan on October 16, 2017, 01:03:49 am
Yeah! The image is an orientation/ depth map.  I noticed my AGI was mapping some colours to distances, which seemed like a strange thing to do. This is a representation of how the AGI is 'experiencing' colours... and it does look to me as though it has a kind of 3D depth. I wondered if anyone else could see the effect or if it was just my eyes playing tricks on me.

It might be my colour blindness affecting me; I see the blues as deeper/ farther away than the greens.

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 16, 2017, 05:53:37 pm
Oh! I thought it was an orientation map. Well it is... but I thought it was an orientation map to rotate the network to the right spin when doing ID'ing. So it's experiencing colours.
Title: Re: The last invention.
Post by: WriterOfMinds on October 16, 2017, 06:07:32 pm
I wonder if the appearance of depth for human viewers is a psychological artifact of looking at terrestrial maps. Maybe the blue areas come across as "water" and the green ones are "hills."
Title: Re: The last invention.
Post by: korrelan on October 16, 2017, 09:37:06 pm
@ Ranch

You are correct.  The same/ similar topology is generated as a rotational/ orientation map.  Because my AGI’s visual cortex is based on the mammalian ocular schema, colour is mapped in a very similar way.

@WOM

You may be correct about the deep water, high green land masses.

I've asked several people if they experience the same illusion of depth when they view the image and the general consensus is… no lol.  I actually see a 10mm-ish difference in the depth field between the green and blue.  For me it’s like a shallow hologram… this is obviously a phenomenon from my colour blindness and I've never before experienced this type of optical illusion.  It’s just so pronounced… it's deff weird.

It just struck me that the AGI was mapping colours to distance and when I viewed the back engineered map, I too saw depth lol… I think I'm spending too much time in front of my monitors lol.

 :)

Edit: In hindsight the mapping of colour to distance kind of makes sense.  The machine has been observing many different images including ‘real life’.  So perhaps blue skies, water/ oceans = distant and green grass etc in the foreground = closer (as WOM suggested).  It could be a side effect of ‘seeing’ an unbalanced/ biased set of visual experiences at this point in its development.

 :)
Title: Re: The last invention.
Post by: Art on October 16, 2017, 10:09:23 pm
I too saw the green as hills in a topography and blues as a depth or water. I'm color deficient as well.
Not color blind, because I see colors, but there is a confusion between shades of colors like olive green etc. and tans or browns.
I have a friend who is Green / Red color "blind". He drives but knows that the green light is always on the bottom.

Anyhow...that's my read.
Title: Re: The last invention.
Post by: korrelan on October 17, 2017, 02:32:08 pm
Solved it... it's a form of chromostereopsis... who knew lol.

https://en.wikipedia.org/wiki/Chromostereopsis

 :)
Title: Re: The last invention.
Post by: korrelan on November 10, 2017, 11:58:50 am
https://www.youtube.com/watch?v=5gZS56JUFk0
Title: Re: The last invention.
Post by: ranch vermin on November 10, 2017, 12:14:00 pm
is that just a filter, or does it take learning to better the result?
Title: Re: The last invention.
Post by: korrelan on November 14, 2017, 11:43:54 pm
I just read your question and thought… what’s he on about? The video is on the previous page and I’d not noticed.  I was rushing getting ready to leave for the first call of my working day, that’s not the video I thought I had posted lol. That was for Yot’s thread on outlines.

I’ll explain it here anyway… It uses two kinds of machine learning. 

The first is an adaptive convolutional filter I designed that learns the best parameters to apply to each section/ type of image. It automatically adjusts for brightness, saturation, clarity/ sharpness, resolution, etc.  Its job is to learn/ adapt the best method for extracting a template that the pixel bot (PB) can follow.

The second machine learning technique is the pixel bot.  This little guy is a simple bot with eight pixel sensors around its perimeter.  I have taught/ trained it to follow and trace outlines. 

So… the pixel bot learns to follow outlines and the adaptive convolution filter learns to extract the most information from the image.  If the PB manages to create a full outline then it notifies the filter that this is a good convolution combination for this type of image… and a box is drawn around the shape on the original image.

Notice that the outline is stabilized in the left window.  As soon as the mouse pointer enters a shape the PB converts the object into a set of vector coordinates, which are then easily centred and stabilized.

That’s how the system manages to easily extract numbers from captures etc… it learns to extract letters/ numbers from the confusion.

You can see the PB learning process here…

https://www.youtube.com/watch?v=WH9Bc4aF6Nc

I was taking a break from my AGI and thinking about Yot’s outline routines.

 :)
Title: Re: The last invention.
Post by: ranch vermin on November 15, 2017, 07:53:42 am
I would've thought it would have involved a lot of noise before it's learnt; how come it's either off or an exact outline?

If I had a random convolution filter, it would be all kinds of blurs and differences.
Title: Re: The last invention.
Post by: ivan.moony on November 15, 2017, 07:59:02 am
Korrelan, very interesting. I think this could be how humans detect edges (they move eye focus around an object). It seems that neural nets can do more than I thought.
Title: Re: The last invention.
Post by: korrelan on November 15, 2017, 10:28:35 am
Quote
I wouldve thought it would have involved alot of noise before its learnt,  how come its only off or an exact outline.

I have manually trained the pixel bot (PB) to follow outlines; because I've used my judgement to trace the lines as best I can, the PB has learnt to do the same, and this includes jagged outlines. It will run along a fairly jagged edge and produce a straight line quite well; it could do with more training though, as it sometimes gets confused on certain patterns of pixels lol.

Quote
If I had a random convolution filter, it would be all kinds of blurs and differences.

The convolution filter is indeed based on the human eye and is localised around the mouse pointer.  It affects small regions individually not the whole image.  It does not use a random filter; again I have manually trained the filter to best extract the required detail.

Quote
Korrelan, very interesting. I think this could be how humans detect edges (they move eye focus around an object)

This stems from my AGI research, I use a model of the mammalian visual cortex which learns to detect lines and gradients automatically; this is just a very simplified version used just for outlines.

Quote
It seems that neural nets can do more than I thought.

I tried to keep the project simple so I’ve not used any kind of neural net; it’s all good old-fashioned lookup lists.  So the PB for example just stores a list of perimeter sensor readings along with the direction I’ve told it to move.  The PB can then easily and quickly find the closest match in the list and move in the relevant direction.

The system is basically just reproducing my skills at following an outline.
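A toy version of that lookup-list scheme might look like this in Python (the sensor patterns and directions are hypothetical, invented purely for illustration; korrelan's trained list would be far larger):

```python
def closest_move(sensors, memory):
    """Pick the trained direction whose stored 8-sensor pattern best matches
    the current reading (fewest mismatched sensors) -- no neural net, just a list."""
    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))
    best = min(memory, key=lambda entry: mismatches(entry[0], sensors))
    return best[1]

# Hypothetical training pairs: (perimeter reading, direction taught by hand).
# Sensors run clockwise from the top; 1 = an edge pixel under that sensor.
memory = [
    ((0, 0, 1, 1, 1, 0, 0, 0), "right"),   # edge along the lower-right
    ((1, 1, 1, 0, 0, 0, 0, 0), "up"),      # edge along the top
    ((0, 0, 0, 0, 1, 1, 1, 0), "down"),    # edge along the bottom
]

# A reading one sensor away from the "right" pattern still maps to "right".
print(closest_move((0, 0, 1, 1, 0, 0, 0, 0), memory))   # -> right
```

The nearest-match step is what gives the bot tolerance to slightly jagged edges: a reading that was never seen exactly still falls back on the closest trained pattern.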

I still disagree that using outlines is a good method for recognising objects; both scale and rotational invariance can be accounted for, but occlusion cannot.  Feature detection is still by far the best method.

 :)
Title: Re: The last invention.
Post by: korrelan on November 15, 2017, 12:14:18 pm
This early low resolution test example shows the use of a three layer neural net based on the mammalian visual cortex.  The four angled coloured lines in the centre of the windows show the basic colours and orientations the system is going to learn.  The left window shows the current learned orientation map; the right window is the image it’s learning from.

https://www.youtube.com/watch?v=MuMXGzPZ2Nk

The first half of the vid shows the system being subjected to just diagonal lines and the orientation map automatically learns to recognise these lines.  You can see the self organising distribution in the neural layer only using the two diagonal colours.

At 0:17 you can see the output from the LGN running through the map and only the diagonal lines are detected.

At 0:30 a picture of faces is loaded and again because the system was trained on just diagonal lines it only detects diagonal lines in the face image.

At 0:50 I start re-training the system on the faces.  You can see the neural plasticity of my system at work as the orientation map slowly evolves to incorporate the horizontal and vertical lines it’s experiencing in the face image.

At 1:23 I stop the training, the self organising nature of the system has built a map that best fits/ represents the input data.  The black dots on the left window represent the locations of output layer pyramid cells; each cell is surrounded by a receptive field tuned to recognise a particular feature in the image.

The rest of the vid just shows how the system is now interpreting the face image with all four orientations learned.  Obviously my current system uses hundreds of orientations and gradient combinations to detect features in the visual LGN output. 

If the system was only subjected to diagonal lines again it would very slowly forget how to recognise the vert/ horiz lines.  This plasticity effect falls off as the neurons mature so the system eventually reaches a balanced representation of the experienced input, with a patch of neurons able to detect every facet of the incoming data.

The advantage of this approach is that each patch of neurons in the left window will only fire when a particular pattern of input is supplied.  The neuron patches or image facets never move and so can be easily linked, searched, etc to figure out what the system is looking at.
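A crude stand-in for such orientation-tuned patches can be built from a small bank of hand-made oriented kernels (hard-coded here rather than learned/ self-organised as in my system, so purely an illustration of orientation selectivity):

```python
import numpy as np

def oriented_kernel(angle, size=7):
    """A hand-made orientation-tuned receptive field: a Gaussian-weighted
    edge detector whose preferred edge normal points along `angle` (radians)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # the sign of the projection onto the preferred axis gives the two flanks
    proj = xs * np.cos(angle) + ys * np.sin(angle)
    k = np.sign(proj) * np.exp(-(xs**2 + ys**2) / (2.0 * half**2))
    return k - k.mean()

def best_orientation(patch, n_orientations=4):
    """Index of the orientation filter responding most strongly to the patch."""
    angles = [i * np.pi / n_orientations for i in range(n_orientations)]
    responses = [abs((oriented_kernel(a) * patch).sum()) for a in angles]
    return int(np.argmax(responses))

# A 7x7 patch split along the anti-diagonal: its edge normal lies at 45 degrees,
# so the second filter (index 1, i.e. pi/4) should respond most strongly.
patch = (np.add.outer(np.arange(7), np.arange(7)) > 6).astype(float)
print(best_orientation(patch))   # -> 1
```

In the real system the receptive fields are learned and plastic rather than fixed like this, but the winner-take-the-patch idea is the same: each location ends up with a filter that fires only for its preferred feature.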

This gives an idea of how my whole AGI system functions; it is very closely modelled on the mammalian cortex and is basically able to adapt and learn anything its sensors encounter.  It automatically extracts and organises relevant information from the data streams.

It takes a human foetus nine months to generate its V1 orientation/ gradient maps; this is my system producing an equivalent map in a matter of seconds from visual experience… it learns extremely quickly.

https://www.youtube.com/watch?v=4BClPm7bqZs

And if you were to zoom into the neuron sheets/ maps you would see the individual neurons and synapses.

https://www.youtube.com/watch?v=YVO0f76CI2s

 :)
Title: Re: The last invention.
Post by: keghn on November 15, 2017, 02:54:38 pm
Unsupervised truth.
I have had a thought of using GANs to do transmission and self-learning at the same time.
There is a problem in neuroscience of what the meaning of "same" or "equal" is at the level of a neuron.
A GAN is made up of two NNs: the detector and the regenerator NN, which regenerates what is being detected.
If you had alternating rows of detector NNs (DeNN) and regenerator NNs (ReNN), the information could be passed along in daisy-chain fashion, like so:
DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN.............................

These NNs only detect and regenerate the colour of one pixel, so it will not be too slow.

The first DeNN detects the colour from the real world. Then the next ReNN generates it, and the next DeNN learns to detect it. This keeps going on and on, so the brain has many true references for this colour. Then, when doing edge detection or blob detection, the colours from two pixels can be compared to see if they are the same or different.
Title: Re: The last invention.
Post by: ranch vermin on November 15, 2017, 03:24:52 pm
woah now i understand,  thanks for that.

looks like a cool alternative to normal edge detection cause ur getting the angle as well,  but i guess the filter could too,  but its cool that youve adapted machine memory to do it.    they are general purpose indeed.

To keghn,   yes - if you had one neural network generating images, and the other video trained network telling it yes or no,  it will start to generate the diagonal lines - but i think that gets more interesting if the concepts are more general and abstract, then the generation might be less restricted and generate some more interesting images.