I’ve been busy lately, so I’ve not had much time to work on my AGI project. I have managed to make some progress integrating the sensory cortices, though.
As you know from my previous posts in this thread, I have designed a single neuromorphic neural net that can learn unsupervised from its experiences. The basic idea is to create an intelligent ‘alien’ connectome and then teach it to be human. My connectome design can recognise objects or audio phonemes equally well; it’s self-organising and self-categorising.
When fed an ocular sensory stream, for example, the connectome map becomes the equivalent of the visual cortex (V1–V4). Camera data is fed in, the system learns to recognise objects from their optical 3D properties, and it spits out a result as a unique set of firing neurons. A Zebra (lol), for example, will always produce approximately the same output, no matter the angle, scale or lighting conditions, once it has been learned. Audio, visual, tactile, joint torque, etc. are all learned, categorised and encoded into a small set of output neurons; it’s these output neurons from the individual maps that this example uses.
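To make the “same output regardless of view” idea concrete, here’s a minimal sketch (not the author’s actual model) that treats a map’s output as a set of firing output-neuron indices and measures how similar two codes are by their overlap. The neuron indices and the `overlap` helper are invented for illustration.

```python
# Hypothetical sketch: a learned object's code as a set of firing
# output-neuron indices. Two views of the same object should yield
# nearly the same sparse code; Jaccard overlap measures how close.

def overlap(code_a, code_b):
    """Jaccard similarity between two sets of firing neuron indices."""
    a, b = set(code_a), set(code_b)
    return len(a & b) / len(a | b)

# Toy codes for a zebra seen from two angles (indices are made up).
zebra_front = {3, 17, 42, 101, 250}
zebra_side  = {3, 17, 42, 101, 199}   # one output neuron differs

print(overlap(zebra_front, zebra_side))   # high overlap -> same object
```

A code that stays this stable across angle, scale and lighting is what lets the downstream maps treat “zebra” as one symbol rather than many.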
The next step was to integrate the various sensory cortex maps into one coherent system.
This is the basic sensory connectome layout (excuse the crudity; it was drawn a while ago).
I recorded the outputs of the input sensory maps whilst they recognised various sensory streams relevant to their functions: audio, visual, etc. I then chained them all together and fed them into a small/young connectome model (9.6K neurons, 38K synapses, 1 core). This connectome is based on a sheet of neurons rolled into a tube; the output of the five layers is passed across the connectome (myelinated white matter) to the inputs of the frontal cortex (left side).
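The chaining step can be sketched as simple concatenation of the recorded per-map firing vectors into one composite input. This is an assumption about the wiring, not the author’s code; the per-map width of 50 matches the 5 × 50 input pattern mentioned below, and the map names are illustrative.

```python
# Hypothetical sketch: chain five sensory maps' recorded output vectors
# (audio, visual, tactile, torque, ...) into one composite input pattern,
# assuming each map emits a fixed-width binary firing vector of 50.

def compose(map_outputs, width=50):
    """Concatenate per-map firing vectors into one 5 x 50 composite input."""
    assert all(len(v) == width for v in map_outputs)
    composite = []
    for vec in map_outputs:
        composite.extend(vec)
    return composite

# Toy recorded outputs from five maps, mostly silent.
maps = [[0] * 50 for _ in range(5)]
maps[0][7] = 1    # e.g. an audio output neuron firing
maps[1][23] = 1   # e.g. a visual output neuron firing

pattern = compose(maps)
print(len(pattern))   # 250 inputs total (5 x 50)
```

Each slice of the composite keeps its own map’s sparse code, so the downstream connectome sees all five streams at once as a single input pattern.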
As usual, the injected pattern is shown on the right; the pattern number and the system’s confidence in recognising the pattern are shown lower left. The injected pattern is a composite of outputs from five sensory maps: audio, visual, etc. (40 patterns in total).
On the right, just below the main input pattern (5 × 50 inputs), you can see the sparse output of the frontal cortex; this represents the learned output of the combined sensory map inputs. This gets injected (cool word) back into the connectome, which will eventually morph the overall ‘thought’ pattern into a composite of input and output, so any part of the overall pattern from any sensory map will induce the same overall ‘thought’ pattern throughout the whole connectome. This will enable the system to ‘dream’ or ‘imagine’ the mixed combinations of sensory streams, just as it can with a single stream.
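The feedback behaviour described above, where any part of a pattern induces the whole pattern, is classic auto-associative pattern completion. This is not the author’s connectome mechanism; a minimal Hopfield-style memory is a well-known stand-in that shows the same effect on a 250-unit composite pattern, with only one map’s 50-unit slice presented.

```python
# Pattern-completion sketch (Hopfield-style, NOT the author's connectome):
# store one 250-unit composite 'thought' pattern in Hebbian weights, then
# present only the first map's 50-unit slice and recover the whole pattern.

import numpy as np

rng = np.random.default_rng(0)
full = rng.choice([-1, 1], size=250)      # one stored composite pattern

W = np.outer(full, full).astype(float)    # Hebbian outer-product weights
np.fill_diagonal(W, 0)                    # no self-connections

partial = np.zeros(250)
partial[:50] = full[:50]                  # only one sensory map's slice known

recalled = np.sign(W @ partial)           # one synchronous update step
print(np.array_equal(recalled, full))     # True: whole pattern recovered
```

With a single stored pattern, each unit’s input is proportional to its sign in the stored pattern, so one update restores all 250 units from 50, which is the “any part induces the whole ‘thought’” behaviour in miniature.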
0:19 shows the 3D structure and the cortical columns formed on the walls of the tube; the periodic clearing of the model during the video leaves only the neurons, synapses and dendrites involved in recognising that particular pattern.
Anyway… the purpose of the test was to show that the system had confidence (lower left) in the incoming mixed sensory streams and could recognise each mixed pattern combination.
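The post doesn’t say how the confidence figure is computed, so here is one plausible reading, purely as an assumption: confidence as the fraction of a learned pattern’s output neurons that are currently firing. The neuron indices and the `confidence` helper are hypothetical.

```python
# Hypothetical confidence readout (an assumption, not the author's method):
# the share of a learned pattern's output neurons found firing right now.

def confidence(fired, learned):
    """Fraction of the learned pattern's neurons currently firing (0.0-1.0)."""
    return len(set(fired) & set(learned)) / len(set(learned))

learned_pattern_7 = {4, 19, 33, 88}       # stored code for pattern 7
fired_now = {4, 19, 33, 90, 121}          # current frontal-cortex output

print(f"{confidence(fired_now, learned_pattern_7):.0%}")   # 75%
```

Any monotone match score over the firing sets would behave similarly for the purposes of the test described here.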
Each pattern was recognised.