I believe I'm more in the camp that true learning, something from nothing rather than a 'directed' approach, is how true AI will emerge.
Good to see someone who agrees lol.
If we can build a machine that can learn anything and experience reality as we do… it should be able to learn to be ‘Human’.
When I started, the only human cortex areas that had been studied in real depth, with the research reproduced by peers, were the visual areas. So I designed a connectome to reproduce the V1 map patterns found in primates. I figured that because most of the cortex has a similar repetitive structure, if I could get this working the rest would fall into place. When I tested my visual network and it also recognized audio/tactile sensory streams, it spurred me on; I've been at this for a long, long time lol.
Monkey V1 cortex map with a section of my map (top left) overlaid to show the similarity.
Vid showing real-time creation of the map from visual data.
I'm guessing as much time was spent on visualization as was spent on the wiring/building of the network!
Yeah! The visualization is important; it's how I work best. The vids are from the CAD software I designed to build the connectome in 3D virtual space. I have written analytical routines to track the learning/logic of the system, but I still prefer to see what's happening in real time. I can freeze, rewind, etc., and actually move/view deep within the connectome and see/edit what's happening.
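(Not the actual CAD code, just a minimal sketch of how a freeze/rewind debugger like that could be structured: keep a rolling history of network-state snapshots so playback can be paused, stepped back, and resumed. All names here are hypothetical.)

```python
from collections import deque
import copy

class ReplayBuffer:
    """Rolling history of simulation snapshots for freeze/rewind debugging."""

    def __init__(self, max_snapshots=1000):
        self.history = deque(maxlen=max_snapshots)  # oldest snapshots drop off
        self.cursor = None                          # None = live, int = frozen at index

    def record(self, network_state):
        # Deep-copy so later simulation steps can't mutate stored frames.
        self.history.append(copy.deepcopy(network_state))

    def freeze(self):
        if self.history:
            self.cursor = len(self.history) - 1     # pause at the latest frame

    def rewind(self, steps=1):
        if self.cursor is None:
            self.freeze()
        if self.cursor is None:
            return None                             # nothing recorded yet
        self.cursor = max(0, self.cursor - steps)
        return self.history[self.cursor]            # frame to render / inspect / edit

    def resume(self):
        frame = self.history[self.cursor] if self.cursor is not None else None
        self.cursor = None                          # back to live simulation
        return frame
```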
This example shows how a trained area of cortex responds to the test patterns I'm feeding in. You can see the relevant areas around the output pyramid cells actually firing correctly. Visualization makes the job much easier.
Is this all yours or what is the project?
It’s my own personal spare time project at the moment.
How do you handle constant streaming data and mixed signals, the 'frame of thought'?
I use loop buffers on the audio/visual/tactile/joint-position/etc. input streams, so the system has time to catch up if it lags behind on the processing. That also means I can freeze the simulation at any point and debug synapses, etc. I designed the program to run on multiple cores and CPUs, so I can add as many processors as I need as the complexity grows. I've also written auto-balancing software that shares the workload quotas across cores based on the complexity/time cycle of the cortex areas. And I've just increased the overall speed by realizing that a lot of the problems I've encountered can be solved by the system going into a sleep cycle. It needs frequent sleep… and it dreams.
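(For readers unfamiliar with the loop-buffer idea: here's a rough, hypothetical sketch of a fixed-size loop buffer on one sensory stream, not the author's implementation. The capture side writes at its own rate, and the simulation reads whenever it catches up, so a slow processing cycle never stalls the sensors.)

```python
import threading

class SensoryLoopBuffer:
    """Fixed-size loop buffer for one input stream (audio, vision, touch, ...)."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.frames = [None] * capacity
        self.write_idx = 0            # where the sensor writes next
        self.read_idx = 0             # where the simulation reads next
        self.lock = threading.Lock()

    def push(self, frame):
        """Called by the capture thread at sensor rate."""
        with self.lock:
            self.frames[self.write_idx % self.capacity] = frame
            self.write_idx += 1
            # If the sim lags by more than one full loop, drop the oldest frames.
            if self.write_idx - self.read_idx > self.capacity:
                self.read_idx = self.write_idx - self.capacity

    def pop(self):
        """Called by the simulation whenever it is ready for the next frame."""
        with self.lock:
            if self.read_idx == self.write_idx:
                return None           # sim is fully caught up
            frame = self.frames[self.read_idx % self.capacity]
            self.read_idx += 1
            return frame
```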
Do you feel that is enough to identify situations and multiple events happening at the same time, with comprehension? It seems like there would be a challenge in telling similar situations apart.
The system grows and adapts to the complexity of its experiences. Because each experience is broken down into its finite facets, there is always a deeper resolution/difference the system can exploit to tell the frames/chains apart. The seed doesn't even know the difference between light and dark at first; it takes time to build the hierarchical networks/knowledge/feedback required to recognize complex concepts, etc. I use TV, audiobooks, and human interaction to train my AGI.
Or you might get into a situation where you have never-ending growth of the network trying to maintain separation.
The opposite problem to separation applies: the more information the system soaks up, the more finely detailed the neural maps become. Once the saturation point is reached, the whole outer cortex (not the inner) has to grow in surface area so that neurogenesis can insert more neurons at areas of high activity and increase the resolution of those areas.
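(A toy illustration of that saturation-driven growth rule, under the assumption that cortical areas can be summarized by an activity level and a set of high-activity spots; names and thresholds are made up, not taken from the project.)

```python
import random

def neurogenesis_step(areas, saturation_threshold=0.9, new_neurons_per_step=10):
    """Grow the cortex sheet where activity is saturated.

    `areas` is a dict mapping an area name to a record holding an `activity`
    level (0..1), a list of `hot_spots` (2D coordinates of high-activity
    columns), and a list of `neurons` (2D positions on the sheet).
    """
    for area in areas.values():
        if area["activity"] < saturation_threshold:
            continue                      # this area still has spare resolution
        for _ in range(new_neurons_per_step):
            x, y = random.choice(area["hot_spots"])
            # Drop the new neuron next to a hot spot; dendrites would be
            # grown toward its neighbours in a later wiring pass.
            area["neurons"].append((x + random.uniform(-1, 1),
                                    y + random.uniform(-1, 1)))
```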
That growth is also part of the mechanism that produces prediction, etc. Because the system is self-organizing, similar facets of 'thought' patterns tend to cluster together on the cortex surface. So when a new neuron arrives at its location it waits, sends out dendrites, and connects to neighboring networks. Sometimes those networks will always fire in a specific sequence (this always follows that), and the neuron will short-cut the sequence. This allows the system to glean a prediction before something happens, while still processing all the relevant data that produced the patterns in the first place.
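(Here is one way to read that shortcut idea in code, assuming the neighbouring patterns can be treated as discrete labels; it's a simplified sketch, not the author's mechanism.)

```python
from collections import defaultdict

class ShortcutNeuron:
    """Learns 'this always follows that' from neighbouring activity.

    It counts how often pattern B fires right after pattern A; once that
    conditional frequency is high enough, seeing A alone is enough to emit
    a prediction of B before B's own inputs arrive.
    """

    def __init__(self, confidence=0.95, min_samples=20):
        self.follow_counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)
        self.confidence = confidence
        self.min_samples = min_samples
        self.prev = None

    def observe(self, active_pattern):
        prediction = None
        # Update the transition statistics from the previous step.
        if self.prev is not None:
            self.follow_counts[self.prev][active_pattern] += 1
            self.totals[self.prev] += 1
        # Predict what usually follows the pattern we're seeing right now.
        if self.totals[active_pattern] >= self.min_samples:
            best, count = max(self.follow_counts[active_pattern].items(),
                              key=lambda kv: kv[1])
            if count / self.totals[active_pattern] >= self.confidence:
                prediction = best        # shortcut fires ahead of time
        self.prev = active_pattern
        return prediction
```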
All available cortex real estate is used, so if the system is blind it adapts and the visual areas are used to enhance audio/sensory resolutions. This was a bonus I gained from my connectome rules/design.
Do you have any info on where we could read more about your project?
I have been working on a site that explains my progress, but it's not finished yet. There are only a few vids on my YouTube page that show my early work on audio-visual recognition.
Thanks for your interest; it's helped me clarify my own thinking.