Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: Korrelan on December 01, 2019, 11:24:30 AM

Title: The New Species (Project Progress)
Post by: Korrelan on December 01, 2019, 11:24:30 AM
Hi, I’m Korrelan and welcome to my AI-Dreams project page.

Project Goal
I personally believe/ know that we are each a closed box; we run a personal internal simulation of reality which we can only experience through our own senses.  We are not computers, but we are bio-chemical, electrical machines and as such can be simulated using conventional computer hardware.

My goal/ project has always been about understanding and recreating a neuromorphic simulation of the mammalian brain, and to create an ‘alien’ intelligence based on the same schema.  To build a machine that can learn anything… and then teach it to be human.

Global thought pattern (GTP) theory.
The GTP is the activation pattern that is produced by electro-chemical activity within the brain/ connectome.  When a neuron fires, action potentials travel down axons, electro-chemical gates regulate synaptic junctions, and dopamine and other compounds are released that regulate the whole system.  Long-term memories are stored in the physical structure of the connectome; short-term/ working memory is represented by the current GTP pattern of activation within the connectome.

It’s a symbiotic relationship: within the system the GTP defines the connectome, and the connectome guides the GTP.  The GTP is a constantly changing, morphing pattern balanced on the edge of chaos that represents the AGI’s understanding of that moment in time, with all the memories, experiences and knowledge it comprises.
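As a very loose illustration of the kind of electro-chemical event described above (nothing like the real simulation code; the function name and parameters are mine), a leaky integrate-and-fire neuron can be sketched in a few lines:

```python
def simulate_lif(input_current, steps=200, tau=10.0, threshold=1.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest each step and emits a spike when it crosses threshold."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v += (-v / tau) + input_current  # leak plus constant drive
        if v >= threshold:
            spikes.append(t)             # action potential fired
            v = 0.0                      # reset after firing
    return spikes

spikes = simulate_lif(0.15)  # steady drive produces a regular spike train
```

The point of the toy is only that sustained input plus a threshold/ reset cycle yields a rhythmic spike train; the transmitter modulation described above would sit on top of this.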

Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 01, 2019, 11:25:48 AM
I think it’s productive to periodically clear out/ re-focus and set new goals.  To this end I’ve removed ‘The last invention’ thread; it has served its purpose.  I enjoyed the process and thank you to everyone who participated/ contributed, but… it’s time to move on, hence… The New Species


First a brief recap of what I feel I’ve achieved so far… My theories/ insights/ findings are supported by research notes (Meta database), books, videos, etc. and, most importantly, are provable from the actual working simulation.  Some videos can be found on my YouTube page, and this brief/ initial post will probably change/ expand over the next few days, months, years… millennia.

I’ve built a PC cluster that is capable of running the large-scale parallel simulations I require for my research.  I’ve custom-written the MPI for the cluster (or any number of processors), the sensory input modules and the simulation software suite, and built the required hardware/ stereo audio/ vision/ accelerometers/ bot head, etc.  The latest iteration of the anthropomorphic arm is designed and 80% built; this will allow/ supply the dexterity/ torque/ finger-tip pressure feedback required for the neural model.

GTP Theory
I have devised a full theory of brain functionality and rigorously tested the base concepts required to build/ complete said theory/ project.  I’ve tested and proved each brain structure, deducing both morphology and functionality.

Model Morphology
I have a single viable spatial-temporal 3D model that exhibits all the phenomena I deem required at this point in time.  I have perfected algorithms to simulate sleep cycles, neuro/ synaptogenesis, maturation, etc.  Once the model is ‘initiated’ it needs no further user input; it grows/ scales/ matures according to a set of specific algorithms, becoming more ‘salient/ intelligent’ through experience/ over time.

All the major memory types are represented: long, short, explicit, implicit, episodic, plus others.  The model can automatically categorise and store an unlimited amount (hardware dependent) of high-dimensional (holographic) information/ knowledge and recall it instantly.  I’ve tested it reliably up to 120 million episodic memory engrams; all were recalled within 5 ms, no matter how complex.
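Purely to illustrate the idea of recall time staying flat as the engram count grows (the real model is neuromorphic, not a hash table; the code and names below are mine), a toy content-addressable store:

```python
def encode(pattern):
    """Reduce a high-dimensional pattern to a compact sparse signature."""
    return frozenset(i for i, x in enumerate(pattern) if x > 0.5)

class EpisodicStore:
    """Toy content-addressable memory: recall time is independent of how
    many engrams are stored (average-case O(1) dictionary lookup)."""
    def __init__(self):
        self.engrams = {}

    def store(self, pattern, episode):
        self.engrams[encode(pattern)] = episode

    def recall(self, pattern):
        return self.engrams.get(encode(pattern))

store = EpisodicStore()
store.store([0.9, 0.1, 0.8, 0.0], "saw a red ball")
```

The key property being illustrated is that the memory is addressed by its content (the sparse signature), not by a search over everything stored.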

Knowledge Generality
The model stores its knowledge in an efficient, compact and, most importantly, general format.  All types of experience, information and knowledge can be stored and retrieved using the same single format/ schema, and all the sensory/ internal domains share and recombine the same memory facets (generality).

Attention/ Saliency/ Prediction
Attention arises within the system as predicted; sections of neo-cortex adapt and learn the sparse results of the sensory cortices.  Prediction is implemented and affects every aspect of the model, as do many other required systems/ phenomena including thought inertia, self-repair, etc.

Audio, vision, tactile, etc. are tested and mapped to the correct locations within the connectome model.  The model can recognise faces, objects, spoken words and a large variety of other sensory streams including emotional cues and tactile stimulation.  The cortical maps generated (i.e. visual orientation) are very similar/ compatible to the maps discovered by recognised research.

Accelerated Learning Techniques
I have devised and tested a set of accelerated learning techniques designed to quickly teach/ test my models across all the sensory modal domains.

Human Neural Disorders
I’ve used many cross-referenced sources of information whilst building the model; one of the key sources of insight has been studying/ cross-referencing academia’s findings/ insights into known human mental phenomena/ disorders.  I’ve used the information to derive/ simulate the same outcomes/ prognoses within the model, working on the assumption that if the model can mimic a similar set of symptoms then I must be on the correct track.  Topics covered/ simulated include dementia, schizophrenia, multiple personality, meditation, hypnosis, near-death experiences, etc.

There is an innate intelligence within the schema; this is how it’s able to self-organise/ categorise and recognise sensory stimuli.  There is no single learning methodology; the system learns to learn from scratch, adapting to its learning environment.  It can basically learn to learn anything within the scope of its sensory field.  I believe it’s this base/ core intelligence, magnified/ compounded, that we humans exhibit.
The next chapter…

Up to this point I’ve been relying on simple ML-derived tools to track the complex patterns that arise within the connectome, but I now have ‘Igor’ (get it?) lol, a decapitated version of the neocortex model.  It is able to track the millions of patterns that arise with ease, and will enable me to move on to the next phase.  This helpful development was always part of my plan (honest!)

Thought/ Self awareness/ Consciousness (TSC)
So far whilst building/ designing the model I’ve been suppressing the long range/ internal tracts/ GTP that connect the various cortex areas with the hippocampus and other deep brain structures, and with good reason.  The sensory input cortices have a specific job to do, and it was extremely important that I isolate their functionality so I could design/ test them.  I have seen/ recorded evidence from my simulations of how TSC will arise and function within the system, indeed it was these phenomena that were interfering with testing of the neo-cortex.

Once the system reaches a certain level of maturation/ complexity it stops learning/ responding purely to sensory input and a connectome-wide event happens: a spark.  It’s a cascade/ feedback loop which flushes the connectome with internally generated patterns.  From this point onwards the system integrates and learns external sensory information along with its own internal GTP patterns/ imagination… this is where the real fun starts lol.
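The ‘spark’ can be caricatured as a recurrent loop whose gain crosses unity (a gross simplification of the model; the numbers below are mine):

```python
def run(gain, steps=50, external=1.0):
    """Caricature of the 'spark': activity = external drive + gain * previous
    activity.  Below a gain of 1 the activity dies away once the senses go
    quiet; at/ above it the internal loop self-sustains without input."""
    a = 0.0
    for t in range(steps):
        drive = external if t < 10 else 0.0  # sensory input removed at t = 10
        a = drive + gain * a
    return a
```

Below the critical gain the system only ever echoes its senses; above it, internally generated activity keeps circulating, which is the flavour of the cascade described above.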

Nah! Well…probably… but not as we know it… Jim.

Humanity’s Epitaph
Ok… let’s activate the speech module, rev-up the GTP and start poking this thing with a sharp stick… just to see what happens.

Just to get us back up to speed, the last few topics/ videos were…
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 06, 2019, 11:28:19 AM
I've started to experiment/ integrate the internal long range afferent tracts inside the model, connecting the various deep brain structures rather than just the connectome within the actual neocortex. This is so the GTP can start to flow/ cycle through the model.

Now that the tracts are implemented, my theory postulates that mini-maps should arise within the neocortex.  A mini-map is a region of neocortex that learns to integrate the sparse outputs of other cortical regions relevant to it, or to the task currently being processed.

So a small section within each mini-map is dedicated/ corresponds to a larger region of cortex.  Unlike the sensory association regions, which are built from the lateral afferent axons within the neocortex structure/ layers, mini-maps are built/ connected by the long-range tracts between diverse/ distant cortex regions.
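A crude sketch of the sparse-integration idea (the region wiring, vector sizes and the k value are illustrative assumptions, not the model’s):

```python
def sparse_output(activity, k=2):
    """Keep only the k most active units (sparse coding), zero the rest."""
    top = sorted(range(len(activity)), key=lambda i: activity[i], reverse=True)[:k]
    return [a if i in top else 0.0 for i, a in enumerate(activity)]

def mini_map(region_outputs):
    """A mini-map integrates the sparse winners from the distant cortical
    regions wired to it via long-range tracts; each region feeds its own
    dedicated section of the map."""
    combined = []
    for out in region_outputs:
        combined.extend(sparse_output(out))
    return combined
```

Each source region keeps its own slot in the combined map, mirroring the ‘small section dedicated to a larger region of cortex’ described above.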

The 3D render shows an isolated region/ Mini-map and the afferent axons feeding it. The model view in the background shows the cortical regions being processed by this single Mini-map.

Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 08, 2019, 02:11:31 PM
The frontal cortex is a special case/ area of the neocortex, mainly because it has no direct association with the sensory areas.  It receives its stimulus only through the long-range afferent axons that originate from the various sensory/ attention mini-maps and deep brain structures.  It purely processes the internal activity generated by the rest of the connectome and then influences/ conducts the whole system.

Starting @0:40 in the video, 2% (for clarity) of the tracts involved in a short ‘thought frame’ are shown as they are activated by the GTP (vastly slowed down); any white tracts are associated with the frontal cortex.  This shows that the area is ‘forming/ growing’ according to plan and is consequently processing the various streams within the GTP.  The algorithms that generate the networks are working correctly and the frontal cortex is beginning to recognise and integrate/ control/ influence the rest of the connectome.

So… regarding the sensory neocortex: the sensory areas are recognising the sensory input streams, the association areas are recognising the mixed combinations of sensory data, the attention areas are recognising the commonalities in the streams, the mini-maps are recognising the diverse associations, and now the frontal cortex is starting to build/ form/ recognise the facets within the GTP… it’s starting to track and recognise its own internally generated patterns.

Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 15, 2019, 11:31:24 AM

Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 16, 2019, 02:24:10 PM

I’ve figured it all out; I know exactly what’s happening now and why.

By design the base theta wave is required to drive the GTP; it provides an essential element of timing and sequential organisation.  It has a narrow bandwidth but a high area of effect, so there is plenty of bandwidth for sub-patterns to travel through the connectome.

The main network loops involved in sustaining the theta rhythm are the brain stem, thalamus and the neocortex.  The main timing/ driving force is the brain stem; this provides the synchronous push (like pushing a pendulum) to keep the rhythm cycling.
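The pendulum analogy can be sketched as a damped oscillator kept cycling by a periodic push (the frequency and constants below are illustrative guesses, not the model’s):

```python
import math

def theta_loop(steps=1000, dt=0.01, damping=0.2, drive_freq=6.0):
    """Damped oscillator (the thalamo-cortical loop) kept cycling by a
    periodic 'push' from the brain stem, like pushing a pendulum."""
    x, v = 0.0, 0.0
    omega = 2 * math.pi * drive_freq  # ~6 Hz, theta band
    trace = []
    for n in range(steps):
        push = math.sin(omega * n * dt)           # brain-stem timing push
        a = -omega ** 2 * x - damping * v + push  # restoring + damping + drive
        v += a * dt
        x += v * dt
        trace.append(x)
    return trace

trace = theta_loop()  # a bounded, continuously cycling rhythm
```

Without the push the damping would bring the loop to rest; with it, the rhythm settles into a stable, bounded cycle, which is the role described for the brain stem above.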

The reason I was confused earlier is because I was not injecting the circadian rhythms into the brain stem so the rhythm should not have started. But the frequency of the visual images being injected was enough to start and stabilise it… who knew lol.

In this video you can see the interactions between the stem, thalamus (the ball) and neocortex.  The shape of the thalamus is not important, just the functionality of its neuron types.  When I click the PAT it starts injecting 400K image frames and you can see the EEG (right) cycle out of theta up into the delta/ alpha/ gamma frequencies.  @0:20 you can also see on the graph (bottom left) that the connectome is recognising the images in relation to the theta wave.  When the PAT is turned off the connectome cycles back down into a resting state of theta.

Also notice that although the theta is stable, the firing activities creating it are extremely varied (top right); this is because they are filtered through the neocortex, which has learned external knowledge from the senses.  So the GTP theta wave actually partly consists of memory fragments, as well as what the connectome ‘thinks’ about those fragments, in the same terms as it has learned from experience.

The core/ base theta wave will contain the personality of the machine, the most relevant memories/ experiences and any logic/ knowledge required.  Any single internal or sensory memory engram can affect/ sway the GTP, which will fire other associated memory facets.  This puts the GTP in a similar state to when the memories/ experiences were ingrained, but all under the control of the base personality theta wave.

Title: Re: The New Species (Project Progress)
Post by: Korrelan on January 16, 2020, 07:10:50 PM
In theory any section of cortex can learn any sensory modality; just for kicks, this is a lobe-less visual V1 recognising accelerated speech in real time.

The graph (bottom left) is an ML tool I can train to recognise complex patterns; each colour represents neuron clusters recognising a single word in the paragraph.  The jumping small white marker shows the confidence in recognising/ locking onto each word in the paragraph, out of all the possible phoneme combinations, as the cortex receives the sensory stream.  The secondary spikes on the graph are caused by the same phonemes appearing in more than one word.

Title: Re: The New Species (Project Progress)
Post by: Korrelan on January 26, 2020, 10:12:23 AM
When this machine gets a foot... it will tap it along to the music lol.

Base theta neural rhythm synchronising to the beat of the music, resulting in brain wide synchronous memory recall.  System has tuned to a 'western rhythm' through experience and exposure to this genre of music.

Custom windowed/ filtered audio module on the left, audio cortex throughput (L2-3) showing current and predictive response/ recall on the right. EEG & GTP level/ synchronicity bottom right.

Houston: K4 can we have an atmosphere analysis...
Houston: K4.... Oi... K4?
Houston: K4, please turn down the Pink Floyd and pay attention...
K4: Chill dude... stop hitting me with those negative vibes man...  I was in the groove... I'm on it...

This is a continuation of my research into savant syndrome and ultimately machine consciousness.  So far my theories on both what consciousness is and how it functions have been approximately correct, but with consciousness comes free will, which is causing me a few problems lol.

Progress is still going steady on AGI - Artificial General Insanity

Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 18, 2020, 02:12:08 PM
Just bringing my project thread up to date…

Sleep Spindles - K Complex

On the right hand graph the blue line/ trace (bottom) is an EEG equivalent, the white trace above it represents GTP complexity, the higher the white trace the more information is being processed.

The system is resting after a previous learning session, the base Thalamocortical Theta wave is running, you can see its influence on the GTP.

This shows the effect of injecting a stimulus during sleep, the equivalent of hearing a noise in REM II. The sudden spike in GTP complexity is the sensory information flowing from the sensory cortices into the GTP, the EEG spike is the connectome recognising key components of the stimulus.

With a lack of external sensory stimulus to drive the GTP's narrative, the connectome is 'dreaming', constructing its own disordered narrative based on the transmitter tags laid down in the previous learning session.  The sudden sensory stimulus is influencing the GTP's wandering narrative; it's being included/ fitted into the overall GTP.

Theta timed place cells.

This is a single-shot exposure to the location data.  The colours on the 'pattern lock' graph correspond to the colours on the 'location' map.  The height of the pattern lock represents confidence in recognition.  The green blob moving around the locations represents the model's current perceived location.
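A minimal sketch of the single-shot idea (cosine matching is my stand-in for the neural recognition; the class and names are illustrative, not the model’s):

```python
import math

def similarity(a, b):
    """Cosine similarity between two activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class PlaceCells:
    """Single-shot place learning: each location's pattern is stored on first
    exposure; 'pattern lock' confidence is the match to the closest store."""
    def __init__(self):
        self.places = {}

    def expose(self, name, pattern):
        self.places[name] = pattern  # one exposure is enough

    def locate(self, pattern):
        best = max(self.places, key=lambda n: similarity(self.places[n], pattern))
        return best, similarity(self.places[best], pattern)

cells = PlaceCells()
cells.expose("corner", [1.0, 0.0, 0.0, 1.0])
cells.expose("doorway", [0.0, 1.0, 1.0, 0.0])
name, confidence = cells.locate([0.9, 0.1, 0.0, 1.0])  # noisy revisit
```

A noisy revisit of a location still locks onto the stored pattern with high confidence, which is the behaviour the pattern-lock graph is displaying.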


If there is a rock on the ground, with another rock next to it, they obviously exist as they are in our reality; no form, function or meaning, just separate collections of matter held together by atomic forces, held down by gravity, etc.  When a human looks at the rocks they are given meaning by our mind; we perceive a quantity, a form and a function, but this only exists in our mind, the rocks haven’t changed.

Statistics, like mathematics and language, are post-intelligence; they are constructs of the human mind and don’t actually exist in actual reality.  We have created them in order to find/ add meaning to our reality; a different mind would perceive reality differently.  Our understanding of reality is not how it actually is, it’s just how we perceive it; it’s our interpretation.

All of our technology is based on the manipulation of how we perceive our reality; we build machines to leverage the natural forces/ processes around us.  An aeroplane manipulates air pressure differentials to fly; combustion and jet engines utilize chemical reactions to harness explosive reactions and translate them into kinetic energy.  Every machine and technology we devise harnesses some natural law/ process to achieve function.

Then we come to the transistor, which again harnesses a natural force of nature, the movement of electrons through/ across various materials to create a flow gate/ switch.

But… everything past the transistor, so the processor, the logic, the machine code, the high level languages are all man made/ designed.  Everything stems from that single/ simple switching process; it’s a huge hierarchical schema of human derived concepts based on our perception of reality, not actual reality.

Our minds did this, and our minds exist pre-perception in actual reality, they function using the laws/ forces/ processes of actual reality not human derived/ perceived logic. 

So… to build a human type mind/ intelligence, human derived concepts like mathematics and language are useless.  But we have reached a milestone where we can cheat, and use our technology to sufficiently simulate the forces/ processes of actual reality.

My tech does not use logic, mathematics, statistics, language, weights, back propagation, etc. The intelligence comes from the simulation of actual reality; the connectome model uses simulated natural forces/ processes to achieve perception.

Title: Re: The New Species (Project Progress)
Post by: frankinstien on May 18, 2020, 06:12:06 PM
When you talk about engrams as episodic memory I'm assuming some kind of generalized neural structure like a cortical column. How is information like oxytocin used, and how are the receptors for oxytocin simulated in your model?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 18, 2020, 10:15:34 PM
Hi Frank

Keep in mind that this is not a human connectome, it’s an ‘alien’ (for want of a better word) connectome that functions using the same principles as a human connectome.  Although I’m sticking as closely as possible to the human model I’m under no illusions, besides, there is so little empirical information available regarding the human connectome at the resolutions I require.

The goal of the project is infinitely scalable intelligence, imagination, self-awareness, consciousness, etc.

Cortical Columns

Cortical columns are not an innate part of the neocortex’s DNA derived structure; they are a consequence of learning and interlayer dendritic branching.


The 3D connectome model resides inside a 3D octree-style volumetric map; this simulates cerebral fluid.  There is no flow (yet) but there is dissipation and accumulation from synapses terminating in the fluid.  Each voxel can contain varying levels of transmitters, etc.  The transmitters affect the neuron/ synapse properties/ maturation, global maturation cycles, sleep cycles, plasticity, etc.
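A toy version of the voxel idea (a flat dict stands in for the octree; the decay rate and transmitter name are my assumptions):

```python
class TransmitterGrid:
    """Toy volumetric 'fluid' map: each voxel holds transmitter levels that
    accumulate when synapses release into them and dissipate each tick."""
    def __init__(self, decay=0.9):
        self.voxels = {}   # (x, y, z) -> {transmitter_name: level}
        self.decay = decay

    def release(self, pos, transmitter, amount):
        levels = self.voxels.setdefault(pos, {})
        levels[transmitter] = levels.get(transmitter, 0.0) + amount  # accumulation

    def tick(self):
        for levels in self.voxels.values():
            for t in levels:
                levels[t] *= self.decay                              # dissipation

    def level(self, pos, transmitter):
        return self.voxels.get(pos, {}).get(transmitter, 0.0)

grid = TransmitterGrid()
grid.release((1, 2, 3), "dopamine", 0.5)
grid.release((1, 2, 3), "dopamine", 0.5)  # two synapses share one voxel
grid.tick()                               # one dissipation step
```

Neuron/ synapse update rules would then read their local voxel's levels to modulate plasticity, maturation, etc.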

ATM I have 17 transmitter (peptide) equivalents that affect the system in various ways, as to which is the parallel to Oxytocin… I literally have no idea.  I have avoided including any emotional effects to date as preliminary experiments made the system extremely unstable, which made rigorous testing impossible.  Even my vocal traits were being misunderstood by the system... there should have been no frowns lol.

I would need a lot more information on the origination, localised quantities, etc… emotional intelligence is last on my list ATM.

Nice site BTW.

Title: Re: The New Species (Project Progress)
Post by: Freddy on May 20, 2020, 02:44:49 AM

My tech does not use logic, mathematics, statistics, language, weights, back propagation, etc. The intelligence comes from the simulation of actual reality; the connectome model uses simulated natural forces/ processes to achieve perception.

If I understand this correctly then is any intelligence shown from your model an emergent thing, rather than coaxed by rules (maths etc)?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 20, 2020, 01:47:20 PM
Hi Freddy… 02:44… you’re a night owl lol… good question…

Emergence… it’s one of those words that has a lot of incorrect connotations, but yes, the intelligence emerges from the physical (simulated) 3D structure of the connectome.

As above, our computers are based on a simple switch, the transistor.  The logic of the computer is achieved by combining those switches into ever more complex units/ combinations to produce gates, processors, etc.

So technically it’s the physical 3D structure of the processor, the layout/ connections/ wiring that creates the functionality. If any of the components were removed or wired differently it would affect/ stop the functionality. We do not consider the functionality/ output of the processor emergent because it has been designed, wired in a specific way to do a specific job.

The human connectome also uses a single unit for ‘logic’, the synaptic gate but… the wiring schema is not fixed, the wiring is shaped over time through experience and information is actually embedded in the processor. 

It’s hard to describe… it’s like ‘intelligence’ is an innate law of nature; it is already embedded in everything/ ‘reality’, but you need a specific mechanism to access/ process it. E.g.

The state of a chess game is stored in the physical 3D positions of the pieces.  The spatial layout is acting as a memory; objects (usually) stay where they are put.

The shaft ratios of a gearbox: although we use mathematics to design the gearbox, the actual 2:1 output calculation/ action is derived from physical forces; the movement and interconnection of the gears provides the running ‘intelligence/ logic’ for the system.

A piece of string of arbitrary length, if divided/ folded, can be a half, quarter, eighth, etc.  We understand this using our mathematics and measurements but… that’s not what is doing the ‘calculation’; it’s a physical property of string/ reality: something folded in half is half as long.

We get used to thinking about reality using our own human-derived schemas, but there is always an underlying property/ law of reality that we are manipulating/ interpreting to achieve our function/ goal.

I have discovered (reverse engineered) a specific physical structure/ schema/ network that is able to adapt and learn using these principles.  It uses spatio-temporal relationships, and forces like inertia, to build an internal model of reality and achieve ‘intelligence’, just like we do.

Formal Maths

I suppose it’s down to semantics. I obviously use statistics and logic to run the biological simulation, the neuromorphic connectome/ processor… but the AGI then runs on the simulated processor, which does not use any formal statistical analysis, heuristics, rules or mathematical functions.


BTW I'm not talking about the 'panpsychist' ideology, everything is not conscious lol.

Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 20, 2020, 05:05:27 PM

The ‘Pattern Lock’ box displays a colour-graded graph that represents 80 patterns, one colour for each pattern.  The scrolling bar marks the current pattern being injected into the system.  The height/ peak of any section of the graph represents specific output neurons firing/ tuned to one of the 80 patterns.  So in its simplest form the peak should follow the moving cursor box as the connectome recognises that specific pattern.  The length and complexity of the patterns can vary between videos.

Title: Re: The New Species (Project Progress)
Post by: Freddy on May 20, 2020, 06:09:03 PM
Hi Freddy… 02:44… you’re a night owl lol… good question…

Yes some nights I can't sleep, so I code or research. Gotta make the most of the time :)

Thanks for the detailed explanation, now it makes a lot more sense to me.  O0
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 22, 2020, 02:03:32 PM
If you reverse engineer the human ocular system (ignoring focal distance and depth of field, etc) this is an approximate procedural/ algorithmic representation of the (colour) retinal schema projected onto V1.

Multiple interpolated (receptive field range) resolutions are recognised at the same time through a polar-centric schema: a low-resolution periphery graded to a high-resolution fovea. So for any high-resolution information there is also a low-resolution spatial element of the surrounding overall area/ scene. The schema of the retina provides a top-down influence for free, so… if the rough outline is cat-shaped and the fovea detects feline eyes… it’s probably a cat.

The small image (bottom left of the left window) represents the whole of the circular retina, spatially arranged into a Cartesian format which is roughly analogous to the V1 spatial map. Rotational invariance simply moves the overall image detail left or right; scale invariance moves the image detail up/ down.
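The polar-to-Cartesian schema reads like a log-polar transform; here is a guess at the mapping (the bin counts and log radius scale are my assumptions, not the actual retinal module):

```python
import math

def retina_to_v1(x, y, angle_bins=36, radius_bins=10, max_radius=100.0):
    """Map a retinal point (Cartesian, fovea at the origin) to a V1-style
    grid cell: column = angle, row = log radius.  Rotation about the fovea
    shifts the column (left/ right); scaling shifts the row (up/ down)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    col = int(theta / (2 * math.pi) * angle_bins) % angle_bins
    row = int(math.log1p(r) / math.log1p(max_radius) * radius_bins)
    return min(row, radius_bins - 1), col
```

Rotating a point about the fovea only changes its column, and moving it outward only changes its row, which is the left/ right and up/ down shift invariance described above.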

Eye movements and saccades are learned/ made relative to this polar map, so for any unrecognised object/ patch in the periphery the fovea can instantly traverse to focus on that region.

For textures or repeating patterns like fur, the centre high-res fovea recognises the texture, and all other joined/ un-bordered retinal locations with the same/ similar colour/ contrast attributes are assumed to be the same material, shown by the colour fill.

If this retinal modality/ output is then translated into neural code, even foetal models can instantly recognise simple objects after one exposure.

The extra spikes for some objects on the pattern lock are caused by extreme similarities.  The system has noticed (from its point of view, not ours) similarities in composition, colour, etc.  Prior to being shown these objects the system had learned to ‘see’ from scratch; it learned to differentiate and categorise the visual sensory stream, and it’s then able to recognise new objects from memory.  The spikes are an accumulation of episodic memories firing that encode/ remember the salient qualities of each object.