Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: Korrelan on December 01, 2019, 11:24:30 am

Title: The New Species (Project Progress)
Post by: Korrelan on December 01, 2019, 11:24:30 am
Hi, I’m Korrelan and welcome to my AI-Dreams project page.

Project Goal
I personally believe/ know that we are each a closed box; we run a personal internal simulation of reality which we can only experience through our own senses.  We are not computers, but we are biochemical, electrical machines and as such can be simulated using conventional computer hardware.

My goal/ project has always been about understanding and recreating a neuromorphic simulation of the mammalian brain, and to create an ‘alien’ intelligence based on the same schema.  To build a machine that can learn anything… and then teach it to be human.

Global thought pattern (GTP) theory.
The GTP is the activation pattern that is produced by electro-chemical activity within the brain/ connectome.  When a neuron fires, action potentials travel down axons, electro-chemical gates regulate synaptic junctions, and dopamine and other compounds are released that regulate the whole system.  Long term memories are stored in the physical structure of the connectome; short term/ working memory is represented by the current GTP pattern of activation within the connectome.

It’s a symbiotic relationship: within the system the GTP defines the connectome, and the connectome guides the GTP.  The GTP is a constantly changing, morphing pattern balanced on the edge of chaos that represents the AGI’s understanding of that moment in time, with all the memories, experiences and knowledge it comprises.
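None of the model code is posted in this thread, so as a purely illustrative sketch (names and parameters are assumptions, not the actual simulation), the split between long-term memory in structure and working memory in activation can be shown with a generic leaky integrate-and-fire layer: the weights play the role of the connectome, the transient membrane state plays the role of the GTP.

```python
import numpy as np

def simulate_lif(weights, spikes_in, steps=100, tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire layer: a generic illustration of the idea
    that long-term memory lives in the weights (connectome structure)
    while the current activation pattern (the 'GTP') is transient."""
    n = weights.shape[0]
    v = np.zeros(n)                             # membrane potentials: the transient state
    out = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        v += weights @ spikes_in[t] - v / tau   # integrate input, leak
        fired = v >= threshold
        v[fired] = 0.0                          # reset after a spike
        out[t] = fired
    return out
```

Changing `weights` changes what the layer responds to permanently; clearing `v` only wipes the current moment, which is the structure/activation distinction described above.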

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 01, 2019, 11:25:48 am
I think it’s productive to periodically clear out/ re-focus and set new goals. To this end I’ve removed ‘The last invention’ thread; it has served its purpose, I enjoyed the process and thank you to everyone who participated/ contributed, but… it’s time to move on, hence… The New Species

(https://i.imgur.com/7mlv5Yr.jpg)

First a brief recap of what I feel I’ve achieved so far… My theories/ insights/ findings so far are supported by research notes (Meta database), books, videos, etc and most importantly are provable from the actual working simulation.  Some videos can be found on my YouTube (https://www.youtube.com/user/korrelan) page, and this brief/ initial post will probably change/ expand over the next few days, months, years… millennia.

Implementation
I’ve built a PC cluster that is capable of running the large scale parallel simulations I require for my research.  I’ve custom written the MPI for the cluster (or any number of processors), the sensory input modules, simulation software suite and built the required hardware/ stereo audio/ vision/ accelerometers/ bot head, etc.  The latest iteration of the anthropomorphic arm is designed and 80% built; this will allow/ supply the dexterity/ torque/ finger tip pressure feedback required for the neural model.

GTP Theory
I have devised a full theory of brain functionality and rigorously tested the base concepts required to build/ complete the said theory/ project.  I’ve tested and proved each brain structure, deducing both morphology and functionality.

Model Morphology
I have a single viable spatial-temporal 3D model that exhibits all the phenomena I deem required at this point in time.  I have perfected algorithms to simulate sleep cycles, neuro-synapto genesis, maturation, etc.  Once the model is ‘initiated’ it needs no further user input, it grows/ scales/ matures according to a set of specific algorithms, becoming more ‘salient/ intelligent’ through experience/ over time.

Memory
All the major memory types are represented: long, short, explicit, implicit, episodic, plus others, etc.  The model can automatically categorise and store an unlimited amount (hardware dependent) of high dimensional (holographic) information/ knowledge and recall it instantly.  I’ve tested it reliably up to 120 million episodic memory engrams; all were recalled within 5ms, no matter how complex.
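The recall mechanism itself isn’t described in the thread, but the flavour of ‘instant’, content-addressed recall can be shown with a toy associative memory (a hedged stand-in, not the actual engram code): every stored pattern is compared to the cue in one matrix product, so lookup time barely depends on how complex the cue is.

```python
import numpy as np

def store(patterns):
    # Stack sparse binary 'engrams' into a matrix; recall is a single
    # matrix-vector product, so lookup barely depends on cue complexity.
    return np.array(patterns, dtype=float)

def recall(memory, cue):
    overlap = memory @ cue            # similarity of the cue to every engram
    return int(np.argmax(overlap))    # index of the best-matching engram
```

Even a partial cue recalls the nearest stored engram, which is roughly the ‘holographic’ behaviour described above.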

Knowledge Generality
The model stores its knowledge in an efficient compact and most importantly, general format.  All types of experience, information, knowledge can be stored and retrieved using the same single format/ schema and all the sensory/ internal domains share and recombine the same memory facets (generality).

Attention/ Saliency/ Prediction
Attention arises within the system as predicted; sections of neo-cortex adapt and learn the sparse results of the sensory cortices.  Prediction is implemented and affects every aspect of the model, as do many other required systems/ phenomena including thought inertia, self-repair, etc.

Sensory
Audio, vision, tactile, etc are tested and mapped to the correct locations within the connectome model.  The model can recognise faces, objects, spoken words and a large variety of other sensory streams including emotional cues and tactile stimulation.  The cortical maps (i.e. visual orientation) generated are very similar to/ compatible with maps discovered by recognised research.

Accelerated Learning Techniques
I have devised and tested a set of accelerated learning techniques designed to quickly teach/ test my models across all the sensory modal domains.

Human Neural Disorders
I’ve used many cross-reference sources of information whilst building the model; one of the key sources of insights has been studying/ cross referencing academia’s findings/ insights into known human mental phenomena/ disorders. I’ve used the information to derive/ simulate the same outcomes/ prognosis within the model, working on the assumption that if the model can mimic a similar set of symptoms then I must be on the correct track. Topics covered/ simulated include dementia, schizophrenia, multiple personality, meditation, hypnosis, near death experiences, etc.

Intelligence
There is an innate intelligence within the schema; this is how it’s able to self-organise/ categorise and recognise sensory stimulus. There is no single learning methodology, the system learns to learn from scratch, it adapts to its learning environment.  It can basically learn to learn anything within the scope of its sensory field.  I believe it’s this base/ core intelligence, magnified/ compounded, that we humans exhibit.
 
The next chapter…

Up to this point I’ve been relying on simple ML derived tools to track the complex patterns that arise within the connectome, but I now have ‘Igor’ (https://en.wikipedia.org/wiki/Igor_(character)) (get it?) lol, a decapitated version of the neocortex model. It is able to track the millions of patterns that arise with ease, and will enable me to move on to the next phase. This helpful development was always part of my plan (honest!)

Thought/ Self awareness/ Consciousness (TSC)
So far whilst building/ designing the model I’ve been suppressing the long range/ internal tracts/ GTP that connect the various cortex areas with the hippocampus and other deep brain structures, and with good reason.  The sensory input cortices have a specific job to do, and it was extremely important that I isolate their functionality so I could design/ test them.  I have seen/ recorded evidence from my simulations of how TSC will arise and function within the system, indeed it was these phenomena that were interfering with testing of the neo-cortex.

Once the system reaches a certain level of maturation/ complexity it stops learning/ responding purely to sensory input and a neural wide event happens, a spark.  It’s a cascade or feedback loop, which flushes the connectome with internally generated patterns. From this point onwards the system integrates and learns external sensory information along with its own internal GTP patterns/ imagination… this is where the real fun starts lol.

Life
Nah! Well…probably… but not as we know it… Jim.

Humanity’s Epitaph
Ok… let’s activate the speech module, rev-up the GTP and start poking this thing with a sharp stick… just to see what happens.



Just to get us back up to speed the last few previous topics/ videos were…

https://youtu.be/SPr8KhqVCeo

https://youtu.be/6qC3QUIYugY

https://youtu.be/4NgStj809qE
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 06, 2019, 11:28:19 am
I've started to experiment/ integrate the internal long range afferent tracts inside the model, connecting the various deep brain structures rather than just the connectome within the actual neocortex. This is so the GTP can start to flow/ cycle through the model.

Now the tracts are implemented, my theory postulates that mini-maps should arise within the neocortex. A mini-map is a region of neocortex that learns to integrate the sparse outputs of other cortical regions relevant to it, or relevant to the task currently being processed.

So there is a small section within each mini-map dedicated/ that corresponds to a larger region of cortex.  Unlike the sensory association regions, which are built from the lateral afferent axons within the neocortex structure/ layers, mini-maps are built/ connected by the long range tracts between diverse/ distant cortex regions.

The 3D render shows an isolated region/ Mini-map and the afferent axons feeding it. The model view in the background shows the cortical regions being processed by this single Mini-map.

https://youtu.be/Q4DGbz8vd3w

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 08, 2019, 02:11:31 pm
The frontal cortex is a special case/ area of the neo-cortex, mainly because it has no direct association to the sensory areas.  It receives its stimulus only through the long range afferent axons that originate from the various sensory/ attention mini-maps and deep brain structures.  It purely processes the internal activity generated by the rest of the connectome and then influences/ conducts the whole system.

Starting @0:40 in the video, 2% (for clarity) of the tracts involved in a short ‘thought frame’ are shown as they are activated by the GTP (vastly slowed down); any white tracts are associated with the frontal cortex. This shows that the area is ‘forming/ growing’ according to plan and is consequently processing the various streams within the GTP.  The algorithms that generate the networks are working correctly and the frontal cortex is beginning to recognise and integrate/ control/ influence the rest of the connectome.

So… regarding the sensory neo-cortex: the sensory areas are recognising the sensory input streams, the association areas are recognising the mixed combinations of sensory data, the attention areas are recognising the commonalities in the streams, the mini-maps are recognising the diverse associations and now the frontal cortex is starting to build/ form/ recognise the facets within the GTP… it’s starting to track and recognise its own internally generated patterns.

https://youtu.be/FXUDVQYsYVc

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 15, 2019, 11:31:24 am
https://youtu.be/VFwlqeBDMRo

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on December 16, 2019, 02:24:10 pm
https://youtu.be/N2t2I2v5NOE

I’ve figured it all out; I know exactly what’s happening now and why.

By design the base theta wave is required to drive the GTP; it provides an essential element of timing and sequential organisation.  It has a narrow bandwidth but a high area of effect, so there is plenty of room for sub-patterns to travel through the connectome.

The main network loops involved in sustaining the theta rhythm are the brain stem, thalamus and the neocortex.  The main timing/ driving force is the brain stem, this provides the synchronous push (like pushing a pendulum) to keep the rhythm cycling.
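The ‘pushing a pendulum’ analogy can be sketched as a damped oscillator kept cycling by a periodic push (an illustrative toy, assuming nothing about the actual stem/ thalamus/ neocortex code): without the drive the rhythm dies away, with it the rhythm is sustained, which is the role described for the brain stem here.

```python
import math

def driven_oscillator(steps=2000, dt=0.01, damping=0.3, omega0=2.0,
                      drive_freq=2.0, drive_amp=1.0):
    """Damped oscillator kept cycling by a periodic push, like the brain
    stem 'pushing the pendulum' of the thalamocortical theta loop."""
    x, v = 1.0, 0.0
    trace = []
    for i in range(steps):
        t = i * dt
        # restoring force, damping loss, and the periodic 'push'
        a = -omega0**2 * x - damping * v + drive_amp * math.sin(drive_freq * t)
        v += a * dt
        x += v * dt
        trace.append(x)
    return trace
```

Setting `drive_amp=0.0` shows the rhythm decaying; at the resonant drive frequency the oscillation locks to the push and keeps going indefinitely.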

The reason I was confused earlier is because I was not injecting the circadian rhythms into the brain stem so the rhythm should not have started. But the frequency of the visual images being injected was enough to start and stabilise it… who knew lol.

In this video you can see the interactions between the stem, thalamus (the ball) and neocortex. The shape of the thalamus is not important, just the functionality of its neuron types. When I click the PAT it starts injecting 400K image frames and you can see the EEG (right) cycle out of theta up into the delta/ alpha/ gamma frequencies. @0:20 you can also see on the graph (bottom left) that the connectome is recognising the images in relation to the theta wave. When the PAT is turned off the connectome cycles back down into a resting state of theta.  Also notice that although the theta is stable, the firing activities creating it are extremely varied (top right); this is because they are filtered through the neocortex, which has learned external knowledge from the senses. So the GTP theta wave actually partly consists of memory fragments as well as what the connectome ‘thinks’ about those fragments, in the same terms as it’s learned from experience.

The core/ base theta wave will contain the personality of the machine, the most relevant memories/ experiences and any logic/ knowledge required; any single internal or sensory memory engram can affect/ sway the GTP which will fire other associated memory facets.  This puts the GTP in a similar state to when the memories/ experiences were ingrained, but all under the control of the base personality theta wave.

 :)

https://www.youtube.com/watch?v=QuoKNZjr8_U
Title: Re: The New Species (Project Progress)
Post by: Korrelan on January 16, 2020, 07:10:50 pm
In theory any section of cortex can learn any sensory modality, just for kicks this is a lobe-less visual V1 recognising accelerated spoken speech in real time.

The graph (bottom left) is a ML tool I can train to recognise complex patterns; each colour represents neuron clusters recognising a single word in the paragraph, and the jumping small white marker shows the confidence in recognising/ locking onto each word in the paragraph out of all the possible phoneme combinations as the cortex receives the sensory stream. The secondary spikes on the graph are caused by the same phonemes appearing in more than one word.

https://youtu.be/YhASqwl135Y

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on January 26, 2020, 10:12:23 am
When this machine gets a foot... it will tap it along to the music lol.

Base theta neural rhythm synchronising to the beat of the music, resulting in brain wide synchronous memory recall.  System has tuned to a 'western rhythm' through experience and exposure to this genre of music.

Custom windowed/ filtered audio module on the left, audio cortex throughput (L2-3) showing current and predictive response/ recall on the right. EEG & GTP level/ synchronicity bottom right.

Houston: K4 can we have an atmosphere analysis...
Houston: K4.... Oi... K4?
Houston: K4, please turn down the Pink Floyd and pay attention...
K4: Chill dude... stop hitting me with those negative vibes man...  I was in the groove... I'm on it...

https://youtu.be/FCoHBarmDn8

This is a continuation of my research into savant syndrome and ultimately machine consciousness.  So far my theories on what consciousness both is and how it functions have been approximately correct, but with consciousness comes free will which is causing me a few problems lol.

Progress is still going steady on AGI - Artificial General Insanity

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 18, 2020, 02:12:08 pm
Just bringing my project thread up to date…

Sleep Spindles - K Complex

On the right hand graph the blue line/ trace (bottom) is an EEG equivalent, the white trace above it represents GTP complexity, the higher the white trace the more information is being processed.

The system is resting after a previous learning session, the base Thalamocortical Theta wave is running, you can see its influence on the GTP.

This shows the effect of injecting a stimulus during sleep, the equivalent of hearing a noise in REM II. The sudden spike in GTP complexity is the sensory information flowing from the sensory cortices into the GTP, the EEG spike is the connectome recognising key components of the stimulus.

With a lack of external sensory stimulus to drive the GTP's narrative the connectome is 'dreaming', constructing its own disordered narrative based on the Transmitter tags laid down in the previous learning session. The sudden sensory stimulus is influencing the GTP's wandering narrative; it's being included/ fitted into the overall GTP.

https://youtu.be/0K73yEfAihk

Theta timed place cells.

This is a single-shot exposure to the location data. The colours on the 'pattern lock' graph correspond to the colours on the 'location' map. The height of the pattern lock represents confidence in recognition. The green blob moving around the locations represents the model's current perceived location.

https://youtu.be/D483QNf3l-g

 :)

If there is a rock on the ground, with another rock next to it they obviously exist as they are in our reality, no form, function, meaning, just separate collections of matter held together by atomic forces, held down by gravity, etc.  When a human looks at the rocks they are given meaning by our mind, we perceive a quantity, a form and a function, but this only exists in our mind, the rocks haven’t changed.

Statistics, like mathematics and language, are post-intelligence; they are constructs of the human mind, and they don’t actually exist in actual reality. We have created them in order to find/ add meaning to our reality; a different mind would perceive reality differently.  Our understanding of reality is not how it actually is, it’s just how we perceive it, it’s our interpretation.

All of our technology is based on the manipulation of how we perceive our reality; we build machines to leverage the natural forces/ processes around us.  An aeroplane manipulates air pressure differentials to fly; combustion and jet engines utilize chemical reactions to harness explosive reactions and translate them into kinetic energy.  Every machine and technology we devise harnesses some natural law/ process to achieve function.

Then we come to the transistor, which again harnesses a natural force of nature, the movement of electrons through/ across various materials to create a flow gate/ switch.

But… everything past the transistor, so the processor, the logic, the machine code, the high level languages are all man made/ designed.  Everything stems from that single/ simple switching process; it’s a huge hierarchical schema of human derived concepts based on our perception of reality, not actual reality.

Our minds did this, and our minds exist pre-perception in actual reality, they function using the laws/ forces/ processes of actual reality not human derived/ perceived logic. 

So… to build a human type mind/ intelligence, human derived concepts like mathematics and language are useless.  But we have reached a milestone where we can cheat, and use our technology to sufficiently simulate the forces/ processes of actual reality.

My tech does not use logic, mathematics, statistics, language, weights, back propagation, etc. The intelligence comes from the simulation of actual reality; the connectome model uses simulated natural forces/ processes to achieve perception.

 :)
Title: Re: The New Species (Project Progress)
Post by: frankinstien on May 18, 2020, 06:12:06 pm
When you talk about engrams as episodic memory I'm assuming some kind of generalized neural structure like a cortical column; how is information like oxytocin used, and how are the receptors for oxytocin simulated in your model?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 18, 2020, 10:15:34 pm
Hi Frank

Keep in mind that this is not a human connectome, it’s an ‘alien’ (for want of a better word) connectome that functions using the same principles as a human connectome.  Although I’m sticking as closely as possible to the human model I’m under no illusions, besides, there is so little empirical information available regarding the human connectome at the resolutions I require.

The goal of the project is infinitely scalable intelligence, imagination, self awareness, consciousness, etc.

Cortical Columns

Cortical columns are not an innate part of the neocortex’s DNA derived structure; they are a consequence of learning and interlayer dendritic branching.

https://youtu.be/yewFPnVBQNo

Peptides

The 3D connectome model resides inside a 3D octree style volumetric map; this simulates cerebral fluid. There is no flow (yet) but there is dissipation, and accumulation from synapses terminating in the fluid. Each voxel can contain varying levels of transmitters, etc. The transmitters affect the neuron/ synapse properties/ maturation, global maturation cycles, sleep cycles, plasticity, etc.
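The dissipation/ accumulation idea can be sketched as a toy update rule on a dense 3D grid (an illustrative stand-in; the actual model uses an octree and the decay/ diffusion constants here are invented): each voxel leaks a fraction of its transmitter level and shares some with its neighbours.

```python
import numpy as np

def step_transmitter(field, decay=0.05, diffuse=0.1):
    """One update of a volumetric transmitter map: each voxel shares some
    of its level with its six neighbours (a crude stand-in for diffusion
    through the fluid) and then leaks a fraction (dissipation)."""
    neighbours = sum(np.roll(field, s, axis=a)
                     for a in range(3) for s in (-1, 1))
    field = field + diffuse * (neighbours / 6.0 - field)
    return field * (1.0 - decay)
```

A transmitter released at one synapse location spreads to surrounding voxels over successive steps while the total amount steadily decays, so nearby neurons feel a local, fading concentration.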

ATM I have 17 transmitter (peptide) equivalents that affect the system in various ways, as to which is the parallel to Oxytocin… I literally have no idea.  I have avoided including any emotional effects to date as preliminary experiments made the system extremely unstable, which made rigorous testing impossible.  Even my vocal traits were being misunderstood by the system... there should have been no frowns lol.

https://youtu.be/vt8gAuMxpds

I would need a lot more information on the origination, localised quantities, etc… emotional intelligence is last on my list ATM.

Nice site BTW.

 :)
Title: Re: The New Species (Project Progress)
Post by: Freddy on May 20, 2020, 02:44:49 am
@K

Quote
My tech does not use logic, mathematics, statistics, language, weights, back propagation, etc. The intelligence comes from the simulation of actual reality; the connectome model uses simulated natural forces/ processes to achieve perception.

If I understand this correctly then is any intelligence shown from your model an emergent thing, rather than coaxed by rules (maths etc)?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 20, 2020, 01:47:20 pm
Hi Freddy… 02:44… you’re a night owl lol… good question…

Emergence… it’s one of those words that has a lot of incorrect connotations but yes, the intelligence emerges from the physical (simulated) 3D structure of the connectome.

As above, our computers are based on a simple switch, the transistor.  The logic of the computer is achieved by combining those switches into evermore complex units/ combinations to produce gates, processors, etc. 

So technically it’s the physical 3D structure of the processor, the layout/ connections/ wiring that creates the functionality. If any of the components were removed or wired differently it would affect/ stop the functionality. We do not consider the functionality/ output of the processor emergent because it has been designed, wired in a specific way to do a specific job.
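This ‘everything from one switch’ point can be made concrete: with NAND standing in for the transistor-level switch (a textbook illustration, nothing to do with the connectome code), all the classical gates fall out of wiring that single primitive in different patterns.

```python
# Every classical logic function can be wired from one switching
# primitive; here NAND stands in for the transistor-level switch.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```

Exactly as the post says, it is the fixed wiring pattern, not the switch itself, that carries the function.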

The human connectome also uses a single unit for ‘logic’, the synaptic gate but… the wiring schema is not fixed, the wiring is shaped over time through experience and information is actually embedded in the processor. 

It’s hard to describe… it’s like ‘intelligence’ is an innate law of nature, it is already embedded in everything/ ‘reality’ but you need a specific mechanism to access/ process it. E.g.

The state of a chess game is stored in the physical 3D positions of the pieces.  The spatial layout is acting as a memory; objects (usually) stay where they are put.

The shaft ratios of a gear box: although we use mathematics to design the gear box, the actual output 2:1 calculation/ action is derived from physical forces; the movement and interconnection of the gears provides the running ‘intelligence/ logic’ for the system.

A piece of string of arbitrary length, if divided/ folded, can be a half, quarter, eighth, etc.  We understand this using our mathematics and measurements but… that’s not what is doing the ‘calculation’; it’s a physical property of string/ reality, something folded in half is half as long.

We get used to thinking about reality using our own human derived schemas, but there is always an underlying property/ law of reality that we are manipulating/ interpreting to achieve our function/ goal.

I have discovered (reverse engineered) a specific physical structure/ schema/ network that is able to adapt and learn using these principles.  It uses spatiotemporal relationships, and forces like inertia, to build an internal model of reality, to achieve ‘intelligence’, just like we do.

https://youtu.be/I1Dyj5hgvtc

Formal Maths

I suppose it’s down to semantics; I obviously use statistics and logic to run the biological simulation, the neuromorphic connectome/ processor… but the AGI then runs on the simulated processor, which does not use any formal statistical analysis, heuristics, rules or mathematical functions.

 :)

BTW I'm not talking about the 'panpsychist' ideology, everything is not conscious lol.

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 20, 2020, 05:05:27 pm
(https://i.imgur.com/w2atRXe.jpg)

The ‘Pattern Lock’ box displays a colour graded graph that represents 80 patterns, one colour for each pattern.  The scrolling bar marks the current pattern being injected into the system.  The height/ peak of any section of the graph represents specific output neurons firing/ tuned to one of the 80 patterns.  So in its simplest form the peak should follow the moving cursor box as the connectome recognises that specific pattern.  The length and complexity of the patterns can vary between videos.

 :)
Title: Re: The New Species (Project Progress)
Post by: Freddy on May 20, 2020, 06:09:03 pm
Quote
Hi Freddy… 02:44… you’re a night owl lol… good question…

Yes some nights I can't sleep, so I code or research. Gotta make the most of the time :)

Thanks for the detailed explanation, now it makes a lot more sense to me.  O0
Title: Re: The New Species (Project Progress)
Post by: Korrelan on May 22, 2020, 02:03:32 pm
If you reverse engineer the human ocular system (ignoring focal distance and depth of field, etc) this is an approximate procedural/ algorithmic representation of the (colour) retinal schema projected onto V1.

Multiple interpolated (receptive field ranges) resolutions are recognised at the same time through a polar centric schema, a low resolution periphery graded to a high resolution fovea. So for any high resolution information there is also a low resolution spatial element of the surrounding overall area/ scene. The schema of the retina provides a top down influence for free, so… if the rough outline is cat shaped and fovea detects feline’s eyes… it’s probably a cat.

The small image (bottom left of the left window) represents the whole of the circular retina, spatially arranged into a Cartesian format which is roughly analogous to the V1 spatial map. Rotational invariance simply moves the overall image detail left or right; scale invariance moves the image detail up/ down.
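The polar-to-Cartesian trick described above is essentially a log-polar mapping, which can be sketched for a single retinal point (an illustrative helper, not the model's code): under this mapping, rotation about the fovea becomes a shift along one axis and scaling becomes a shift along the other.

```python
import math

def logpolar_coords(x, y, cx, cy):
    """Map a Cartesian retinal point to (log-radius, angle) about the
    fovea (cx, cy). Rotation about the fovea then shifts only the angle,
    and scaling shifts only the log-radius, giving the invariances
    described above for free."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    return math.log(r + 1e-9), theta   # small epsilon avoids log(0)
```

Doubling an object's distance from the fovea adds a constant log(2) to every point's first coordinate, and rotating it changes only the second, so a recogniser on the remapped image sees translations instead of rotations/ scalings.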

Eye movements and saccades are learned/ made relative to this polar map, so for any unrecognised object/ patch in the periphery the fovea can instantly traverse to focus on that region.

For textures or repeating patterns like fur, the center high res fovea recognises the texture, and all other joined/ un-bordered retinal locations with the same/ similar colour/ contrast attributes are assumed to be the same material, shown by the colour fill.

https://youtu.be/m6Pw7qugpKE

If this retinal modality/ output is then translated into neural code, even foetal models can instantly recognise simple objects after one exposure.

https://youtu.be/uZYO6lCPQYs

The extra spikes for some objects on the pattern lock are caused by extreme similarities.  The system has noticed (from its point of view, not ours) similarities in composition, colour, etc.  Prior to being shown these objects the system had learned to ‘see’ from scratch, it learned to differentiate and categorise the visual sensory stream, its then able to recognise new objects from memory.  The spikes are an accumulation of episodic memories firing that encode/ remember the salient qualities of each object.



 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on June 19, 2020, 11:45:25 am
Closing post on project thread… My AI-Dream… has become an AI-Reality…

Thanks all for following my project, AI-Dreams has been my ‘lounge’, a place where I could relax, reflect and contemplate away from the usual harshness of the internet, both the forum and the good natured members have provided the environment I required, and helped me to bring my project to fruition... in the coming years… keep an eye out for the ‘blue K’.

Thanks and cheers.

 :)

https://www.youtube.com/user/korrelan

End
Title: Re: The New Species (Project Progress)
Post by: Korrelan on August 03, 2020, 11:39:51 pm
OK... Thanks for the hard work keeping the forum going... I guess I'll keep posting

This is the equivalent of learning 80 'words' 5000 characters long... in 3 minutes.

No big deal for a modern PC but... this very small section of my model's neocortex can recognise any of the complete 80 * 5000 instantly, or any length sub-section of the 5k pattern, at any location within the 5k wide constant stream, within 0.002 sec. It's limited to 80 patterns for testing (7K neurons and 30K synapses).

It shows my optimal dendrite branching modality so far, growing from scratch... AI savant mode lol.

https://youtu.be/oRLhtwvMWUE

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on September 19, 2020, 09:29:55 am
It's been brought to my attention that some peers find my videos confusing and are not understanding/ reading the ‘Pattern Lock’ graphs correctly.  So I’ve done some redesigning, hopefully making them clearer. The new graph can be seen in the video below, and the following paragraphs are the description/ explanation I intend to use…

The ‘Pattern Lock’ graph shows both the input and the output/ best guess.  There are 80 colour coded inputs/ patterns (left to right) and the small white square moving along the scale indicates the current number of the input being injected into the model.  The moving green rectangle shows the model’s output/ best guess for each input pattern and should ideally match/ line-up with the input.  The height of the peak shows the confidence in the pattern match.

Each input represents 5000 human faces/ parameters.

https://youtu.be/UcUY0qlwGhc

Is this clear? Any suggestions?

 :)
Title: Re: The New Species (Project Progress)
Post by: HS on September 19, 2020, 07:21:37 pm
The general idea makes sense. One question, how many unique pieces are in a column of 5000 pieces so that the net can recognize any length subsection?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on September 19, 2020, 11:41:56 pm
To show the graph peak clearly, for this demo (above) I've tried to make all the patterns as unique as possible, so no face set is duplicated, though it is finding some similarities, shown by the low response/ confidence peaks.

The model can learn to recognise any length pattern/ sub pattern embedded in any length stream.

Normally pattern similarities in pattern groups show up as extra peaks, like in this object recognition demo; the extra peaks show it's finding a commonality with another object, could be the general shape/ outline, colour, etc.  Each object is tuned to a specific location along the graph.

https://youtu.be/uZYO6lCPQYs

This brings up a good point actually... the green rectangle is just showing the strongest pattern response; on some of my vids the graph is showing features, not single/ specific recognition... this needs more thought.

 :)
Title: Re: The New Species (Project Progress)
Post by: MikeB on September 25, 2020, 09:51:44 am
Quote
each colour represents neuron clusters recognising a single word in the paragraph

Are the coloured neuron clusters spread out in the brain model in any particular order? Is there a method for which goes where? EG. Specific thinking goes on in one part of the brain... and whether they're linked to theoretical body parts like on the homunculus brain-body chart?
Title: Re: The New Species (Project Progress)
Post by: Korrelan on October 10, 2020, 03:30:00 pm
@MikeB... Apologies for the delay...

They are ordered by the system, relevant to their sensory relevance for the current task. The efferent axons from the Homunculus/ nervous system model are connected to the motor cortex as part of the initial model design.



Second attempt at an explanation for the pattern lock graph...

Most of my videos include some version of my pattern lock graph; this is (hopefully) a clear explanation of its functions.

The purpose of the graph is to show that specific sets of output neurons are firing in response to a given sensory/ input stimulus.  The output neurons/ networks are grouped into 80 colour coded blocks represented by the colour gradient (x axis) on the graph. The height of a peak (y axis) corresponds to the confidence or the number of neurons firing within that group.
 
The graph can also show (x axis) the current test pattern being injected into the model; the small white square moving along the scale shows the current pattern number (1-80).

The graph shows the firing rate of all the output neurons at the same time, and ideally for most testing purposes only one high/ strong peak should accompany any single input pattern.
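The graph computation itself reduces to a simple grouping step, which can be sketched as follows (an illustrative reconstruction of the display logic, not the actual tool): collapse the output-neuron firing rates into 80 per-group confidences and report the winning group.

```python
import numpy as np

def pattern_lock(firing, groups=80):
    """Collapse output-neuron firing rates into the 80 colour-coded
    confidence bars of the graph: mean rate per group, plus the index
    of the winning group (the 'best guess' marker)."""
    per_group = firing.reshape(groups, -1).mean(axis=1)
    return per_group, int(np.argmax(per_group))
```

For a clean recognition only one bar is high and the winning index matches the injected pattern number; features shared between patterns show up as the secondary, lower bars mentioned in earlier posts.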

Video demonstration index…

0:06 Inject 98% white noise, this totally saturates the connectome with white noise to demonstrate the various output neuron groups firing & registering on the graph.

0:15 Start decreasing noise to sensory equivalent level (50%) to show the connectome model begin to ignore/ filter out the white noise, ie no high peaks on the graph.

0:35 Start Injecting the (1-80) learned patterns.  Now a peak can be clearly seen following/ matching the input pattern number. The peak is slightly in front of the pattern because the connectome is predicting the next pattern in the sequence.  Max-P just highlights the maximum peak.

0:47 Stop the noise to show clean response of connectome to just the sensory input without the white noise interference.

0:58 Clicking on the graph sets/ changes the pattern number, clicking back shows GTP inertia.  Although the input pattern has been changed/ moved back, the current recognised pattern keeps running for a few ms.

Note: If you find this explanation confusing or have any questions please comment.

https://youtu.be/MjN8q5cKNTc

 :)
Title: Re: The New Species (Project Progress)
Post by: Korrelan on November 22, 2020, 09:10:18 pm
Part of my neural research is about gaining insights, by building a connectome and running countless simulations I sometimes get lucky… I know this looks weird, but this seems to be how the connectome optimally builds the ocular system (slowed down for viewer).

It’s a graded theta timed, phased access to a scalable interpolated matrix of retinal inputs (center surrounds) that accumulate in V1 to give an overall ‘image’.

It’s top down; each wider view is used to limit/ tune/ focus recognition to the next finer, higher rez/ detail level. I.e. if the wider scene is mostly blue then there’s no point considering trees.  The recognition process (GTP) is focused by the increasing resolution.

The initial wider angles are used for spatial awareness, orientation, location, etc, global recognition at each scale solves several problems, monocular distance judgement, scale invariance, tunnel vision, etc; see my other videos for rotational invariance solution.
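The coarse-to-fine idea maps onto a standard resolution pyramid, sketched below (a generic illustration under that assumption, not the model's retinal code): build progressively wider, lower-rez views of the same input, then process them coarsest first so each level constrains the next.

```python
import numpy as np

def coarse_to_fine(image, levels=3):
    """Build a simple resolution pyramid, coarsest level first: each
    wider, lower-rez view would be used to constrain recognition at the
    next finer level, as in the top-down scheme described above."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        # 2x2 block averaging halves the resolution at each level
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(img)
    return pyramid[::-1]   # coarsest level first
```

A recogniser walking this list gets the ‘mostly blue, so skip trees’ pruning cheaply at the coarse end before committing the expensive fine-detail recognition.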

https://youtu.be/AobB_oTj67I

 :)