Author Topic: The last invention.  (Read 9719 times)

keghn

  • *
  • Starship Trooper
  • *******
  • Posts: 371
Re: The last invention.
« Reply #165 on: May 16, 2017, 07:45:55 PM »
 I can envision a spiking NN seeing a picture that is 1000 x 1000 pixels, moving into the first layer of a spiking neural network with a depth of 5000 layers. Very deep.

 In the next step, this image is moved into the second layer, on the third step into the third layer, and so on, unchanged. Each movement would be pixel to pixel: a daisy chain of spiking neurons.
 While an image is on a layer there is a dedicated 1000 x 1000 spiking NN to observe what is there. These observing spiking NNs would then talk to each other to do more magic. :)
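
A rough Python sketch of that daisy chain, with a placeholder observer function standing in for the dedicated spiking nets (the sizes are just the ones mentioned above; none of this is anyone's actual code):

```python
import numpy as np

DEPTH = 5000          # layers in the chain ("very deep")
H, W = 1000, 1000     # image resolution

def observe(image):
    """Placeholder for the dedicated 1000 x 1000 observing net on a layer."""
    return image.mean()   # a real observer would be a spiking net

layers = [None] * DEPTH   # what is currently sitting on each layer

def tick(new_image=None):
    """Move every image one layer deeper, pixel to pixel, unchanged."""
    layers[1:] = layers[:-1]
    layers[0] = new_image
    # Each layer's observer reports on its own contents; these reports
    # are what would then "talk to each other" in a second stage.
    return {i: observe(img) for i, img in enumerate(layers) if img is not None}

frame = np.random.rand(H, W)
readings = tick(frame)    # frame now sits on layer 0
readings = tick()         # next tick: it has moved to layer 1, unchanged
```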

WriterOfMinds

  • *
  • Roomba
  • Posts: 10
    • WriterOfMinds Blog
Re: The last invention.
« Reply #166 on: May 17, 2017, 12:53:55 AM »
If the goal here is something that strongly resembles the human brain, the impermanence of the learned structures doesn't necessarily strike me as a flaw.  Even long-term memories can fade if they aren't reinforced (either by recall or by experiencing the stimulus again).  Given the massive amounts of sensory data we take in, I'm sure forgetting is actually an important mechanism -- if you stop needing a piece of learned knowledge, it may be good for your brain to discard it, thereby freeing up wetware to learn something more important to you.

And of course, if a biological human brain is "turned off" at any point, an irreversible decay process begins.  Perfect recall is a super-human attribute, and korrelan isn't aiming for that yet, perhaps.

keghn

  • *
  • Starship Trooper
  • *******
  • Posts: 371
Re: The last invention.
« Reply #167 on: May 17, 2017, 01:35:53 AM »
 I have a complete AGI theory that is not based on neural networks. I am also working on an AGI theory that does use NNs, and it is close to completion. I am in no hurry to finish the AGI neural connectionist theory, but I find neural networks very interesting, and studying them has made my complete theory even better.

Rough synopsis of the AGI's basic structure, Jeff Hawkins influenced:

https://groups.google.com/forum/#!topic/artificial-general-intelligence/UVUZ93Zep6Y

https://groups.google.com/forum/#!forum/artificial-general-intelligence

korrelan

  • *
  • Replicant
  • ********
  • Posts: 587
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #168 on: May 17, 2017, 09:28:40 AM »
@WoM… Well explained.

I seem to be having a problem explaining the system; some people just do not understand the schema… I need to pin the description down so I’ll have another go.

The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there is a set of rules that simulate biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.
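
A hypothetical sketch of how such a connectome might be represented, with knowledge living purely in the wiring (the class names and fields here are my guesses for illustration, not korrelan's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    position: tuple        # (x, y, z) in virtual 3D space
    threshold: float = 1.0

@dataclass
class Synapse:
    pre: int               # index of the source neuron
    post: int              # index of the target neuron
    weight: float          # plasticity rules would modify this over time

@dataclass
class Connectome:
    neurons: list = field(default_factory=list)
    synapses: list = field(default_factory=list)

    def grow_synapse(self, pre, post, weight):
        """Simulated growth: learning literally changes the wiring."""
        self.synapses.append(Synapse(pre, post, weight))

# The "which switch lights which lamp" knowledge is just the synapse list.
c = Connectome()
c.neurons += [Neuron((0.0, 0.0, 0.0)), Neuron((1.0, 0.0, 0.0))]
c.grow_synapse(pre=0, post=1, weight=0.5)
```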

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons, simulated electro-chemical gates regulate synaptic junctions, and simulated dopamine and other compounds are released that regulate the whole system/ GTP.

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.
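
A toy leaky integrate-and-fire loop to make the wiring/electricity analogy concrete; this is my own stand-in, not korrelan's simulation, and the dopamine scalar and all parameters are purely illustrative:

```python
import numpy as np

N = 1000
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (N, N))   # the "house wiring": connectome weights
v = np.zeros(N)                  # the "electricity": membrane potentials (GTP)
dopamine = 1.0                   # global modulator scaling all transmission

def step(external_input, leak=0.9, threshold=1.0):
    """One tick of the global thought pattern over a fixed connectome."""
    global v
    spikes = (v >= threshold).astype(float)   # neurons over threshold fire
    v = v * (1.0 - spikes)                    # firing neurons reset
    # Action potentials travel along the wiring; the modulator scales them.
    v = leak * v + dopamine * (W @ spikes) + external_input
    return spikes

for _ in range(10):                           # a short run of the pattern
    pattern = step(rng.normal(0, 0.2, N))
```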

A Symbiotic Relationship

Within the system the GTP defines the connectome, and the connectome guides the GTP. This is a top down, bottom up schema; they both rely on each other.

Both the connectome and the GTP can be easily saved and re-started at any time.
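
A toy illustration of that snapshot idea, assuming the state is held in plain arrays as in the sketch above (the filenames and helpers are hypothetical):

```python
import numpy as np

# Toy state: connectome wiring plus GTP activity, as in the sketch above.
W, v, dopamine = np.zeros((1000, 1000)), np.zeros(1000), 1.0

def save_snapshot(path, W, v, dopamine):
    """Freeze both the wiring (connectome) and the activity state (GTP)."""
    np.savez(path, W=W, v=v, dopamine=dopamine)

def load_snapshot(path):
    """Restart from a known-good save, e.g. an 'infant' AGI."""
    snap = np.load(path)
    return snap["W"], snap["v"], float(snap["dopamine"])

save_snapshot("infant_agi.npz", W, v, dopamine)
W, v, dopamine = load_snapshot("infant_agi.npz")
```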

The GTP is sustained by the physical structure/ attributes of the connectome.  As learning takes place the connectome is altered, and this can result in a gradual fade or loss of resolution within the GTP.  It should never happen, but because I’m still designing the system, on occasion it does.

Because the GTP is so complex and morphs/ phases through time, any incorrect change to the connectome can radically alter the overall pattern.  If the GTP is out of phase with the physical memories it laid down in the connectome then it cannot access them, and whole blocks of memories and experience become irretrievable.

The connectome is plastic, so the irretrievable memories would eventually fade and the GTP would re-use the physical cortex space, but… information is learned in a hierarchical manner.  So there is nothing to base new learning on if the original memories can’t be accessed… it’s like the game ‘Jenga’: knock the bottom blocks out and the whole tower comes falling down.

Once a GTP has become corrupt it then corrupts the connectome… or sometimes the GTP can fade out… then there is no point saving it… I have to start teaching it from scratch.  If you're wondering why I don’t just reload an earlier saved version of the connectome and GTP: I would need to know exactly when the corruption occurred… it could have been something the AGI heard or saw… or indeed a coding error within the simulation.  I do have several known-good saved 'infant' AGIs I can restart from, and luckily the system learns extremely quickly… but it’s still a pain in the a**.

No one knows for sure how the human brain functions.  This is my attempt at recreating the mechanisms that produce human intelligence, awareness and consciousness.  I’m working in the dark, reverse engineering a lump of fatty tissue into an understandable, usable schema. I can’t reproduce every subtle facet of the problem space because my life just isn’t going to be long enough lol.  I have to take short cuts; accelerated learning, for example… I can’t wait ten years for each connectome/ AGI I test to mature at human rates of development.  Our connectome develops slowly and this has a huge impact on stability… a lot of the GTP fading problems are caused because I’m forcing the system to mature too quickly… did you understand and recognise the alphabet at 1 hour old?

What I'm building is an ‘Alien’ intelligence based on the Human design.  If I create an intelligent, conscious machine that can learn anything… it can then learn to be ‘Human’ like...  and if it’s at least as intelligent as me… it can carry on where I leave off.

End Scene: Cue spooky music, fog & lightning. Dr. Frankenstein exits stage left (laughing maniacally).

@Keghn

I’m reading your provided links now… this has evolved since I last saw it.

 :)
« Last Edit: May 17, 2017, 10:36:59 AM by korrelan »
It thunk... therefore it is!

LOCKSUIT

  • *
  • Terminator
  • *********
  • Posts: 924
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: The last invention.
« Reply #169 on: May 17, 2017, 02:00:27 PM »
How do you know your AI is learning? What has it achieved? Aren't you just playing with electricity packets and relay reactions stuffz? It not doin anyting....we also already have decent pattern recognizers and hierarchies...

I'm really confused at your master plan.

My AI's first achievements will be learning to crawl the house, and much more.

keghn

  • *
  • Starship Trooper
  • *******
  • Posts: 371
Re: The last invention.
« Reply #170 on: May 17, 2017, 02:42:53 PM »
 Well, my method of unsupervised learning is to have a bit stream coming into the brain. It could be a video stream.
 The observation neural network looks at the stream coming in. It could be CNN-style. It would cut the image up into squares. For now, let us say it is looking for a vertical edge. This box with an edge is piped down deep into the brain and scanned across a bunch of little spiking NNs. The one that gives the best response is piped back up and goes deeper into the observation neural network.

 An object is made up of sub-features such as lines. Like a ball, or a person in a wheelchair.
 Sub-features are vertical lines, horizontal lines, solid colors of various types, and corners of various orientations.
 Each sub-feature is paired with a weight so that you can transform one object into another for comparative clustering.
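
A rough, runnable sketch of that scan, with small hand-made kernels standing in for the bank of little spiking NNs (the box size and kernel set are illustrative assumptions):

```python
import numpy as np

BOX = 8   # side of each square the image is cut into (illustrative)

# Tiny detector bank standing in for the "bunch of little spiking NNs".
half = np.r_[np.ones(BOX // 2), -np.ones(BOX // 2)]
kernels = {
    "vertical_edge":   np.outer(np.ones(BOX), half),   # left/right contrast
    "horizontal_edge": np.outer(half, np.ones(BOX)),   # top/bottom contrast
    "solid_color":     np.ones((BOX, BOX)) / BOX**2,   # uniform patch
}

def best_response(square):
    """Scan one square across the detector bank; return the best responder."""
    scores = {name: abs(np.sum(square * k)) for name, k in kernels.items()}
    return max(scores, key=scores.get)

image = np.random.rand(64, 64)
features = [(y, x, best_response(image[y:y + BOX, x:x + BOX]))
            for y in range(0, 64, BOX)
            for x in range(0, 64, BOX)]
```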

 Clustering with Scikit with GIFs: 
https://dashee87.github.io/data%20science/general/Clustering-with-Scikit-with-GIFs/
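
And a minimal scikit-learn call in the spirit of the linked post, clustering objects by their sub-feature weight vectors (the data here is random, purely to show the API):

```python
import numpy as np
from sklearn.cluster import KMeans

# 100 objects, each described by 4 sub-feature weights
# (vertical lines, horizontal lines, solid colors, corners).
X = np.random.rand(100, 4)

labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(labels[:10])   # cluster assignment per object
```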

 
