The last invention.

  • 213 Replies
  • 13672 Views
*

keghn

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 499
Re: The last invention.
« Reply #165 on: May 16, 2017, 08:57:55 pm »
I can envision a spiked NN seeing a picture that is 1000 x 1000 pixels by moving it into the first layer of a Spiked Neural Network with a depth of 5000 layers. Very deep.

In the next step, this image is moved into the second layer, in the third step to the third layer, and so on, unchanged. Each movement would be pixel to pixel, or a daisy chain of spiked neurons.

While an image is on a layer there is a dedicated 1000 x 1000 Spiked NN to observe what is there, and then these observing Spiked NNs would talk to each other to do more magic :)
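
Purely for illustration (this is not keghn's code): a minimal Python sketch, assuming the image really is handed down a chain of layers unchanged, with a dedicated observer per layer. The observer here is a trivial placeholder.
Code: [Select]
import numpy as np

H, W = 1000, 1000   # image size from the example; the real chain would be 5000 layers deep

def observer(layer_index, image):
    """Stand-in for the dedicated 1000x1000 spiked NN watching this layer."""
    # Here it just reports mean brightness; a real observer would spike/recognise.
    return float(image.mean())

def run_chain(image, depth=5):
    """Move the image, unchanged, down the daisy chain and observe it at every layer."""
    reports = []
    for layer in range(depth):
        reports.append(observer(layer, image))   # pixel-to-pixel hand-off, no change
    return reports

if __name__ == "__main__":
    img = np.random.rand(H, W)
    print(run_chain(img, depth=5))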

*

WriterOfMinds

  • Trusty Member
  • **
  • Bumblebee
  • *
  • 27
    • WriterOfMinds Blog
Re: The last invention.
« Reply #166 on: May 17, 2017, 02:05:55 am »
If the goal here is something that strongly resembles the human brain, the impermanence of the learned structures doesn't necessarily strike me as a flaw.  Even long-term memories can fade if they aren't reinforced (either by recall or by experiencing the stimulus again).  Given the massive amounts of sensory data we take in, I'm sure forgetting is actually an important mechanism -- if you stop needing a piece of learned knowledge, it may be good for your brain to discard it, thereby freeing up wetware to learn something more important to you.

And of course, if a biological human brain is "turned off" at any point, an irreversible decay process begins.  Perfect recall is a super-human attribute, and korrelan isn't aiming for that yet, perhaps.

*

keghn

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 499
Re: The last invention.
« Reply #167 on: May 17, 2017, 02:47:53 am »
I have a complete AGI theory that is not based on neural networks. I am also working on an AGI theory that does use NNs, and it is close to completion. I am not in a hurry to finish the neural/connectionist AGI theory, but I find neural networks very interesting, and studying them has made my complete theory even better.

Rough synopsis of the AGI's basic structure, Jeff Hawkins influenced:

https://groups.google.com/forum/#!topic/artificial-general-intelligence/UVUZ93Zep6Y



https://groups.google.com/forum/#!forum/artificial-general-intelligence




*

korrelan

  • Trusty Member
  • ********
  • Replicant
  • *
  • 678
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #168 on: May 17, 2017, 10:40:40 am »
@WoM… Well explained.

I seem to be having a problem explaining the system; some people just do not understand the schema… I need to pin the description down so I’ll have another go.

The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there is a set of rules that simulate biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons, simulated electro-chemical gates regulate synaptic junctions, and simulated dopamine and other compounds are released that regulate the whole system/ GTP.

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.
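
A generic leaky integrate-and-fire neuron, sketched in Python, gives a feel for the kind of simulated electro-chemical activity described above. It is not the model used in this system, and every constant is invented.
Code: [Select]
import random

class LIFNeuron:
    """Generic leaky integrate-and-fire unit (illustrative only)."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential kept each step

    def step(self, input_current, modulation=1.0):
        # 'modulation' stands in for dopamine-like gating of the whole system
        self.v = self.v * self.leak + input_current * modulation
        if self.v >= self.threshold:
            self.v = 0.0          # reset after the action potential
            return 1              # spike travels down the virtual axon
        return 0

if __name__ == "__main__":
    n = LIFNeuron()
    spikes = [n.step(random.uniform(0.0, 0.5)) for _ in range(50)]
    print(sum(spikes), "spikes in 50 steps")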

A Symbiotic Relationship

Within the system the GTP defines the connectome, and the connectome guides the GTP. This is a top down, bottom up schema; they both rely on each other.

Both the connectome and the GTP can be easily saved and re-started at any time.
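
A trivial sketch of that save/restart point, assuming the connectome is some graph-like structure and the GTP is the current activation state. The field names are invented; this is not the actual system.
Code: [Select]
import pickle

class BrainState:
    """Hypothetical container: wiring (connectome) plus activity (GTP)."""
    def __init__(self):
        self.connectome = {"synapse_weights": {}, "axon_map": {}}   # structure
        self.gtp = {"active_neurons": set(), "phase": 0.0}          # activity

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump((self.connectome, self.gtp), f)

    def load(self, path):
        with open(path, "rb") as f:
            self.connectome, self.gtp = pickle.load(f)

# e.g. snapshot a known-good 'infant' brain and restart from it later:
# brain = BrainState(); brain.save("infant.pkl"); brain.load("infant.pkl")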

The GTP is sustained by the physical structure/ attributes of the connectome.  As learning takes place the connectome is altered, and this can result in a gradual fade or loss of resolution within the GTP.  It should never happen, but because I’m still designing the system, on occasion it does.

Because the GTP is so complex and morphs/ phases through time, any incorrect change to the connectome can radically alter the overall pattern.  If the GTP is out of phase with the physical memories it laid down in the connectome then it cannot access them, and whole blocks of memories and experience become irretrievable.

The connectome is plastic, so the irretrievable memories would eventually fade and the GTP would re-use the physical cortex space, but… information is learned in a hierarchical manner.  So there is nothing to base new learning on if the original memories can’t be accessed… it’s like the game ‘Jenga’: knock the bottom blocks out and the whole tower comes falling down.

Once a GTP has become corrupt it then corrupts the connectome… or sometimes the GTP can fade out… then there is no point saving it… I have to start teaching it from scratch.  If you're wondering why I don’t just reload an earlier saved version of the connectome and GTP; I would need to know exactly when the corruption occurred… it could have been something the AGI heard or saw… or indeed a coding error within the simulation.  I do have several known good saved 'infant' AGI’s I can restart from and luckily the system learns extremely quickly… but it’s still a pain in the a**.

No one knows for sure how the human brain functions.  This is my attempt at recreating the mechanisms that produce human intelligence, awareness and consciousness.  I’m working in the dark, back-engineering a lump of fatty tissue into an understandable, usable schema. I can’t reproduce every subtle facet of the problem space because my life just isn’t going to be long enough lol.  I have to take short cuts; accelerated learning for example… I can’t wait ten years for each connectome/ AGI I test to mature at human rates of development.  Our connectome develops slowly and this has a huge impact on stability… a lot of the GTP fading problems are caused because I’m forcing the system to mature too quickly… did you understand and recognise the alphabet at 1 hour old?

What I'm building is an ‘Alien’ intelligence based on the Human design.  If I create an intelligent, conscious machine that can learn anything… it can then learn to be ‘Human’ like...  and if it’s at least as intelligent as me… it can carry on where I leave off.

End Scene: Cue spooky music, fog & lightning. Dr. Frankenstein exits stage left (laughing maniacally).

@Keghn

I’m reading your provided links now… this has evolved since I last saw it.

 :)
« Last Edit: May 17, 2017, 11:48:59 am by korrelan »
It thunk... therefore it is!

*

LOCKSUIT

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1097
  • First it wiggles, then it is rewarded.
    • Enter Lair
Re: The last invention.
« Reply #169 on: May 17, 2017, 03:12:27 pm »
How do you know your AI is learning? What has it achieved? Aren't you just playing with electricity packets and relay reactions stuffz? It not doin anyting....we also already have decent pattern recognizers and hierarchies...

I'm really confused at your master plan.

My AI's first achievements will be learning to crawl the house, and much more.

*

keghn

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 499
Re: The last invention.
« Reply #170 on: May 17, 2017, 03:54:53 pm »
Well, my method of unsupervised learning is to have a bit stream coming into the brain. It could be a video stream.
The observation neural network looks at the stream coming in. It could be CNN style. It would cut the image up into squares. For now, let us say it is looking for a vertical edge. This box with an edge is piped down deep into the brain and scanned across a bunch of little spiked NNs. The one that gives the best response is piped back up and goes deeper into the observation neural network.

An object is made up of sub-features of lines, like a ball or a person in a wheelchair.
Sub-features are vertical lines, horizontal lines, solid colors of various types, and corners of various orientations.
Each sub-feature is paired with a weight so that you can transform one object into another for comparative clustering.
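
For illustration only (not keghn's code): a small Python sketch where a frame is cut into squares and each square is scanned across a bank of tiny sub-feature detectors, keeping the one with the best response. Plain correlation kernels stand in for the little spiked NNs.
Code: [Select]
import numpy as np

FILTERS = {
    "vertical_edge":   np.array([[-1.0,  1.0], [-1.0,  1.0]]),
    "horizontal_edge": np.array([[-1.0, -1.0], [ 1.0,  1.0]]),
    "solid":           np.array([[ 0.25, 0.25], [ 0.25, 0.25]]),
}

def cut_into_squares(image, size=2):
    """Cut the incoming frame into non-overlapping square boxes."""
    h, w = image.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield (y, x), image[y:y + size, x:x + size]

def best_subfeature(patch):
    """Scan the patch across the filter bank and keep the strongest response."""
    scores = {name: abs(float((patch * k).sum())) for name, k in FILTERS.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    frame = np.tile([0.0, 1.0, 0.0, 1.0], (4, 1))   # vertical stripes
    for pos, patch in cut_into_squares(frame):
        print(pos, "->", best_subfeature(patch))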

 Clustering with Scikit with GIFs: 
https://dashee87.github.io/data%20science/general/Clustering-with-Scikit-with-GIFs/

*

ivan.moony

  • Trusty Member
  • ********
  • Replicant
  • *
  • 714
  • look, a star is falling
Re: The last invention.
« Reply #171 on: June 24, 2017, 02:41:05 pm »
I guess the neural network approach is what is called a bottom-up approach. It is about simulating neurons, then observing what they can do when they are grouped through multiple levels into larger wholes. It is known that artificial neurons can successfully be used for a kind of fuzzy recognition, but I'm also interested in seeing how neurons could be used for making decisions and other processes that humans have at their disposal.

Are there any concrete plans for how the complex decision process could be carried out by neurons, or do we still have to isolate the decision phenomenon from the other processes that are happening in a neural network? I'm sure (if a NN resembles a natural brain) that the decision process is hidden somewhere in there.

I'm also wondering about other mind processes that may be hidden inside neural networks, ones that are not observed at present but will be noticed in the future.
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

korrelan

  • Trusty Member
  • ********
  • Replicant
  • *
  • 678
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #172 on: June 24, 2017, 04:14:27 pm »
If CPU designers and electronics engineers were to wire transistors together using the same schema as a typical feed-forward neural network simulation… nothing would work.  Although the transistor's design is important, it's the design of the whole circuit, with its supporting components, that achieves the desired results.

The human brain isn't just a collection of neurons wired together in a feed-forward schema… it also has a complex wiring diagram/ connectome, which has to be understood/ simulated to achieve any desired results.



This is a very simplified representation of a connectome in my simulations.  The circle of black dots represents the cortex, comprising distinct areas of neuron sheet tuned to recognise distinct parts of the global thought pattern (GTP). Each section specialises through learning and experience and pumps the sparse results of its ‘calculations’ back into the GTP.  The lines linking the cortex areas represent the long-range axons that link the various parts of the brain together; there are obviously millions of these connecting various areas (not shown).  The different colours represent sub-facets of the GTP: smaller patterns that represent separate parts/ recognised sensory streams/ ‘thoughts’, etc.  The sub-facets are generated by the cortex; sensory streams never enter the GTP, only the results of the recognition process.

Points A & B represent attention/ relay/ decision areas of the cortex. These areas arise because of similarities detected between various patterns.  So the fact that an objects rough shape is round, or two events happen at the same time of day, that one event always follows another will trigger a recognition in one of these areas, this will inject a pattern into the GTP that the rest of the areas recognise for what it represents, they recognise it because the whole connectome has grown from a small basic seed, and they have developed together, using and recognising each others patterns, their pattern output  has influenced their connected neighbours development and vice versa… whilst experiencing the world. A Neural AGI seed...



So you have to consider an experience/ object/ thought broken down into its smallest facets, with each facet having a pattern that links all the relevant data for that facet. The pattern that represents a facet can be initiated from any part of the cortex, so hearing the word ‘apple’ starts a bunch of patterns that mean object, fruit, green, etc.  This is how we recognise similar situations, or a face in the fire, or pain in others; it drives empathy as well as many other human mental qualities.

The GTP is a constantly changing/ morphing pattern guided by the connectome that represents the AGI’s understanding of that moment in time, with all the memories/ experiences and knowledge that goes with it. As the connectome grows and the synapses are altered by the GTP to encode memories/ knowledge… one can’t exist without the other.
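
A toy Python sketch of the shape of that loop: a shared global pattern, specialised areas that each read it, and each area injecting only the sparse result of its own recognition back in. The tags and areas are invented, and this is nothing like the real simulation's scale.
Code: [Select]
# The GTP stands in here as a set of currently-active pattern tags.
gtp = {"sensory:red", "sensory:round"}

# Each 'cortex area' recognises a conjunction and injects a sparse result tag.
CORTEX_AREAS = {
    "object_area":  (lambda p: {"sensory:red", "sensory:round"} <= p, "object:apple"),
    "context_area": (lambda p: "object:apple" in p, "context:fruit"),
}

def step(pattern):
    """One pass: every area reads the global pattern, results are fed back in."""
    injected = set()
    for name, (recognises, result_tag) in CORTEX_AREAS.items():
        if recognises(pattern):
            injected.add(result_tag)   # only the recognition result enters the GTP
    return pattern | injected

for _ in range(3):
    gtp = step(gtp)
    print(sorted(gtp))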

Simples… Make more sense?
« Last Edit: June 24, 2017, 04:55:30 pm by korrelan »
It thunk... therefore it is!

*

ivan.moony

  • Trusty Member
  • ********
  • Replicant
  • *
  • 714
  • look, a star is falling
Re: The last invention.
« Reply #173 on: June 24, 2017, 05:13:34 pm »
Is it possible to have three placeholders, say "left", "right" and "result"? Now  if we observe a world where "result" = "left" + "right", we would have it memorized like this:
Code: [Select]
left right result
-----------------
0    0     0
0    1     1
1    0     1
1    1     2
...

And if we later observe "left" as "1" and "right" as "1", what do we have to do to conclude that "result" is "2"? Naively, I can imagine a question: what is the result if left is 1 and right is 1? How would this question be implemented in neural networks?

[Edit]
Theoretically, is it possible to feed 1 as "result" and to get an imagination of (("left" = 0, "right" = 1) or ("left" = 1, "right" = 0))? Moreover, is it possible to chain neural reactions, like you feed "cookies" and you get a chain ("flour" + "milk" + "chocolate") + "bake at 180°" + "serve on a plate"?
« Last Edit: June 24, 2017, 06:11:58 pm by ivan.moony »
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

keghn

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 499
Re: The last invention.
« Reply #174 on: June 24, 2017, 06:39:24 pm »
An ANN can do this with one gate. Left and right would be the inputs, and then the bias for the third input.
The output would be different:
0 and 0 would give 0
0 and 1 would give 0.5
1 and 1 would give 1.0

@Korrelan works with spiked neural networks? Those are closer to the logic of brain neurons, a little different from the ones used in deep learning, which are so popular.

There are different styles of spiked neural logic. Every scientist has their own opinion on how they work, based on the latest science, or an implementation that makes their theories or projects work.

In my logic of spiked NNs, a zero is static noise: white noise of a low voltage.
Noise is also all values given at one time, maximum confusion, with no steady path to focus on or follow.
When a spiked NN is trained and generates a value, it will go from wild random static to a solid, un-vibrating value from zero to something greater than zero, then drop back down to random low-voltage static.
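
A toy signal generator along those lines, purely to illustrate the idle-static versus solid-value behaviour described above (none of this is actual spiked-NN code):
Code: [Select]
import random

def output_signal(recognised, steps=10, noise_level=0.05, value=0.8):
    """Idle = low-voltage static; recognition = a solid, non-vibrating value."""
    if recognised:
        return [value] * steps                  # steady value while recognised
    return [random.uniform(0.0, noise_level)    # white-noise 'zero'
            for _ in range(steps)]

print("idle:      ", [round(v, 2) for v in output_signal(False)])
print("recognised:", output_signal(True))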


*

korrelan

  • Trusty Member
  • ********
  • Replicant
  • *
  • 678
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #175 on: June 24, 2017, 07:03:08 pm »
I’m not sure what you’re asking…

You’re describing a logic AND gate, where either a single ‘left’ or ‘right’ alone would result in 0, but both together would equal 1… but anyway…

This simple example can be handled by one single traditional simulated neuron.  A traditional simulated neuron can be viewed as an adaptable logic gate.

Traditional simulated neurons use weights as their inputs (any number), a threshold and a bias; you can forget the bias for this example.

The threshold is a trigger/ gate value, ie… if the sum of the weights is greater than the threshold then ‘FIRE’.

So in your example the threshold of a single neuron would be set to 2, either input would only add 1… both inputs would result in 2 (sum the inputs) and the neuron would fire a 1... not a 2.

Although usually the weights are less than 1, so your weights (inputs) would actually be 0.5 and 0.5 with a threshold of 1… firing a 1.

To code a simple neuron…

Input A
Input B
Thresh=1

If A + B >= Thresh then Output=1 else Output=0

Traditional neurons themselves are very simple… their strength lies in combining them into networks.
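
A runnable Python version of the same logic, with the inputs and the 0.5 weights written out explicitly:
Code: [Select]
def neuron(inputs, weights, threshold=1.0):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# ivan.moony's 'left'/'right' example with a weight of 0.5 on each input:
for left in (0, 1):
    for right in (0, 1):
        print(left, right, "->", neuron([left, right], [0.5, 0.5]))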

Edit: As Keghn said…

 :)
It thunk... therefore it is!

*

keghn

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 499
Re: The last invention.
« Reply #176 on: June 25, 2017, 12:00:43 am »
   
There are detector NNs and then generator NNs.
Generator NNs are used for memory.
One detector NN can detect many objects, like cookies, flour, cake, and bowl.
Many raw pixels go into a black box, then come out on the other side as output pixels.
Each output pixel is assigned to some object, like cookie for one and spoon for another.
That output pixel only goes high when a spoon is in the input image and turns off when it is removed. This is a detector NN. It is not memory; it is more for action. It detects a fly and then brushes it away.
So a real brain needs to record, and that is where the generator/recorder/synthesis NN comes into play. It is pretty much the same as recording into computer memory, RAM.

The activations of the detector NN are recorded, and also the image that caused them to activate.

There is an address pointer into memory so it can be rewound, fast-forwarded, jumped ahead, and so on.

Using two or more address pointers into memory, running at the same time, is the basis for an internal 3D simulator.
Viewing two images at the same time is confusing, so rewind, speed-up, and slow-down controls are used to keep things in order.
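
A rough Python sketch of that recording scheme, assuming an append-only tape of frames plus detector activations, and named address pointers that can be rewound, fast-forwarded, or run side by side. The details are invented for illustration.
Code: [Select]
class MemoryTape:
    """Append-only record of (frame, detector activations) with movable pointers."""
    def __init__(self):
        self.frames = []          # recorded images
        self.activations = []     # detector outputs for each frame
        self.pointers = {}        # named address pointers into the tape

    def record(self, frame, detector_outputs):
        self.frames.append(frame)
        self.activations.append(detector_outputs)

    def seek(self, name, address):
        self.pointers[name] = max(0, min(address, len(self.frames) - 1))

    def step(self, name, delta=1):   # rewind (delta < 0) or fast-forward (delta > 0)
        self.seek(name, self.pointers.get(name, 0) + delta)
        i = self.pointers[name]
        return self.frames[i], self.activations[i]

# Two pointers running at once -- the basis for the internal simulator idea:
tape = MemoryTape()
for t in range(5):
    tape.record(frame=f"frame{t}", detector_outputs={"spoon": t % 2})
tape.seek("a", 0)
tape.seek("b", 4)
print(tape.step("a", +1), tape.step("b", -1))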

A simple NN can be a pyramid. They are very flexible and can have many types of set-ups, like ten inputs and ten outputs.

 To remember you need an echo: 





*

korrelan

  • Trusty Member
  • ********
  • Replicant
  • *
  • 678
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #177 on: June 25, 2017, 10:21:24 pm »
Gave my bot a serious optics/ camera upgrade today. 

Got rid of the stereo SD web cams and replaced them with a matched pair of true HD cams on 3D gimbals.

Upgraded the software etc, to handle the HD resolution.



 :)



:)
« Last Edit: June 25, 2017, 11:25:52 pm by korrelan »
It thunk... therefore it is!

*

infurl

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 264
  • Humans will disappoint you.
    • Home Page
Re: The last invention.
« Reply #178 on: June 25, 2017, 10:46:47 pm »
I'm working on something comparable at the moment too. I'm trying to build a drone that uses stereo vision. I've just acquired a pair of Runcam2 HD cameras but haven't decided on a gimbal mechanism yet. Can you provide more details about the gear that you are using?

*

korrelan

  • Trusty Member
  • ********
  • Replicant
  • *
  • 678
  • Look into my eyes! WOAH!
    • Google +
Re: The last invention.
« Reply #179 on: June 25, 2017, 11:49:11 pm »
I used to have a really antiquated control system but have recently found a much easier, neater method.

The stereo cameras are out of HD 3D gimbal domes.  I’m using passive baluns to convert the composite video signals for twisted-pair transmission so I can pass them through a single cat5 cable, then baluns again to convert back at the DVR end.

The main gimbal is an HD PTZ camera, minus the housing, using the Pelco-D protocol.

The 3D gimbals on the stereo cameras make it easy to adjust and calibrate the alignments for stereo overlap, etc.  They give a wide stereo view with a 30% overlap; the center HD cam has a 30x zoom lens, which provides long range and, in combination with the other cams as periphery, a high-def fovea.

I’ve tried moving the X,Y axes of the stereo cameras with servos but they were a bug*er to keep aligned, so now I’ve opted for fixed cameras and just move the ‘head’.

I run the cameras through a 16-cam HD DVR with a built-in 485 PTZ controller, which came with a full SDK for Windows, so I can easily get live feeds and control the PTZ motors across my network/ web.

This made it really easy to control the motors and integrate the camera feeds into my AGI.
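
For anyone curious what driving a Pelco-D head looks like at the byte level, here is a minimal sketch. It is not the code used here (that goes through the DVR's SDK); it assumes pyserial and a hypothetical RS-485 adapter port.
Code: [Select]
import serial  # pyserial

def pelco_d_frame(address, cmd1, cmd2, data1, data2):
    """Build a 7-byte Pelco-D message: sync, addr, cmd1, cmd2, data1, data2, checksum."""
    body = [address, cmd1, cmd2, data1, data2]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

def pan_right(port, address=1, speed=0x20):
    port.write(pelco_d_frame(address, 0x00, 0x02, speed, 0x00))

def stop(port, address=1):
    port.write(pelco_d_frame(address, 0x00, 0x00, 0x00, 0x00))

if __name__ == "__main__":
    # '/dev/ttyUSB0' is a placeholder for whatever RS-485 adapter is in use
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as rs485:
        pan_right(rs485, speed=0x1F)
        stop(rs485)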



I also acquired a simple security 485 PTZ joystick controller for testing purposes; you can see it in this image.

 :)
It thunk... therefore it is!

 

