The last invention.

*

ranch vermin

Re: The last invention.
« Reply #240 on: November 10, 2017, 12:14:00 pm »
Is that just a filter, or does it use learning to improve the result?

*

korrelan

Re: The last invention.
« Reply #241 on: November 14, 2017, 11:43:54 pm »
I just read your question and thought… what’s he on about? The video is on the previous page and I’d not noticed. I was rushing to get ready for the first call of my working day, and that’s not the video I thought I had posted lol. It was for Yot’s thread on outlines.

I’ll explain it here anyway… It uses two kinds of machine learning. 

The first is an adaptive convolutional filter I designed that learns the best parameters to apply to each section/ type of image. It automatically adjusts for brightness, saturation, clarity/ sharpness, resolution, etc.  Its job is to learn/ adapt the best method for extracting a template that the pixel bot (PB) can follow.

The second machine learning technique is the pixel bot.  This little guy is a simple bot with eight pixel sensors around its perimeter.  I have taught/ trained it to follow and trace outlines. 

So… the pixel bot learns to follow outlines and the adaptive convolution filter learns to extract the most information from the image.  If the PB manages to create a full outline then it notifies the filter that this is a good convolution combination for this type of image… and a box is drawn around the shape on the original image.

Notice that the outline is stabilized in the left window.  As soon as the mouse pointer enters a shape the PB converts the object into a set of vector coordinates, which are then easily centred and stabilized.

That’s how the system manages to easily extract numbers from captchas etc… it learns to extract letters/ numbers from the confusion.
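
Roughly, the training loop couples the two learners like this. This is just a minimal Python sketch of the idea, not the actual code: apply_filter, trace_outline and the parameter names are all stand-ins.

Code:
import random

def apply_filter(image, params):
    """Brightness/contrast/threshold pass over a greyscale image (list of rows)."""
    b, c, t = params["brightness"], params["contrast"], params["threshold"]
    adjusted = [[(p - 128) * c + 128 + b for p in row] for row in image]
    return [[255 if p >= t else 0 for p in row] for row in adjusted]

def adapt_filter(image, trace_outline, trials=200):
    """Perturb the filter parameters until the pixel bot closes a full outline.

    trace_outline(filtered_image) -> bool is the PB's 'good combination'
    signal described above."""
    scale = {"brightness": 16.0, "contrast": 0.2, "threshold": 16.0}
    best = {"brightness": 0.0, "contrast": 1.0, "threshold": 128.0}
    for _ in range(trials):
        candidate = {k: v + random.uniform(-scale[k], scale[k])
                     for k, v in best.items()}
        if trace_outline(apply_filter(image, candidate)):
            return candidate  # the PB closed an outline: keep this combination
    return best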

You can see the PB learning process here…



I was taking a break from my AGI and thinking about Yot’s outline routines.

 :)

*

ranch vermin

Re: The last invention.
« Reply #242 on: November 15, 2017, 07:53:42 am »
I would’ve thought it would have involved a lot of noise before it’s learnt. How come it’s either off or an exact outline?

If I had a random convolution filter, it would be all kinds of blurs and differences.

*

ivan.moony

Re: The last invention.
« Reply #243 on: November 15, 2017, 07:59:02 am »
Korrelan, very interesting, I think this could be how humans detect edges (they move eye focus around an object). It seems that neural nets can do more than I thought.

*

korrelan

Re: The last invention.
« Reply #244 on: November 15, 2017, 10:28:35 am »
Quote
I would’ve thought it would have involved a lot of noise before it’s learnt. How come it’s either off or an exact outline?

I have manually trained the pixel bot (PB) to follow outlines. Because I used my own judgement to trace the lines as best I could, the PB has learnt to do the same, and this includes jagged outlines. It will run along a fairly jagged edge and produce a straight line quite well. It could do with more training though, as it sometimes gets confused by certain patterns of pixels lol.

Quote
If I had a random convolution filter, it would be all kinds of blurs and differences.

The convolution filter is indeed based on the human eye and is localised around the mouse pointer.  It affects small regions individually, not the whole image.  It does not use a random filter; again, I have manually trained the filter to best extract the required detail.

Quote
Korrelan, very interesting, I think this could be how humans detect edges (they move eye focus around an object)

This stems from my AGI research: I use a model of the mammalian visual cortex which learns to detect lines and gradients automatically; this is just a very simplified version used only for outlines.

Quote
It seems that neural nets can do more than I thought.

I tried to keep the project simple, so I’ve not used any kind of neural net; it’s all good old-fashioned lookup lists.  The PB, for example, just stores a list of perimeter sensor readings along with the direction I’ve told it to move.  The PB can then easily and quickly find the closest match in the list and move in the relevant direction.
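
The core of the PB fits in a few lines. A minimal sketch (the sensor readings and taught moves below are invented; the real list comes from the manual training):

Code:
def hamming(a, b):
    """Distance between two eight-sensor perimeter readings."""
    return sum(x != y for x, y in zip(a, b))

# (perimeter sensor reading, taught direction) pairs recorded during training.
# Directions are (dx, dy) steps; these example readings are made up.
memory = [
    ((1, 1, 1, 0, 0, 0, 0, 0), (1, 0)),   # edge along the top: move right
    ((0, 0, 0, 0, 1, 1, 1, 0), (-1, 0)),  # edge along the bottom: move left
]

def step(sensors):
    """Closest stored reading wins; move in the direction taught for it."""
    reading, move = min(memory, key=lambda entry: hamming(entry[0], sensors))
    return move

print(step((1, 1, 0, 0, 0, 0, 0, 0)))  # -> (1, 0)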

The system is basically just reproducing my skills at following an outline.

I still disagree that using outlines is a good method for recognising objects: both scale and rotational invariance can be accounted for, but occlusion cannot.  Feature detection is still by far the best method.

 :)

*

korrelan

Re: The last invention.
« Reply #245 on: November 15, 2017, 12:14:18 pm »
This early low-resolution test example shows the use of a three-layer neural net based on the mammalian visual cortex.  The four angled coloured lines in the centre of the windows show the basic colours and orientations the system is going to learn.  The left window shows the current learned orientation map; the right window is the image it’s learning from.



The first half of the vid shows the system being subjected to just diagonal lines, and the orientation map automatically learns to recognise them.  You can see the self-organising distribution in the neural layer, which only uses the two diagonal colours.

At 0:17 you can see the output from the LGN running through the map, and only the diagonal lines are detected.

At 0:30 a picture of faces is loaded and, because the system was trained on just diagonal lines, it only detects diagonal lines in the face image.

At 0:50 I start re-training the system on the faces.  You can see the neural plasticity of my system at work as the orientation map slowly evolves to incorporate the horizontal and vertical lines it’s experiencing in the face image.

At 1:23 I stop the training; the self-organising nature of the system has built a map that best fits/ represents the input data.  The black dots on the left window represent the locations of output-layer pyramid cells; each cell is surrounded by a receptive field tuned to recognise a particular feature in the image.

The rest of the vid just shows how the system is now interpreting the face image with all four orientations learned.  Obviously my current system uses hundreds of orientations and gradient combinations to detect features in the visual LGN output. 

If the system were only subjected to diagonal lines again it would very slowly forget how to recognise the vertical/ horizontal lines.  This plasticity effect falls off as the neurons mature, so the system eventually reaches a balanced representation of the experienced input, with a patch of neurons able to detect every facet of the incoming data.

The advantage of this approach is that each patch of neurons in the left window will only fire when a particular pattern of input is supplied.  The neuron patches or image facets never move, and so can be easily linked, searched, etc., to figure out what the system is looking at.
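
The self-organising behaviour itself is quite generic. A toy SOM-style orientation map (a standard technique, not my network) shows the same evolve-and-settle dynamics:

Code:
import math, random

GRID = 16  # a 16x16 sheet of cells, each with a preferred edge orientation
sheet = [[random.uniform(0, math.pi) for _ in range(GRID)] for _ in range(GRID)]

def train(orientations, epochs=2000, lr0=0.5, radius0=4.0):
    """Expose the sheet to edge angles; the neighbourhood around the
    best-matching cell self-organises, and plasticity falls off with age."""
    for t in range(epochs):
        theta = random.choice(orientations)
        # best-matching unit by circular orientation distance (angles mod pi)
        bi, bj = min(((i, j) for i in range(GRID) for j in range(GRID)),
                     key=lambda ij: abs(math.sin(sheet[ij[0]][ij[1]] - theta)))
        lr = lr0 * (1.0 - t / epochs)                   # maturing neurons
        radius = max(1.0, radius0 * (1.0 - t / epochs))
        for i in range(GRID):
            for j in range(GRID):
                d2 = (i - bi) ** 2 + (j - bj) ** 2
                pull = math.exp(-d2 / (2.0 * radius ** 2))
                sheet[i][j] += lr * pull * math.sin(theta - sheet[i][j])

train([math.pi / 4, 3 * math.pi / 4])                    # diagonals only
train([0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4])  # retrain on all four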

This gives an idea of how my whole AGI system functions; it is very closely modelled on the mammalian cortex and is basically able to adapt and learn anything its sensors encounter.  It automatically extracts and organises relevant information from the data streams.

It takes a human foetus nine months to generate its V1 orientation/ gradient maps; this is my system producing an equivalent map in a matter of seconds from visual experience… it learns extremely quickly.



And if you were to zoom into the neuron sheets/ maps you would see the individual neurons and synapses.



 :)
« Last Edit: November 18, 2017, 11:17:10 am by korrelan »

*

keghn

Re: The last invention.
« Reply #246 on: November 15, 2017, 02:54:38 pm »
Unsupervised truth.
I have had a thought about using GANs to do transmission and self-learning at the same time.
There is a problem in neuroscience of what "same" or "equal" means at the level of a neuron.
A GAN is made up of two NNs: the detector, and the regenerator NN that regenerates what is being detected.
If you had alternating rows of detector NNs (DeNN) and regenerator NNs (ReNN), the information could be passed along in a daisy-chain fashion. Like so:
DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN.............................

These NNs only detect and regenerate the color of one pixel, so it will not be too slow.

The first DeNN detects the color from the real world. Then the next ReNN regenerates it, and the next DeNN learns to detect it. This keeps going on and on, so the brain has many true references for this color. When doing edge detection or blob detection, the colors from two pixels can then be compared to see if they are the same or different.
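
A toy version of the chain, with each "NN" reduced to a single learned value for one pixel color (a real GAN pair would be far heavier):

Code:
PAIRS = 5   # DeNN/ReNN stages in the daisy chain
LR = 0.1

stages = [0.5] * PAIRS  # each stage's learned estimate of the pixel color

def propagate(color, steps=200):
    """Each stage learns to regenerate the output of the stage before it,
    so the chain ends up holding many independent references to the color."""
    for _ in range(steps):
        signal = color                 # the first DeNN reads the real world
        for i in range(PAIRS):
            stages[i] += LR * (signal - stages[i])  # learn to reproduce input
            signal = stages[i]         # regenerated value feeds the next stage

propagate(0.8)
# Two pixels can now be compared through the chain: 'same' if the regenerated
# references agree within tolerance.
print(all(abs(s - 0.8) < 0.01 for s in stages))  # -> True after training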


 

*

ranch vermin

Re: The last invention.
« Reply #247 on: November 15, 2017, 03:24:52 pm »
Woah, now I understand. Thanks for that.

Looks like a cool alternative to normal edge detection because you’re getting the angle as well, but I guess the filter could too. It’s cool that you’ve adapted machine memory to do it; they are general purpose indeed.

To keghn: yes, if you had one neural network generating images, and the other, video-trained network telling it yes or no, it would start to generate the diagonal lines. But I think that gets more interesting if the concepts are more general and abstract; then the generation might be less restricted and produce some more interesting images.

*

Zero

Re: The last invention.
« Reply #248 on: December 16, 2017, 01:05:56 pm »
You were right, IMO. Things relate because they have compatible frequencies. Complicated.

*

korrelan

Re: The last invention.
« Reply #249 on: December 16, 2017, 02:01:31 pm »
Yeah! Drunk me left me five pages of notes and a whiteboard of scribbles lol. 

Drunk me also altered the base code to my AGI system… never good. 

Sober me is trying to figure out what I was thinking…

E.g. to find a unique neural index/ pattern for a complex GTP pattern in high-dimensional space, just keep the harmonics…

I removed drunk me’s post until I completely figure it out.

 :)

*

Zero

Re: The last invention.
« Reply #250 on: December 16, 2017, 02:43:09 pm »
Yeah, next time remember to rum-fork it before anything else  ;)

What is GTP?
Thinkbots are free, as in 'free will'.

*

korrelan

Re: The last invention.
« Reply #251 on: December 17, 2017, 11:55:45 am »
Quote
What is GTP?

The global thought pattern (GTP) is a term I give to the whole brain-wide pattern of activity within my AGI’s connectome.

Memories, logic and intelligence are expressed in my AGI by the frequency differentials between separate groups of neurons.  Each area of the 3D connectome is a collection of cortical columns that learn to do disparate jobs within specific frequency bands.  This means that each neural ‘module’ can process the internal/ external sensory stream differently depending on what the AGI is ‘thinking’ about; the modulation of its neighbours affects what each module does with the information it’s given.

During my research into episodic memory engrams I noticed that a second GTP was emerging. It is basically piggybacking off the main GTP but is out of phase, and because it’s using the same neural architecture as the main GTP, it influences how the main GTP processes data… it’s very weird.

The new GTP is being driven by the harmonics of the main GTP.

This didn’t arise until I started integrating non-sensory areas of cortex, i.e. the frontal cortex. The frontal section basically learns the output patterns of the sensory areas. It then sends feedback to the sensory areas, influencing how they process the incoming data streams.  The feedback eventually causes a blending between the regions, where each is governed/ affected by the other.  This is a precursor to imagination; the frontal cortex can influence the sensory areas to ‘imagine’ a false sensory input based purely on learned external concepts/ experiences.  The system also uses this feedback to recognise its own internal ‘thought’ processes, because everything it ‘thinks’ goes back through concepts it’s learned through experience.

It’s similar to hypnotism or deep meditation… cortical regions are learning the harmonics… I was expecting the system to do something like this, just not so soon.  The new secondary GTP is like… just the surface froth, reading between the lines, or the summation of interacting logical pattern-recognition processes.

Perhaps consciousness seems so elusive because it is not an ‘intended’ product of the connectome directly recognising sensory patterns, consciousness is an extra layer. The interacting synaptic networks produce harmonics because each is using a specific frequency to communicate with its logical/ connected neighbours.  The harmonics/ interference patterns travel through the synaptic network just like normal internal/ sensory patterns.  Perhaps our sub-conscious is just out of phase, or to be more precise, our consciousness is out of phase with the ‘logical’ intelligence of our connectome.
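
The harmonics themselves are just signal arithmetic. A toy illustration (nothing to do with the connectome code itself) of how two modules talking at different base frequencies produce a second pattern at frequencies neither contains alone:

Code:
import math

def activity(freq_hz, t):
    """Idealised oscillation of one neuron group."""
    return math.sin(2.0 * math.pi * freq_hz * t)

# Any nonlinear interaction between two oscillations (here a simple product)
# contains the sum and difference frequencies:
# sin(a)*sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b)
for i in range(5):
    t = i * 0.005
    a = activity(10.0, t)   # 10 Hz 'logical' pattern
    b = activity(15.0, t)   # 15 Hz neighbouring module
    print(f"t={t:.3f}s  a={a:+.3f}  b={b:+.3f}  coupled={a * b:+.3f}")
# The coupled signal carries 5 Hz and 25 Hz components: a 'secondary'
# pattern riding the same network as the originals.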

It was very difficult to track the patterns within the GTP but now it’s even harder lol.

This could just be an unforeseen error/ design problem of course… I need more rum.

 :)
« Last Edit: December 17, 2017, 12:35:58 pm by korrelan »

*

Zero

Re: The last invention.
« Reply #252 on: December 17, 2017, 09:05:14 pm »
I understood everything you said, and your interpretation of what seems to happen with the second GTP makes sense, and sounds correct.

But I need to apologize for what I'm going to ask. I have to ask.
How do you know it's not just noise? The whole thing I mean, not only the second GTP. What intelligence does the whole connectome show?

It's a real question, I swear. Not an insult or something. I'm a big fan of you and your Beowulf, just trying to follow and understand.

*

korrelan

Re: The last invention.
« Reply #253 on: December 18, 2017, 12:13:09 am »
Quote
How do you know it's not just noise? The whole thing I mean, not only the second GTP. What intelligence does the whole connectome show?

Hmmm… that’s a very good question… prepare for a long winded answer lol.

I presume you have read my project page and have an idea of what I’m trying to achieve.  I’m back engineering the human brain, looking for a single AGI solution based on the human nervous system that makes every other narrow AI obsolete… a machine at least equivalent to us.

Ok... It depends on your definition of intelligence (don’t sigh lol), and the level of abstraction at which you are looking for intelligence, or the root/ cause of it.  An ant colony shows ‘intelligent’ behaviour whereas a single ant does not, so from this point of view intelligence is a group/ collective/ compound behaviour. Slime mould also acts ‘intelligently’ but again it’s just a collection of single-celled organisms.  There is some kind of innate/ emergent intelligence that these simple organic systems are leveraging.

That’s basically what my project is about. 

I’ve designed a single neuromorphic neural network (NNN) that can learn, self-organise and self-categorise information. The NNN can learn to recognise objects/ words given ocular input, or, given time, the same NNN can adapt through plasticity and learn phonemes/ spoken words (see vids) with no user input, totally unsupervised.  The same NNN handles long/ short term memory, prediction, episodic memories, etc.

I think the collection of artificial neurons, glial cells, etc. has the same ‘innate intelligence’ as the biological systems described earlier.  I believe that this simple low-level ‘intelligence’ is what compounds to produce our ‘high (lol)’ human level of intelligence.

I have expanded this model to produce an artificial whole brain connectome seed based on the layout of the human connectome.  The idea is that over time as the system grows/ expands and experiences ‘reality’ I will learn more (always fun) and eventually figure this out and create a true AGI.

I know from experimentation that the visual cortex area can learn to recognise objects, and the audio cortex can learn phonemes, etc… now I’m letting other cortex areas learn their sparse outputs and feed back etc… just like I think the human equivalent does.

I’m basically drinking rum, reading AI/ neuroscience research/ white papers, figuring out how I think it all works, coding, running simulations and banging my head on the desk lol.

I think deep learning, heuristics, convolutional networks, etc. are very narrow AI schemas and will never result in a true conscious, self-aware AGI... I believe my approach will... my current connectome seed only has a few million components… but it’s growing… and it’s learning… and so am I.

 :)

*

Zero

Re: The last invention.
« Reply #254 on: December 18, 2017, 08:55:11 am »
Thank you! Your explanations are very clear.

Can it act? Does it feel pleasure and pain? Do you plan to connect it to Minecraft, for example?
