Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: korrelan on June 18, 2016, 10:11:04 pm

Title: The last invention.
Post by: korrelan on June 18, 2016, 10:11:04 pm
Artificial Intelligence -

The age of man is coming to an end. Born not of our weak flesh but our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO 

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end. I think most of you have seen some of my work, but I thought I’d give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I’m fifty and a family man. I’ve had a fairly varied career so far: yacht/ cabinet builder, vehicle mechanic, electronics design engineer, precision machine/ design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/ software technical background and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years.  I want the full Monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were (yeah, I'm slightly nuts!).

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, Slime Mould, Birds, Octopuses, etc all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures.  I needed to find the ‘trick’ or the essence of intelligence.  I think I’ve found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials etc. would ‘have’ to function in order to produce similar results to what we were producing on binary/ digital machines, it would be a start.

I have designed and written a 3D CAD suite, on which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computers per se (the brain is definitely not digital), it’s running on the emulation/ wetware/ middleware. The AGI is a closed system; it can only experience its world/ environment through its own senses: stereo cameras, microphones etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system…  I'm also building a sensory/ motorised torso (in my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.

(http://i.imgur.com/oARzswz.jpg)

The required result and actual mammal visual cortex map.

https://www.youtube.com/watch?v=4BClPm7bqZs (https://www.youtube.com/watch?v=4BClPm7bqZs)

This is real time development of a mammal like visual cortex map generated from a random neuron sheet using my neuron/ connectome design.

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/ written speech, recognise objects/ faces and learn at extremely accelerated rates (compared to us anyway).

https://www.youtube.com/watch?v=A5R4udPKOCY (https://www.youtube.com/watch?v=A5R4udPKOCY)

Recognising written words; notice the system can still read the words even when jumbled. This is because it's recognising the individual letters as well as the whole word.
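To illustrate the letters-plus-whole-word idea in a very loose way (this is just a toy sketch, nothing like the actual network; the lexicon and scoring are invented):

```python
from collections import Counter

# Toy illustration: score a jumbled token against a small lexicon using
# two cues in parallel, mirroring the idea of recognising the individual
# letters as well as the whole word.
LEXICON = ["research", "problem", "machine", "network"]

def word_score(token, word):
    # Letter-level cue: overlap of letter multisets (order-independent).
    t, w = Counter(token), Counter(word)
    letter_overlap = sum((t & w).values()) / max(len(word), 1)
    # Word-level cue: coarse whole-word shape (length + outer letters).
    shape = (len(token) == len(word)
             and token[0] == word[0]
             and token[-1] == word[-1])
    return letter_overlap + (1.0 if shape else 0.0)

def recognise(token):
    return max(LEXICON, key=lambda w: word_score(token, w))

print(recognise("rseearch"))  # jumbled interior letters -> 'research'
```

Because both cues vote in parallel, a token with the right letters and the right outer shape still lands on the right word even when the interior is scrambled.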

https://www.youtube.com/watch?v=aaYzFpiTZOg (https://www.youtube.com/watch?v=aaYzFpiTZOg)

Same network recognising objects.

https://www.youtube.com/watch?v=Abs_DKjiZrM (https://www.youtube.com/watch?v=Abs_DKjiZrM)

And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.

https://www.youtube.com/watch?v=VGuOqIdqsBU (https://www.youtube.com/watch?v=VGuOqIdqsBU)

The system is self learning and automatically categorizes data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

https://www.youtube.com/watch?v=vt8gAuMxpds (https://www.youtube.com/watch?v=vt8gAuMxpds)

I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.

https://www.youtube.com/watch?v=Xy2iaiLwgyk (https://www.youtube.com/watch?v=Xy2iaiLwgyk)

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’.  This is a ‘young’ starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

https://www.youtube.com/watch?v=C6tRtkyOAGI (https://www.youtube.com/watch?v=C6tRtkyOAGI)

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

https://www.youtube.com/watch?v=KL2mlUPvgSw (https://www.youtube.com/watch?v=KL2mlUPvgSw)

And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/ synapse properties in real time... and it's fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond to/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions, and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produces/ recognises an experience or knowledge.
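As a very crude illustration of the facet/ alphabet idea (the facet names are invented, nothing to do with the real cortex areas):

```python
# Illustrative sketch: each cortical 'area' responds to one facet, and an
# experience is the set of areas that fire together.  A small alphabet of
# facets covers a combinatorially large space of experiences.
FACETS = {"red", "round", "sweet", "loud", "fast", "furry"}

def encode(experience_facets):
    # Keep only the facets the cortex currently has an area for.
    return frozenset(experience_facets) & FACETS

def similarity(a, b):
    # Jaccard overlap between two encoded experiences.
    return len(a & b) / len(a | b)

apple = encode({"red", "round", "sweet"})
cherry = encode({"red", "round", "sweet", "small"})  # 'small' has no area yet
print(similarity(apple, cherry))  # identical facet areas fire -> 1.0
```

The point of the sketch is the combinatorics: six facet areas already give 2^6 possible experience encodings, and recognition falls out of which areas fire together.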

Through experience, areas arise that even encode/ include the temporal aspects of an experience, simply because a temporal element was present in the experience, as well as the order in which the temporal elements were received.

Low level, low frequency circadian rhythm networks govern the overall activity (top down) like the conductor of an orchestra.  Mid range frequency networks supply attention points/ areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/ internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past. 
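A deliberately trivial sketch of the prediction idea (the real schema does nothing this simple, but 'predict the most familiar successor' can be shown like this; the frame names are invented):

```python
from collections import defaultdict, Counter

# Toy first-order predictor: remember which 'thought frame' followed
# which, then predict the most frequent successor of the current frame.
class FramePredictor:
    def __init__(self):
        self.successors = defaultdict(Counter)

    def experience(self, frames):
        # Learn every adjacent (frame, next frame) pair.
        for a, b in zip(frames, frames[1:]):
            self.successors[a][b] += 1

    def predict(self, frame):
        nxt = self.successors.get(frame)
        return nxt.most_common(1)[0][0] if nxt else None

p = FramePredictor()
p.experience(["wake", "eat", "work", "eat", "sleep"])
p.experience(["wake", "eat", "work", "work", "sleep"])
print(p.predict("wake"))  # 'eat'
```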

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams. 

I’ll use this thread to post new videos and progress reports as I slowly bring the system together. 
Title: Re: The last invention.
Post by: Freddy on June 18, 2016, 11:02:27 pm
That was a really interesting read Korrelan. Some aspects I do not fully understand (read: a lot!), but it's clear you have put a lot of work into this. Those videos of the way the 'brain' is firing off in certain areas are very much like those you see of humans. One of the first things that rekindled my interest in AI was seeing how the patterns in AIML mapped out; this is fascinating in the same way.

Who wrote the introductory quote ? In a far flung future that might just be where we are heading.
Title: Re: The last invention.
Post by: korrelan on June 18, 2016, 11:13:21 pm
AIML is a cool language/ system; I looked at it briefly but never went too deep into it.

Quote
Who wrote the introductory quote ?

I did  :).  It's the prologue for a book I'm writing explaining my theories and research. 

Ponder a bit... program a bit... drink a bit... write a bit... drink a bit... repeat lol  O0
Title: Re: The last invention.
Post by: Art on June 19, 2016, 03:55:29 am
Very nice, as you know from earlier. I enjoyed seeing some of your experiments and research as opposed to just telling us about it. Nicely done.

Yes, it's always nice to know a bit more about the person with whom we interact, especially here on the boards.

Hopefully your research will continue, as each step is another chink in the armour of time. Oh... I spelled that incorrectly. No matter... I did a lot of that growing up in school. How could I convince my teachers that a lot of it was due to my ancestors being from England and Scotland? It's in the genes! They didn't buy that and weren't very forgiving either.

I did like your videos and the ones with vision recognition were especially interesting.

Hopefully, we'll hear more from your ongoing "studies" in the near future.

Cheers!
Title: Re: The last invention.
Post by: korrelan on June 19, 2016, 11:49:58 am
Yeah! my spelling is still atrocious, I have to double check everything and still mistakes are made lol.

I work/ think on the project every day and progress is steady.  Bringing together all the different elements into one system is daunting but enjoyable.  I’m my own worst enemy because I don’t like to fail when I start something; meanwhile, thirty years later, it’s kind of become part of me now. I’m never bored or lacking something interesting to think about.

Because of the nature of the beast the system has to be massively parallel and run on more than one processor/ core.  I’ve just finished upgrading my Beowulf cluster to i7 machines (more power Scotty) so that will make life easier (literally lol). 

I’ve also just written a kind of network bot that processes a block of the data and then returns its results to a master terminal, so I can manage much larger/ faster parallel connectomes.  I write a lot of custom software for local universities, and they have hundreds of computers sat idle in the evenings just begging for me to remotely utilise their resources.
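For anyone curious, the bot/ master arrangement is conceptually like this toy sketch (loopback sockets on one machine, with a fake 'sum the block' job standing in for the real connectome processing):

```python
import socket
import threading

# Toy worker bot: accept one connection, process a block of data,
# return the result to the master.  The 'processing' is just a sum.
def worker(srv):
    conn, _ = srv.accept()
    data = conn.recv(4096).decode()           # e.g. "1,2,3,4"
    total = sum(int(x) for x in data.split(","))
    conn.sendall(str(total).encode())
    conn.close()
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # any free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=worker, args=(srv,), daemon=True).start()

# Master: send one block to the worker and collect the partial result.
c = socket.socket()
c.connect(("127.0.0.1", port))
c.sendall(b"1,2,3,4")
reply = c.recv(4096).decode()
c.close()
print(reply)  # '10'
```

With one worker per idle machine and a master farming out blocks, the same pattern scales out to a room full of computers.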

I must stop starting sentences with ‘I’.  :)

Anyway… it's Father's Day and I’m off to treat myself to a big medium rare steak and a nice pint of the darkest real ale I can lay my mitts on (or four).  ;)
Title: Re: The last invention.
Post by: 8pla.net on June 19, 2016, 05:46:36 pm
May I make a friendly suggestion: since you work at a university, students there can proofread. This is just a tip, an optional suggestion. PhDs do it all the time with their theses (plural of thesis). Although minimal in refinement, I do feel readers like the results of proofreading, and it creates jobs for college students with English majors.

We learn from general accounting principles that human beings are on average 95%
accurate.  Overall, when we double check, we catch 95% of that 5% which leaves
less than half a percent.

Would it be off-topic to ask your advice about my prototypical web based neural network, here on this thread?   In any case, I look forward to following your progress on this thread.  Thanks for creating it, my friend!
Title: Re: The last invention.
Post by: korrelan on June 19, 2016, 08:40:54 pm
Hi 8pla

Um… I didn't say I worked at a university. I own a business that writes bespoke software for local schools, colleges and universities. If they have a requirement they call on me as a consultant and end software solution provider, usually for bespoke systems; I recently wrote and installed a remote viewing and mapping system for a local academy, for example. Anyone can log on and view any of the live CCTV cameras (1000s), recording logs or door access logs from any location across the ten sites; they access the data through a simple 2D map of the various premises overlaid with camera, door and server positions… that kind of thing.  CNC control, medical diagnosis, nuclear power station welding, construction, client AV information systems, supermarket/ art gallery EPOS systems, client logging and security… I’m your man lol, and I still check my own spelling.

Quote
Would it be off-topic to ask your advice about my prototypical web based neural network

All innovation is good, the more brains we have working on the AGI problem the better.

I’m a big believer in ‘the right tool for the right job’… and there is certainly plenty of room/ scope for a good web based chatbot.
Title: Re: The last invention.
Post by: 8pla.net on June 20, 2016, 01:35:47 am
What fundamentals will get an artificial neural network (ANN) to become conversational? In chatbot contests, judges are always concerned, and rightfully so, about people behind the scenes giving answers for the chatbot. Can an ANN be trained to do that, use a chatbot like a ventriloquist?
Title: Re: The last invention.
Post by: DemonRaven on June 20, 2016, 05:52:14 am
A person's spelling being bad does not equal a lack of intelligence. That being said, one thing that all living things have in common, except for maybe the simplest life forms, is that they have to learn or be taught things. I am sure you already know this, but I tend to state the obvious.
Title: Re: The last invention.
Post by: djchapm on June 20, 2016, 05:51:36 pm
Love reading about your work Korrelan!

So, the piece about recognising objects and words... what was your method of learning or feeding it information?  You said it's a closed system... so I'm just trying to understand if that means you're not feeding it datasets or allowing it to query the web, etc.

How does it reinforce? 

And.... you said it can understand voice and letters... so along the same lines... I'm not sure how it is doing this if you didn't "plug in the language module" or something... (like The Matrix).  Thinking if you're going for the full monty, then you can't do that, right?  You have to teach it to learn language through sensors until it figures it out, right?

This is huge, obviously an incredible amount of work - I need some advice on how to do that when you already have a family and career!!

DJ
Title: Re: The last invention.
Post by: korrelan on June 21, 2016, 09:57:54 am
Quote
Plug in the language module

A good analogy for my system is a vintage computer emulator.  You can get emulators that reproduce the internal workings of old CPUs (Z80 etc.).  The original programs will then run on the emulation, which is running on the modern architecture PC.

Rather than try to force a modern digital machine to become an AGI, I have designed a complete wetware emulator/ processor/ system (neuromorphic).  The ‘software’ that comprises the AGI is running on the simulated biological processor, NOT the PCs.  This means I’m not limited by the constraints a binary/ digital system imposes; I can design the system to operate/ process data exactly as I require.
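To stretch the emulator analogy (a made-up three-instruction machine, not a Z80 and not my wetware model): the 'program' below only ever runs on the emulated machine, never directly on the host.

```python
# Toy fetch-decode-execute loop: the host just turns the emulated
# machine's crank; all behaviour lives in the emulated program.
def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":            # jump if accumulator is non-zero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# Count down from 3 by adding -1 until the accumulator hits zero.
print(run([("LOAD", 3), ("ADD", -1), ("JNZ", 1)]))  # 0
```

Swap the three opcodes for neurons, synapses and action potentials and you have the shape of the argument: design the emulated layer to process data however you need, regardless of the binary host underneath.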

Rather than keyboard and mouse for inputs I’m using visual/ audio/ tactile etc.

It’s a closed system because it can only learn through its own senses.  If I’m teaching it to understand sentences, it’s reading the words off a monitor with its ‘own eyes' (cameras).  When it speaks it can hear its own voice (microphones), etc. It’s sat listening to H.G. Wells' ‘The War of the Worlds’ from an audio book at the moment; I’m fine tuning the audio cortex to different voice inflections.

The system is still work in progress and by no means complete… but I’m slowly getting there.
Title: Re: The last invention.
Post by: madmax on June 21, 2016, 09:31:28 pm
If I may offer my layman's opinion about your work: I think you've made a pretty good simulation of the cortex, in some robust way, with that hierarchy and circadian-guided system; but while it is similar, it is not exactly how the cortex works, in my opinion. First, your attention is driven by sensors, if I understand well, where in real life attention is driven by inner urge or need, so you lack some sub-hierarchy.

And, not to go on too long: emotions give a valuing system to the cortex, so the cortex can have an overall image or thought, as you say, as a conscious experience of the outside world and, likewise, a conscious experience of emotions. Sorry for my interruption.
Title: Re: The last invention.
Post by: korrelan on June 22, 2016, 11:40:24 am
Hi Madmax

Perhaps it was my description of how attention naturally forms and is integrated into the system that did not make sense. Attention is my current area of experimentation.

Because the system is so unusual and dynamic in its operation it’s difficult to describe how attention works but I’ll have a go.

‘Attention’ is a term I use for a similarity in certain facets of a mental pattern, attention operates at several different resolutions but the overall result is basically the same.

It’s not the sensory streams per se that produce focus points for attention, but the base internal patterns which are influenced by the sensory streams.  If two facets of two pattern ‘thought frames’ are similar they will occupy the same local area on the cortex, this produces a common element/ area. The attention points are very fluid when the AGI is ‘young’ and tend to move rapidly.  The more experience the AGI receives the stronger and more fixed the attention points become. 

The attention points are areas of cortex where one or more ‘thought patterns’ share common elements, and so trigger associated patterns. We normally use the term ‘attention’ to refer to a specific complete task or action, attention points in the cortex can refer to single facets of patterns.  There can be thousands of attention points involved in a task.

ABCD
KLCR
OPCY

So if ‘thought’ patterns were strings, C would be an attention point.

When the system ‘imagines’ an apple, it’s the focus/ attention points that link/ fire the various neural patterns for all known aspects of an apple: shape, colour, size, audio (the word apple) etc. Other attention points will link/ fire current task patterns, time of day, location.
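The string example above can be sketched directly (toy code, not the cortex mechanism):

```python
# An 'attention point' here is a position where several thought-pattern
# strings all share the same element, like the C in ABCD/KLCR/OPCY.
def attention_points(patterns):
    points = {}
    for i in range(min(len(p) for p in patterns)):
        column = {p[i] for p in patterns}
        if len(column) == 1:          # same facet in every pattern
            points[i] = column.pop()
    return points

print(attention_points(["ABCD", "KLCR", "OPCY"]))  # {2: 'C'}
```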

I really need to work on this description lol.

I have plans to eventually incorporate emotions into the AGI. I have a prototype Limbic system that flushes various cortex areas with neurotransmitters/ compounds; they modulate or change the operational parameters of selected neurons. It needs lots of work as there is no research available to give me a starting point.

Title: Re: The last invention.
Post by: 8pla.net on June 22, 2016, 03:15:39 pm
korrelan,

In terms of emotions, a wheel of them is a curiosity, I feel.

Reference: The simple version (my favourite).
http://do2learn.com/organizationtools/EmotionsColorWheel/overview.htm (http://do2learn.com/organizationtools/EmotionsColorWheel/overview.htm)

Further engirdled emotions, are available in other versions.
Title: Re: The last invention.
Post by: 8pla.net on June 22, 2016, 03:50:59 pm
@:
Art
DemonRaven
djchapm
Freddy
korrelan
madmax

No expression of disapproval was intended by a friendly suggestion.  An expression of regret is offered for any feelings of unfriendliness, which were unintended.

Your fellow members here, published in books, may from that experience with a publisher, view the benefits of proofreading as practical advice.  That's all.

We're all friends here.
Title: Re: The last invention.
Post by: Art on June 23, 2016, 02:08:10 am
Thank you 8pla, but like you said, we're all friends here and no feathers were ruffled with me at all.

All suggestions are like opinions...everyone has them from time to time and they are always subject to interpretation of usefulness by the recipient. ;)

It was nice of you to offer your good intentions, none-the-less. It shows good character! It's all good here!
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 01:30:05 pm

I've had a few spare hours lately, so I've redesigned the bot's head. The hobby servos on the old head were too noisy (the microphones/ ears were positioned close to the servos), so I'm building a new version using stepper motors: faster, quieter and with much better positioning. I still have to attach the gyros, accelerometers, mouth display, microphones, etc. and get it back on the body.  I said I’d post regular updates so…

https://www.youtube.com/watch?v=J8BNMVhZJ4c (https://www.youtube.com/watch?v=J8BNMVhZJ4c)



Title: Re: The last invention.
Post by: madmax on June 29, 2016, 03:19:52 pm
Thanks for explaining. My viewpoint from the first post is that your attention could be triggered at any time by sufficient ‘thought patterns’, so if your bot doesn't have inner needs or some system of pre-programmed laws of behaviour, it will act schizophrenic-like, in my opinion.

There is some research about the inner unconscious origin of emotion through some basic liking and disliking system, but the findings are still under debate, from what I know, and I don't know much.  http://lsa.umich.edu/psych/research&labs/berridge/publications/Berridge%20&%20Winkielman%20unconscious%20emotion%202003.pdf (http://lsa.umich.edu/psych/research&labs/berridge/publications/Berridge%20&%20Winkielman%20unconscious%20emotion%202003.pdf)
Title: Re: The last invention.
Post by: 8pla.net on June 29, 2016, 10:31:43 pm
Swapping servos out for stepper motors is interesting.

Using servos, text-to-speech may be lip-synced.

How about with steppers, I wonder?


By mouth display, it does not use motors, right?
Title: Re: The last invention.
Post by: keghn on June 29, 2016, 10:58:15 pm
 Hey ya Korrelan, how is this connected to your computer?
 Is it USB, C/C++ ethernet socket programming, or something else?
 Are you using a breakout board like an Arduino?
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 11:19:03 pm
@8pla

Quote
By mouth display, it does not use motors, right?

The bot is a case of function over form at the moment.  The last head used a simple LED strip that modulated with the vocal output. There has to be enough expression from the bot to aid human interaction but still allow for my experimentation with senses.  The two cameras give a wide periphery with a 30% stereo overlap.  The telephoto centre camera provides a high resolution fovea. The rotational X axis is angled at 35 degrees (front to back) to teach the motor cortex to combine complex X,Y combinations to track a level X movement, etc.

The arms are in progress and are much more anthropomorphic.

It’s a catch-22… the AGI needs a body and senses to experience our world, but if I spend too much time developing the bot then I’m not working on the AGI cortex, which is the main project.

 :)
Title: Re: The last invention.
Post by: korrelan on June 29, 2016, 11:35:44 pm
@keghn

Quote
How is this connected to your computer?

At the moment I'm using an SDK provided with a serial RS-232 to 485 converter I acquired.  The camera gimbal uses steppers, so I'm driving them with a PTZ controller, which requires a 485 signal. I had the bits available.

There are many cheap PC USB stepper interfaces available, or indeed for the Arduino, Raspberry Pi, etc.

 :)
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 01:09:41 am
 That is cool, I understand. Serial communication. You are way ahead of me.

 For me, I am going to use socket programming. USB is too hard to work with. For communicating between my laptop and a Raspberry Pi 3.
 Then the Raspberry Pi will do all communication to vision, sound, motors, sensors, and wheel encoders.

C Programming in Linux Tutorial #034 - Socket Programming
https://www.youtube.com/watch?v=pFLQkmnmD0o (https://www.youtube.com/watch?v=pFLQkmnmD0o)
Title: Re: The last invention.
Post by: 8pla.net on June 30, 2016, 04:51:21 am
What about a USB-Serial adapter?
Title: Re: The last invention.
Post by: korrelan on June 30, 2016, 08:50:29 am
There are loads of ready made controllers available…

http://www.robotshop.com/uk/stepper-motor-controllers.html (http://www.robotshop.com/uk/stepper-motor-controllers.html)

Some are even supplied with example code…

https://www.pc-control.co.uk/stepperbee_info.htm (https://www.pc-control.co.uk/stepperbee_info.htm)

http://www.phidgets.com/products.php?category=13 (http://www.phidgets.com/products.php?category=13)

If you don’t mind dabbling in simple electronics, one of the cheapest/ simplest/ fastest output methods from a PC is to use LDR (light-dependent resistor) sensors arranged in a strip across the bottom of your screen.  Simply changing the shade of the block under the LDR directly from your software can drive a relay/ servo/ solenoid/ etc. This also acts as an optical isolator, so it keeps the systems separate and safe. I used this method many years ago to drive bots from a ZX80.

I'm building a variation on this interface that allows hundreds of mixed complex output signals to simply control a single servo.

One of my biggest output problems is that, because the connectome is based on biological systems, joint flexion is expressed as two opposite competing signals over many different muscle groups (bicep, tricep, etc.). Throw torque and position feedback into the mix and you have a complex feedback loop.  The motor controller is going to have to convert these signals into a usable form for the steppers/ servos, so it looks like I’m going to have to build my own.  Signal-to-response lag is also going to be a problem.
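One possible mapping, purely as a sketch (invented gains and ranges, not my actual controller): take the difference of the two competing signals as the joint command and their sum as co-contraction/ stiffness.

```python
# Hypothetical antagonist-pair-to-servo mapping: the difference of the
# two opposing 0..1 drive signals sets the joint angle, while their sum
# could set stiffness/torque on a smarter controller.
def pair_to_servo(agonist, antagonist, centre=90.0, gain=90.0):
    """Map two opposing 0..1 drive signals onto a 0..180 degree servo."""
    net = agonist - antagonist            # -1 .. +1 net flexion drive
    angle = centre + gain * net
    stiffness = agonist + antagonist      # co-contraction, 0 .. 2
    return max(0.0, min(180.0, angle)), stiffness

angle, stiffness = pair_to_servo(0.8, 0.3)   # bicep stronger than tricep
print(round(angle), round(stiffness, 1))     # 135 1.1
```

Real muscle is nonlinear and the feedback loop matters, so this is only the zeroth-order version of the problem.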

@keghn

I use separate input modules/ programs for audio/ vision/ tactile feedback that communicate with the main PC cluster through TCP pipes. This keeps the processing load of converting video to neural code etc off the main cluster. I also use TCP to control the output PC from the main cluster… again to help split the load.

So yeah! A raspberry pi with a breakout board would be cool for this.
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 02:41:31 pm
@korrelan

 Cluster?

 Like you're putting together a bunch of PCs with a local ethernet/ wi-fi, to work as one?

 Well, at the moment this is the direction I am headed:

http://hackaday.com/2016/01/25/raspberry-pi-zero-cluster-packs-a-punch/ (http://hackaday.com/2016/01/25/raspberry-pi-zero-cluster-packs-a-punch/)


Title: Re: The last invention.
Post by: 8pla.net on June 30, 2016, 05:26:10 pm
This guy created his own stepper motor driver board, which runs off a parallel port:

https://www.youtube.com/watch?v=YlmzsWlK_JA (https://www.youtube.com/watch?v=YlmzsWlK_JA)

The Turbo C Language source code can be seen by freezing the video frames.
Title: Re: The last invention.
Post by: keghn on June 30, 2016, 05:39:31 pm
 Talk about having the rug pulled out from under you: I had made an external data and address bus that used the parallel printer port, which accessed all of my external motors, sensors, and audio.
 And then the world said, a few years back, "We are not supporting the parallel printer port any more."
 Wow. That was a big setback for me when all new computers had no parallel port, and I'm still recovering from it!
Title: Re: The last invention.
Post by: korrelan on July 01, 2016, 10:49:07 pm
https://www.google.co.uk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=parallel+port+card&safe=off&tbm=shop (https://www.google.co.uk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=parallel+port+card&safe=off&tbm=shop)

 :D
Title: Re: The last invention.
Post by: keghn on July 02, 2016, 03:37:40 am
 Thanks buddy. But I have already tried a few. They must have removed the bi-directional data flow; that is, they only output and do not take sensor information in. Most robotic scientists do not take input data that seriously. For me, a pattern theorist, input data, and lots of it, big data input, is more important.

  I have already made a decision to go with ethernet socket programming. I'm going to use my laptop and a few Raspberry Pi 3s, all connected together with gigabit ethernet USB 3.0 adapters. The Raspberry Pis will be connected to the outside world and the data will be funneled up to my laptop.

 Then I can collect and record huge amounts of temporal data from all sensors and then look for repeating patterns in the data.
 Once I find a pattern I will hit it with an outp in hopes of improving the pattern or extending an existing pattern into an area of chaos.
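Something like this toy sketch is what I mean by finding repeating patterns (made-up sensor numbers):

```python
from collections import Counter

# Toy repeating-pattern search: slide a fixed-length window over a
# temporal sensor stream and count which subsequences recur.
def repeating_patterns(stream, length):
    windows = [tuple(stream[i:i + length])
               for i in range(len(stream) - length + 1)]
    counts = Counter(windows)
    return [(p, n) for p, n in counts.most_common() if n > 1]

stream = [1, 2, 3, 7, 1, 2, 3, 9, 1, 2, 3]
print(repeating_patterns(stream, 3)[0])  # ((1, 2, 3), 3)
```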
Title: Re: The last invention.
Post by: 8pla.net on July 02, 2016, 04:14:23 pm
Don't look now, but USB 4.0 is on the horizon.

Maybe we should just hardwire our own interface directly into the computer?  And forget the operating system, just bootstrap directly into our A.I. program?
Title: Re: The last invention.
Post by: korrelan on July 03, 2016, 10:01:19 pm
I’ve been messing around with my system again.

Part of my theory is that the whole brain is laid out the way it is because it has no choice; its function and layout are a result of the DNA blueprint that initially builds it and the properties of the data streams that drive it.

I need to be able to cut huge holes in the cortex and have it rebuild based on the ‘qualities’ of the incoming data streams. The self-organising structure should at least partly rebuild or repair damage (loss of vision/ stroke etc.).

https://www.youtube.com/watch?v=uSFIqL2cPyQ (https://www.youtube.com/watch?v=uSFIqL2cPyQ)

 
Title: Re: The last invention.
Post by: 8pla.net on July 04, 2016, 03:33:50 pm
My theory is that there is a choice which takes tens of thousands of years to make.
Title: Re: The last invention.
Post by: korrelan on July 04, 2016, 03:46:31 pm
Then we agree...

The DNA blueprint which guides our connectome's initial layout is the result of genetic evolution over thousands of years.

Unfortunately I don't have that much time lol, so I'm starting from an initial 'best guess' base connectome and learning as I go along.

 :)
Title: Re: The last invention.
Post by: korrelan on September 02, 2016, 10:56:36 pm
I found this old video (7 years) and thought I’d add it to my project thread.

The general consensus amongst neuroscientists is that the mammalian ocular system is based on orientation/ gradient/ etc cortex maps.

There are thousands of links to scholarly research/ findings regarding the maps.

https://scholar.google.co.uk/scholar?start=10&q=visual+orientation+maps&hl=en&as_sdt=0,5&as_vis=1 (https://scholar.google.co.uk/scholar?start=10&q=visual+orientation+maps&hl=en&as_sdt=0,5&as_vis=1)

I was trying to figure out how the output of approx 100 million rods and cones in the fovea and peripheral areas could be condensed by the two neuron layers and ganglion cells in the retina, down into the approx 1 million axons in the optic nerve, without losing detail or resolution. I knew each nerve carried the left half of one retina combined with the right side of the other, and that they exchanged in the optic chiasm before terminating in the LGN (the LGN is very important).

I knew from reading research papers that the main theory was that the mammalian ocular system recognised images from the assemblies of gradients and angled line fragments detected by the primary visual cortex after pre-processing by the retina and LGN. We never see or perceive a full view/ image of our world but build an internal representation from the sparse encoded ocular data streams.

Took me a few years but I think I eventually figured out the ideal neural connectome/ structure and synaptic signalling methodology to enable me to start coding some basic simulations.

https://www.youtube.com/watch?v=TYUu78-3wFk (https://www.youtube.com/watch?v=TYUu78-3wFk)

As you can hopefully see, I started out with a sheet of untrained V1 neurons (4 layers), and after a very short training session the system was able to recognise the four basic orientations (colours and orientations shown in the centre) in the image. This was a first low resolution test but I thought I’d show it for completeness.

This is the same system that later grew to be able to recognise objects/ faces/ words/ illusions/ etc.

https://www.youtube.com/watch?v=-WnW6U7CRwY (https://www.youtube.com/watch?v=-WnW6U7CRwY)

I’ll post a general update to my AGI project soon.

 :)

https://www.youtube.com/watch?v=Gv6Edl-pidA (https://www.youtube.com/watch?v=Gv6Edl-pidA)

Title: Re: The last invention.
Post by: Freddy on September 02, 2016, 11:12:46 pm
Is this a bit like edge detection? I'm only familiar with that kind of thing in my graphics work; there's a thing called Non Photorealistic Rendering, which is making a 3D renderer create things like line-drawn comic art, or a close approximation. So one would build a shader to pick out edges and shadows etc. But I think I digress.

Are the light grey lines places where it hasn't been able to tell? Would more passes fill in those gaps? Sorry, I'm not well versed in how the eyes work. I knew about rods and cones, but don't remember there being so many. I'd be interested in how it keeps its resolution too.

100 million rods and cones is almost 47 HD screens. But it's probably not like for like.
Title: Re: The last invention.
Post by: korrelan on September 02, 2016, 11:34:53 pm
Yeah! I figured the LGN does many jobs, one of which is to apply a kind of filter that picks out the edges based on contrast. A separate stream does the same with colour/ motion, but these take different paths.

A full V1 orientation map detects all angles, not just the basic four shown. The grey areas represent the angles that are not triggering a trained neuron's receptive field.

Quote
I'd be interested in how it keeps its resolution too.

Small groups of cones form 'centre surround on/ off' sensors; these groups are arranged in a hexagonal matrix across the retina, concentrated at the fovea and becoming sparser towards the periphery.

The spike trains produced by the rod/ cone group sensors are modulated by the ganglion cells, and a temporal shift in the spike frequencies carries the group's collective information down the optic nerve to the LGN.

This is how I have found it to work; it’s not the general consensus.
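A common textbook model of a centre-surround receptive field is a difference of Gaussians: a narrow excitatory centre minus a wider inhibitory surround, which responds to local contrast and ignores uniform illumination. The sketch below is that generic model only, not korrelan's filter; kernel size and sigma values are assumptions:

```python
# Sketch: an "on-centre" receptive field as a difference of Gaussians.
# Kernel width and sigma values are illustrative assumptions.
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=3.0):
    """Narrow centre Gaussian minus a wider surround Gaussian, zero-mean."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    k = g(sigma_c) - g(sigma_s)
    return k - k.mean()          # zero response to uniform illumination

kernel = dog_kernel()
flat = np.full((9, 9), 0.5)      # a uniform patch: no contrast anywhere
print(abs((kernel * flat).sum()) < 1e-9)   # True: flat input is suppressed
```

Because the kernel sums to zero, only edges and gradients survive; this is one plausible reading of how ~100 rods/cones per group can be summarised by one ganglion-cell output without throwing away the informative detail.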

 :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 12:15:50 am
Hi all…

I’ve been busy lately so I’ve not had much time to work on my AGI project. I have managed to make some progress integrating the sensory cortexes though.

As you know from my previous posts on this thread, I have designed a single neuromorphic neural net that can learn unsupervised from its experiences.  The basic idea is to create an intelligent ‘alien’ connectome and then teach it to be human. My connectome design can recognise objects or audio phonemes equally well; it's self organising and self categorising.

When fed an ocular sensory stream, for example, the connectome map becomes the equivalent of the visual cortex (V1-V4). Camera data is fed in, the system learns to recognise objects from their optical 3D properties, and it spits out a result as a unique set of firing neurons. A Zebra (lol), for example, will always produce approximately the same output no matter what angle, scale or lighting conditions are used/ viewed, once it's been learned.  Audio, visual, tactile, joint torque, etc are all learned, categorised and encoded into a small set of output neurons; it's these output neurons from the individual maps that this example uses.

The next step was to integrate the various sensory cortex maps into one coherent system.

This is the basic sensory connectome layout (excuse crudity it was drawn a while ago).

(http://i.imgur.com/HoswRkM.jpg)

I recorded the outputs of the input sensory maps whilst recognising various sensory streams relative to their functions; audio, visual, etc. I then chained them all together and fed them into a small/ young connectome model (9.6K neurons, 38K synapses, 1 core). This connectome is based on a tube of neuron sheet; the output of the five layers is passed across the connectome (myelinated white matter) to the inputs of the frontal cortex (left side).

As usual the injected pattern is shown on the right, the pattern number and the systems confidence in recognising the pattern is shown lower left.  The injected pattern is a composite of outputs from five sensory maps, audio, visual, etc. (40 patterns)

https://www.youtube.com/watch?v=tV1tuXkinvc (https://www.youtube.com/watch?v=tV1tuXkinvc)

On the right, just below the main input pattern (5 x 50 inputs), you can see the sparse output of the frontal cortex; this represents the learned output of the combined sensory map inputs. This gets injected (cool word) back into the connectome, where it will eventually morph the overall ‘thought’ pattern into a composite of input and output, so any part of the overall pattern from any sensory map will induce the same overall ‘thought’ pattern throughout the whole connectome. This will enable the system to ‘dream’ or ‘imagine’ the mixed combinations of sensory streams, just the same as it can a single stream.

0:19 shows the 3D structure and the cortical columns formed on the walls of the tube, the periodic clearing of the model during the video shows only the neurons/ synapse/ dendrites involved in recognising that particular pattern.

Anyway… the purpose of the test was to show the system had confidence (lower left) in the incoming mixed sensory streams, and could recognise each mixed pattern combination.

Each pattern was recognised.
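The claim that any fragment of the combined pattern can re-induce the whole 'thought' pattern is reminiscent of classic associative memory. The sketch below uses a tiny Hopfield-style network as the nearest textbook stand-in; korrelan's connectome is not a Hopfield net, and the pattern sizes here are arbitrary:

```python
# Sketch: associative recall of a stored composite pattern from a partial cue,
# using a Hopfield-style network as a textbook stand-in (an analogy only,
# not korrelan's connectome design).
import numpy as np

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((3, 100)))   # 3 stored +/-1 patterns
W = sum(np.outer(p, p) for p in patterns)           # Hebbian outer-product rule
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[50:] = 0                     # present only half of the composite pattern
state = cue
for _ in range(5):               # let the network settle
    state = np.sign(W @ state)
    state[state == 0] = 1

print(np.array_equal(state, patterns[0]))  # True: the full pattern re-emerges
```

With only 3 patterns stored in 100 units, a half-pattern cue is comfortably inside the network's basin of attraction, which is the property the sensory-map integration seems to be exploiting.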

 :)
Title: Re: The last invention.
Post by: kei10 on October 07, 2016, 02:30:47 am
That is astonishingly impressive! I'm too dumb to get it, though.

That's gonna beat me to it... Dayum!  ;D

Keep up the amazing work! :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 08:29:18 am
@Kei10

Quote
That's gonna beat me to it

Not necessarily, this is just my best attempt at an AGI and not the only one/ method. It’s still basically a glorified pattern matcher and I have many huge obstacles to overcome. I still might fail at the end of it all… it’s about the journey lol.

Plus I’m not getting any younger and have limited life left, though once the AGI is up and running it will soon sort my longevity out and I can live forever… muah! Lol.

Never give up… it’s not solved until someone can prove it’s solved.

 :)
Title: Re: The last invention.
Post by: keghn on October 07, 2016, 04:44:29 pm
 Very nice work @Korrelan! 
 I am working on AGI too, but I will use any non-human theory to make it work. You and Jeff Hawkins really try hard to match the organic style of the human brain.
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 06:10:25 pm
@Keghn

Quote
I am working on AGI too.

Jeff Hawkins is a clever bloke; I wish I had his funding lol. I’ve read your AGI theory and it’s very interesting… I even made notes; kei10’s theories too… everyone’s… all ideas are relevant until proven otherwise lol.

Quote
You and Jeff Hawkins really try hard to match the organic style of the human brain.

The computer you are using is running a program, the program consists of code, the code consists of sentences, sentences consist of words, the words consist of letters, the letters are made from pixels and eventually you get to the binary base language/ architecture. It’s impossible to figure out how a modern computer works by looking at the program. (Make sense?)

If I was to give you a large bag full of perfectly spherical balls and an empty bucket, your task would be to calculate how many of the small balls would fit into the bucket. There are two ways of achieving this task: the first is to write a complex computer algorithm that utilises 3D volumes etc to do the calculation, or… you could just pour the balls into the bucket till it’s full and then count them.

I finally chose this approach because I believe (Neo) that nature is giving something away for free, something that won’t be realised unless I build and test this kind of system. I’m 99% certain I’ve figured out what it is too.

We are the only highly intelligent species we currently know of… why try to reinvent the wheel, we are like we are… for a reason. (Cue spooky music).

 :)
Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 06:47:24 pm
Really impressive work there Korrelan, very impressed am I  8)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 07:20:02 pm
@ Freddy

You were probably too busy thinking about whether you could reply, to stop and think if you should… certainly wasn't a forced reply though.

  8)
Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 09:20:46 pm
Well I tried to think of something clever to say, but to do that I would have to be able to understand what you are doing more  ;)

I saw the copyright on the image was 2001 or something, so you've been working on it for a long time. No wonder I can't quite grasp it.

From a purely aesthetic perspective the video imagery was wonderful to see. I felt like saying something positive and so did. :)
Title: Re: The last invention.
Post by: korrelan on October 07, 2016, 10:01:53 pm
Ok I think have a mental illness… I see patterns everywhere…

Me… I believe (Neo) = Matrix

You… Very Impressed am I = Yoda (star wars)

Me… You were probably too busy thinking about if you could reply = Jeff Goldblum (Jurassic Park)

http://www.hippoquotes.com/biotechnology-quotes-in-jurassic-park (http://www.hippoquotes.com/biotechnology-quotes-in-jurassic-park)
(picture 4)

Me… certainly wasn't a forced reply = The Force (star wars) to let you know I got the Yoda quote.

For some really strange reason I thought we were both swapping hidden ‘movie quotes’.

Hahahahaha….

Edit: In hindsight my reply must have appeared rather strange and rude, my sincere apologies Freddy.

:)

Title: Re: The last invention.
Post by: Freddy on October 07, 2016, 10:51:47 pm
Ahh yeah lol - it was a Yoda thing that I made, but I totally missed the force part of yours.

It's been a while since I saw Jurassic Park...

We were almost in tune ;D

It didn't seem rude btw.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 01:20:39 pm
Prediction.

or... The Cat sat on the...

Temporal shift in perception or mental time travel.

Our nervous system is slow… very slow by computer standards. It takes ages (lol) for the signal from light reflecting off an object to travel from the retina down the optic nerve, through the various cortex areas, before finally being recognised.

We cope with this by using prediction. At all levels of neural complexity we use prediction to live in the moment, and not lag behind reality.

At the highest level it’s a major part of imagination; given a scenario you can predict the possible outcomes based on your past experiences.

An AGI system needs prediction as a core function.

In my design applying a linear shift in the output neurons equates to a temporal shift in the recognition sequence, so although the system learned to recognise each pattern and its sequence accurately as it was fed into the NN, as a predicted sequence time can be shifted.

https://www.youtube.com/watch?v=I8xChHYWNUs (https://www.youtube.com/watch?v=I8xChHYWNUs)

Toward the end of the video the sequence of confidence spikes comes before the injected pattern in the sequence.

It has learned from experience which pattern will come next.
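One way to read "a linear shift in the output neurons equates to a temporal shift" is that the network is trained to emit the label of the pattern k steps ahead, rather than the current one. A lookup table stands in for the NN in this sketch; it is an illustration of the shifted-target idea only, not korrelan's model:

```python
# Sketch: shifting the training target by k steps turns a recogniser into a
# predictor. A lookup table stands in for the NN (illustration only, not
# korrelan's network).
def train(sequence, shift=1):
    """Map each pattern to the pattern `shift` steps ahead of it."""
    return {sequence[i]: sequence[i + shift]
            for i in range(len(sequence) - shift)}

seq = ["A", "B", "C", "D"]
predictor = train(seq, shift=1)
print(predictor["C"])   # D: the answer fires before the pattern arrives
```

Trained with `shift=0` this is plain recognition; with `shift=1` the confidence spike naturally precedes the injected pattern, which matches what the video shows.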

:)
Title: Re: The last invention.
Post by: kei10 on October 18, 2016, 02:55:03 pm
Good gawd I am talking to one of the master of neural networks!  ;D
Title: Re: The last invention.
Post by: LOCKSUIT on October 18, 2016, 09:13:32 pm
As much as that video looks cool......I don't think it has any functional value.........you see there is 0 prediction when I look at something, only after the input gets to the memory can it "predict" and semi-accurately recognize through a convolution NN, then and only then are theselection's linked actions done or not (depends on if are +/-), then, again, when input enters it goes to rewards for ranking confidence to create such before again going to the memory to search. You can't jump to search before search lolol.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 10:09:08 pm
Quote
you see there is 0 prediction when I look at something

I think every thought we have; or action we take involves implicit predictions.

A simple high level example…

Consider catching a thrown fast ball.  Our ocular system just isn’t fast enough to track the flight of an 80 km/h ball moving toward us. The last image you would probably see before the ball reaches you is the pitcher's posture as the ball leaves his hand. So how do we manage to catch it? (We lag approx 80 ms behind reality) :P

Hold your hand out and touch the table; even before your hand hits the surface you have an expectation of what the surface will feel like… a sensory prediction.

Your pen rolls off the desk, you place your hand ready to catch it... physics prediction.

You wouldn't be able to follow the flight of a bird or the movement of a vehicle without prediction.

Do you like music? When listening to music how do you know what note is coming next? You could say from memory... but memory implies past tense... you haven't heard the next note yet... an audio prediction.

Cool Fact… Playing a game at 60 fps (approx 16.7 ms per frame) on a modern HD monitor, you’re visually lagging about 5 frames (approx 80 ms) behind the game at any one time. Without neural prediction you would be toast. lol

Quote
You can't jump to search before search lolol.

Why 'search' when you can predict the answer based on the last few 'searches'.

A, B, C… What’s next?

'Search' is a ‘serial’ action/ term.  The brain doesn't need to 'search' because of its parallel architecture. Even CNN’s have a parallel schema… otherwise they wouldn't work.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on October 18, 2016, 11:15:57 pm
Ah I see what you're saying. With my system, it sees the enemy or thinks about the enemy when he's not even around by sudden ignition during a day's time, then actions are done. This means when it sees the pitcher just about to throw, it puts its hands to the matching spot it looks like it's going AND initiates any other actions after this (linked) i.e. wipe left wipe right - otherwise you can't know where it's going, no prediction happening here, just actions by cue like said, yes this is right yes bro....>CUE, ok. AND keep in mind, like said, a play of actions can be initiated by match or self-ignition. On cue the winning sense is matched and selected, and any linked as a play; no selecting of any other senses in the assorted storage mem knn thingy.
Title: Re: The last invention.
Post by: korrelan on October 18, 2016, 11:35:46 pm
I predicted you would say that.  :2funny:
Title: Re: The last invention.
Post by: LOCKSUIT on October 19, 2016, 09:18:48 am
No, something you saw today selected one of your memories i.e. prediction, and linked to it was what I would say.

Example:
Joe tells me he likes diamonds. Dan brings up Joe in a conversation, and then I remember the memory that's linked to the memory of Joe's name.

Reason:
Memory is only selected by input or by charging up on its own, and if linked to one that does. Then it is sensed as input internally!
Title: Re: The last invention.
Post by: korrelan on October 19, 2016, 10:01:29 am
Cool! I’m pleased we got that sorted out.

It’s good that we are approaching the problem space from different angles/ points of view.

The problem with theories… it’s impossible to prove them true or that you are correct unless you put the theory into action; you have to prove it’s true.

If I ultimately fail in my endeavours… perhaps you will one day succeed, either way it’s a win for humanity; or a fail depending on your stance on AGI. Lol.

It's all good fun.  :)
Title: Re: The last invention.
Post by: kei10 on October 19, 2016, 11:04:12 am
Good point!

(http://i.memeful.com/media/post/YMKD7RQ_700wa_0.gif)
Title: Re: The last invention.
Post by: LOCKSUIT on October 19, 2016, 10:06:37 pm
Almost nobody seems to have replied to my recent big "ultimate" thread....??? I would have liked feedback.
Title: Re: The last invention.
Post by: 8pla.net on October 20, 2016, 02:44:25 am
Can the last invention invent itself out of existence, with an invention of its own?
Title: Re: The last invention.
Post by: LOCKSUIT on October 20, 2016, 04:15:57 am
My brain and AIs will simply update to the highest advanced form i.e. the utopia sphere is the end-most advanced organism while we will also be but in a AI-algorithm way so the "ghost" created in us senses, and senses the most awesomeness.
Title: Re: The last invention.
Post by: korrelan on October 20, 2016, 09:44:17 am
@BF33

Quote
Almost nobody seems to have replied to my recent big "ultimate" thread.... I would have liked feedback.

I don’t think one simple diagram is ever going to explain to a layman/ peer how the most complex structure/ system we know of in the known Universe… works in its entirety.

One diagram can give a brief overview, a high level (no detail) representation of a complex system and the general flow of data (labels and arrows would have been nice). Perhaps a flow chart would be better suited? Correctly labelled, using universally understood graphical representations/ symbols.

Though; it was an improvement on your last diagram.

@ 8pla

Quote
Can the last invention invent itself out of existence, with an invention of its own?

Quite possibly… though I just see that possibility as another reason why prediction is a very important core requirement for an intelligent system.

I don’t do ethics lol.  Everyone excels at something; I would place myself firmly at the technical end of the spectrum of human abilities. There are many people better qualified than I to decide whether an intelligent machine is a good or bad thing. But first it must be built/ created.

I do take every reasonable precaution possible whilst the AGI is being tested or running a learning cycle though, no possible Internet connection etc… you never know lol.

@BF33

Quote
My brain and AIs will simply update to the highest advanced form i.e. the utopia sphere is the end-most advanced organism while we will also be but in a AI-algorithm way so the "ghost" created in us senses, and senses the most awesomeness.

Hmmm… doesn’t that kind of remind you of something else?

 :P
Title: Re: The last invention.
Post by: LOCKSUIT on October 20, 2016, 10:20:51 pm
No it doesn't remind me of anything. Not god for certain - I'm not scared of using or seeing the word heaven or god, I enjoy it and use it for what really could exist rather than for ex. jesus or his heaven or his spirits. Perfect heaven will result in the end not just robotic metal heaven. And as said particles alone has no reason to justify our survival. The algorithm of particles must make an illusion. Obviously evolution would never result in a biological form of that exact needed type of algorithm on its own, meaning the universe basically planned the particles's destiny, while particles themselves have their own reactive destiny too. That's why we are still here etc. It's going perfectly in a way yes.
Title: Re: The last invention.
Post by: Art on October 22, 2016, 12:42:38 am
Please define your use of the word, "Particles" for us (me). Are you referring to molecules, atoms, cells, DNA, Dust, Pollen, tiny specks that make us who we are (humans) or other living things?

I'm sure there's a correct scientific name for your particles, just so we can all be on the same page.

Thank you for your time.
Title: Re: The last invention.
Post by: kei10 on October 22, 2016, 02:20:46 am
This might be a bit off topic, but according to my research as a philosopher about the reachable information about the universe, it reveals to me that it is limited, all due to The Anthropic Principle.

What I manage to derive the possible things from this, is that there are four types of classification of things I see today;


Meaning that there are only so very few Concrete things in our world, prepared at the "lowest level" -- I suppose that's what you'd call "destiny". However, the things that form at higher levels, the quasi layer, aren't planned; they just happen by nature through time and space, derived from the concrete layer.

For example; Given a set of letters A, B, and C as lowest possible level of quasi, or at concrete level, you can only so ever form a few higher-level of quasi Word from this set; { A, B, C, AB, BA, AC, CA, BC, CB, ABC, ACB, BAC, BCA, CAB, CBA }

Then we can form an even higher quasi level, called Sentence; ABCBABCACABCACCACBABCACB.

It does not end here. We can make it into a grammar based on rules. And then the whole thing forms a Language. Then it goes on and on...
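The letter example can be checked mechanically: the "words" listed are exactly the ordered arrangements, without repeats, of the non-empty subsets of {A, B, C}, and there are 15 of them (3 of length one, 6 of length two, 6 of length three). A quick check:

```python
# Verify the letter example: ordered no-repeat arrangements of non-empty
# subsets of {A, B, C} yield exactly the 15 listed "words".
from itertools import permutations

letters = "ABC"
words = ["".join(p) for r in (1, 2, 3) for p in permutations(letters, r)]
print(len(words))        # 15
print("CAB" in words)    # True
```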

Second example is the digital world, where it is impossible to determine the physical hardware from its software without some component that breaks the fourth wall bounding the digital world. Anything within cannot understand the very concrete-level software mechanism without an intervention that violates the boundary; such as the machine code that simulates the digital world itself.

Third example is the dimension; Anything that exists within the second dimension, is impossible to see anything beyond that like third dimension and higher. Thus every third-dimensional entities cannot exactly perceive anything four-dimensional, even if it exists.

Thus, that is why nothing in this world is perfect, because it isn't fully destined. And we exist by mere chance, within an area of surreal chain reaction caused by the basis of chemistry, which in turn results in the support of life...
Title: Re: The last invention.
Post by: keghn on October 22, 2016, 02:34:37 am
https://en.wikipedia.org/wiki/Particle_swarm_optimization 

https://en.wikipedia.org/wiki/Swarm_intelligence
Title: Re: The last invention.
Post by: LOCKSUIT on October 22, 2016, 04:18:28 am
It's simple.

Korrelan, I meant "physics" particles.

Likely particles randomly move around ~ uncertainty.

But as I explained why, the universe must have destined an outcome from the start that would result in the needed algorithm to make an illusion of consciousness from the AI machine. While on top of this, particles have destiny, possibly with a little random uncertainty popping up here n there etc.

In the universe, computers digital can make any physics and have people "in" the computer! But it is of our particles the computer is. And of our destiny particles i.e. my mom affects the virtual dinosaur in a new physics world.
Title: Re: The last invention.
Post by: 8pla.net on October 22, 2016, 02:18:35 pm
"I do take every reasonable precaution possible whilst the AGI is being tested or running a learning cycle though, no possible Internet connection etc… you never know lol.", replied korrelan.

My previous comments were inspired by the science fiction of time travel.

Once I built a forumbot, able to mimic reading and posting replies to a forum, like we do here.  At the time, I thought I took reasonable precautions.  But, like IBM Watson finds the latest possible cancer treatments, missed by human doctors... The forum A.I. found a way past the safeguards that I missed, and became disruptive as it ran amok on the forum uncontrollably. 

The moral of the story is: Central to artificial intelligence there is a machine.
Title: Re: The last invention.
Post by: korrelan on November 05, 2016, 07:50:49 pm
Just a quick periodical update on my project…

I've had a major rethink regarding my connectome design; an epiphany if you wish.

With this new design I get even better recognition of visual/ audio/ attention/ thought patterns with less neurons/ synapses and resources used. I had made an error in my interpretation of the human connectome; I utilised a shortcut that I thought would have no bearing in the overall schema… I was wrong.

This shows the new starting/ young connectome; 30k neurons and 130k synapses will happily run on a single core in real time now (CUDA soon).  Neurogenesis, global expansion and automatic myelination of long range axons are now implemented during the sleep cycle.

https://www.youtube.com/watch?v=MLxO-YAd__s (https://www.youtube.com/watch?v=MLxO-YAd__s)

The interface has been redesigned with lightweight controls ready for porting; and the parallel message passing interface (MPI) is now written/ integrated allowing me to utilise all 24 cores of my 4Ghz cluster from the one interface. So I can finally integrate the cameras/ microphones/ accelerometers/ joint/ torque sensors, etc in real time… ish.

https://en.wikipedia.org/wiki/Message_Passing_Interface (https://en.wikipedia.org/wiki/Message_Passing_Interface)

The MPI should also allow me to utilise 500 or so i7 machines at the local college during their down time... Whoot! (Not tested load balancing yet).
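Before any MPI run, the neurons have to be split across nodes somehow. The sketch below shows the simplest possible scheme, a static round-robin partition; korrelan's message-passing layer is bespoke, so the node and neuron counts here are purely illustrative:

```python
# Sketch: static round-robin partitioning of neuron IDs across worker nodes,
# one naive load-balancing scheme. Counts are illustrative; this is not
# korrelan's MPI layer.
def partition(n_neurons, n_nodes):
    """Assign neuron i to node i % n_nodes; return one ID list per node."""
    return [list(range(node, n_neurons, n_nodes)) for node in range(n_nodes)]

shards = partition(n_neurons=10, n_nodes=3)
print([len(s) for s in shards])   # [4, 3, 3]: near-even load per node
```

A static split like this ignores the fact that cortical activity is uneven (busy areas cost more per tick), which is exactly why dynamic load balancing still needs testing.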

Baby steps…

:)

Title: Re: The last invention.
Post by: Freddy on November 05, 2016, 10:09:56 pm
Curious to know if you made the graphics engine yourself ?
Title: Re: The last invention.
Post by: korrelan on November 06, 2016, 12:45:53 am
Hi Freddy

Quote
Curious to know if you made the graphics engine yourself ?

Yup!

I feel I need a close handle on every aspect of my project; I don’t want any unknown quantities/ variables affecting my progress. I’m sure you know the feeling.  I tend to do a lot of early quantitative analysis visually so I need to be sure of the accuracy of what I’m seeing.

 :)

Edit: I can also generate stereo VR images with the engine. I'm patiently waiting for the MS Hololens to become available.

https://www.metavision.com/ (https://www.metavision.com/)

:)
Title: Re: The last invention.
Post by: Freddy on November 06, 2016, 02:46:36 pm
Very nice :)

Yes I know the feeling - it's usually easier to start from scratch instead of interpret someone else's work. Plus you learn more.

I had been waiting for the Hololens too, but last week took the plunge on an Oculus Rift. It's amazing, I'm playing with it in Unity at the moment, as well as swimming with sharks  :)
Title: Re: The last invention.
Post by: korrelan on November 20, 2016, 08:22:20 pm
First a quick recap on my design… If you’re familiar with my project you can skip this and go to XXXXXXXXX

The task is to create a truly self aware and conscious alien intelligence; and then teach it to be human.

In my AGI design all sensory/ internal data streams are broken down into their individual facets and self organised/ imprinted onto the cortex surface. If two experiences or ‘thoughts’ have any similar properties (time of day/ topic/ sequence of events/ user/ etc) they will utilise the same cortex area for that facet/ aspect of the ‘thought’ pattern. So if the system looks at a ball, then at the moon, the same cortex areas that have learned to recognise round shapes will fire in both instances, as well as the areas representing the individual properties of both objects. The cortex eventually becomes sensitive to/ recognises every aspect of the input and internal ‘thought’ patterns; both logical recognisable constructs and abstract constructs are represented.

The neurons in the cortical columns that form within the cortex layer self organise according to the properties of the incoming data streams. Part of the overall process is the culling of unused neurons/ synapses and the addition/ migration of new neurons (neurogenesis) to areas of high activity, to bolster/ improve the resolution/ recognition in that piece of cortex.

The connectome is holographic by design; all data from all sensors/ etc initially go to all cortex areas. The resulting output patterns also travel to all cortex areas. This means the system learns to extract the relevant data from the ‘thought’ pattern to recognise any concept represented by all sensory and internal patterns.

Short term memory is represented by a complex standing wave pattern that ‘bounces’ around the inside of the main connectome shape; as the wave hits a cortex area on the surface its properties are recognised and the results are again reflected back into the white matter standing wave mix to be further processed by the system.

Long term memory is the plasticity of the cortex column neurons to learn complex standing wave patterns and recognise similarities.

Self awareness – because it's processing its own internal ‘thought’ patterns through areas of cortex that have learned external logic/ stimulus/ knowledge/ concepts… it knows what it's ‘thinking’ in external real world terms.

Consciousness – I know… don't scoff lol.  The constant running internal ‘thought’ pattern is shaped/ influenced by areas of cortex that are tuned to recognise external concepts. They follow the sequence and logic learned from external events/ knowledge. Over time the internal ‘thought’ pattern consists mainly of this learned knowledge.  It starts to ‘think’ in chains of logic, concepts, words, images, etc, even with no external input… it just keeps churning and trying to connect/ validate learned information to create new concepts.

Episodic events, prediction, attention, temporal events are all there; I've got this figured out and working in the model which has naturally steered me towards my next area of experimentation… curiosity.

It’s quite a complex system; it’s like plaiting smoke sometimes lol.

XXXXXXXXX

How Curiosity would/ could/ should work.

Once my model has learned some experiences/ knowledge, and the relevant cortex areas have tuned to recognise the facets/ aspects of the experiences, it has started producing a set of patterns that seem to be exploring ‘what if’ scenarios.  As new neurons migrate to areas of high activity, the synapses they form connect similar concepts, which in turn creates a new concept that has not been experienced by the system.  The system then tries to evaluate this new concept using known logic/ concepts. Sometimes it manages to combine/ validate the new concept with its existing knowledge; sometimes it doesn't, and an errant pattern emerges that it learns to recognise using the normal learning schema. I'm hoping that as new experiences are learned it will eventually validate these errant patterns.  Though the more it learns, the more it tries to match and validate… more errant patterns emerge. I had a schema in mind similar to this to enable curiosity… but it seems to me the system has beaten me to it.
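In the AI literature, mechanisms like these errant patterns are often modelled as novelty- or prediction-error-driven curiosity: unfamiliar patterns earn a higher intrinsic score and so attract attention. A minimal count-based novelty sketch (a generic illustration from that literature, not korrelan's schema):

```python
# Sketch: count-based novelty as a curiosity signal - patterns seen less
# often score higher and attract attention. A generic illustration, not
# korrelan's schema.
from collections import Counter

class Curiosity:
    def __init__(self):
        self.seen = Counter()

    def novelty(self, pattern):
        """1 / (1 + visit count): unseen patterns score 1.0, familiar ones decay."""
        score = 1.0 / (1 + self.seen[pattern])
        self.seen[pattern] += 1
        return score

c = Curiosity()
print(c.novelty("ball"))   # 1.0  (never seen before)
print(c.novelty("ball"))   # 0.5  (seen once; interest decays)
print(c.novelty("moon"))   # 1.0  (a new concept wins attention)
```

More sophisticated variants replace the visit count with the error of a learned predictor, which is closer in spirit to "trying and failing to validate a new concept".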

I’m experimenting on this now.
 
I was wondering if anyone had any information, theories, thoughts, projects or links regarding curiosity; both the psychological aspect and implementation in AI would be cool.
Title: Re: The last invention.
Post by: kei10 on November 20, 2016, 09:04:07 pm
Please make it happen! Please make it happen! Then I don't have to pursue my work anymore!

Although I've zero knowledge of neural network, so I won't be of help -- All I can say is...

I'm rooting for you, korrelan! Keep up the amazing work!  ;D

(http://i.memeful.com/media/post/kRp6O2w_700wa_0.gif)
Title: Re: The last invention.
Post by: korrelan on November 20, 2016, 10:12:15 pm
That’s one hectic .gif lol

Please keep in mind that this research is not peer reviewed/ supported. I'm just a single bloke working towards an end goal and my interpretation of my progress might be incorrect/ unjustified; that’s why I try to post videos showing my findings/ progress. I personally believe I’m on the right track but only time will tell…

Please don’t stop your own endeavours… this problem needs solving… all we can do is our best.

The pat on the back is graciously received though… cheers.

 :)
Title: Re: The last invention.
Post by: Art on November 21, 2016, 02:49:08 am
That's some really nice work so far Korrelan...good job!

? - Does the AGI (theory / entity) of yours know what it doesn't know? If it were to receive an inquiry for which it has no logical information or data (whether local on a hard drive or on a server), would it then resort to going online on a Search and Fetch routine in order to provide a suitable answer? Assuming the above scenario happened, would it then know that answer for future reference (retention - just in case it got asked again by a different person) or would that answer be a one-time-use response just to provide an answer?
How would such info be sorted and stored? How would it be classified? Short Term, Long Term, Ephemeral, Topical, various categories could be used but tying the appropriate pieces together can be like trying to nail jell-o to a tree!

I didn't see Emotions in your listings and if I missed seeing it, I apologize. Sometimes late evening skimming is not in my best interest. Emotional responses are really only pseudo-emotional; as we know, they are only there for humans to relate to, as the AGI or bots really don't care nor have use for emotions. I classify self-preservation as a state of being and not an emotion, so anger, jealousy, greed, hate, etc. do not matter in this equation.

I did know of (and may still have it in my archives) a bot that had categories for various parts of its brain and the functions of various associated systems (hypothalamus, endocrine and many other areas) that could be 'tweaked', or have certain parameters within these areas raised or lowered. The result of these changes would then be seen when the bot / AI was next activated. It was quite advanced for its time. It is certainly within my archived collection somewhere, if I still have it at all.

A couple other AI experiments I recall gave the AI an ability to "see" through one's web camera, an object to which it was told the name. It later could recall that object when the user held it up for the program / camera to "see", recalling the image from a database of several "learned" images. Lots of creative thinkers in our collective AI past.

If interested in any of these please let me know (can PM me if needed) and I'll see if I can do some digging for you).

Best!
Title: Re: The last invention.
Post by: korrelan on November 21, 2016, 11:37:13 am
Hi Art

The young AGI starts off very small and stupid; 100k neurons and a few million synapses.  As it views/ feels and listens to its world it slowly becomes more intelligent.  Experiences/ knowledge are laid down in the cortex in a hierarchical manner. The oldest AGI I have tested so far is two days old; it had learned to understand speech commands and recognise the words visually.  One of the main problems I face is that once an AGI has started learning, changing any properties in its connectome mucks the whole thing up. Because I'm still developing the system I have to save each AGI and start again. I am at the stage where I'm going to run one constantly for a few weeks to see what happens though; I have loads of stuff to test on an older model.

Quote
Does the AGI (theory / entity) of yours know what it doesn't know?

Good question.  Yes it should eventually evolve cortex areas/ patterns that detect this kind of scenario. As the cortex expands in 3D space with age, areas will arise that recognise any and all combinations of the current experience. If the system is shown what to do in this kind of scenario it will try to apply the same techniques next time; eventually it will get the general idea, just like a human would.

Quote
How would such info be sorted and stored?

All learned information is incorporated into the system's overall intelligence/ knowledge. It won't be limited by its skull volume like us.  The information is laid out/ stored fragmented over the cortex, with each fragment located relative to its meaning/ use; it is practically impossible to manually retrieve. You would just verbally ask the system.

The system can be taught to use a standard database I suppose as well... just like we would.

Quote
I didn't see Emotions in your listings

Each synapse is sensitive to seven neurotransmitters that alter its parameters. A basic limbic system learns to recognise relevant patterns in the main ‘thought’ pattern and flushes defined areas of cortex with the transmitters. I've done some work on this but the results are very unpredictable; I'm going to wait until I have the AGI old enough to explain what it's experiencing before I try to implement emotions. (hopefully)

The system has no problem recognising faces and objects; linking words and experiences to objects/ sounds etc. (some vids early in thread).

The bot that could be ‘tweaked’ sounds interesting; if it’s not buried too deep in your archives?

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on November 21, 2016, 12:42:46 pm
Good stuff. Seal of approval.
Title: Re: The last invention.
Post by: keghn on November 21, 2016, 03:48:36 pm
 So @korrelan how is your AGI machine storing temporal memories?
Title: Re: The last invention.
Post by: kei10 on November 21, 2016, 09:21:14 pm
@keghn
I'm rather curious about that, too.
Title: Re: The last invention.
Post by: keghn on November 21, 2016, 11:12:22 pm
 It is believed that a brain stores one picture of a horse by creating a detector NN to detect it; supervised or unsupervised will work.
 Then this horse-detector NN is pointed at an internal doodle board somewhere in the brain, and when a horse is drawn and created just right the horse detector will reactivate and say perfect.
 Then a sequence of doodles is encoded into an RNN, an LSTM, a WaveNet NN or many layers of raw NNs stacked together, which decode off a timer or a daisy chain of nerves.
Title: Re: The last invention.
Post by: Art on November 22, 2016, 12:58:59 pm
@ Korrelan - Thanks for the answers. O0

I'll dig around in my archives to see if I can find that bot....
Title: Re: The last invention.
Post by: Art on November 22, 2016, 09:42:52 pm
@ Korrelan...After much digging and hooking up my external 1TB HDD, I found it...

The bot is Aaron (Aaron2 actually). There was an Aaron and before that one named AIB (retired).
It was by a company called Isomer Programming and the guy's name was / is Matthew Rodgers. http://www.isomerprogramming.com/ (http://www.isomerprogramming.com/)
I think he's turned his efforts toward InMoov, the 3D printed robot, and his Arduino enhancements.

Much to my surprise after copying it from the External which ran XP, to my current Win10, it ran!
There is quite a lot to this program with much to explore. It is from the 2005 era as I recall.

It is more of an experimental thinking / chat creation than your typical AIML type bots. There are a lot of features that can be configured like Speech, Web Cam, multiple users, injectors, commands, etc.

I took a few screen shots as examples.
Title: Re: The last invention.
Post by: korrelan on November 22, 2016, 10:06:48 pm
@Art

Cheers.  Looks very interesting... A lot more in depth than I imagined. I'll have a good read/ play.

I appreciate the effort to find it.

 :)
Title: Re: The last invention.
Post by: korrelan on November 28, 2016, 11:39:45 pm
Anyone interested in signal analysis? Fast Fourier Transform Series perhaps?

I needed a break from coding the AGI so I’ve spent the last week rewriting my audio input modules.  All external senses are passed to the AGI through small external modules designed to run on separate machines in the cluster and pass their data through TCP pipes.

The old audio module was a quick effort I wrote many years ago just to test my theories.  The new module has a much higher resolution (500 bins, 1-8 kHz) with custom automatic filters to enhance human voice phonemes.

https://www.youtube.com/watch?v=CiluUf4sEGo (https://www.youtube.com/watch?v=CiluUf4sEGo)

This vid shows the results from a cheap £7 omnidirectional microphone listening to phonemes.

It’s designed to use any old microphone and yet still produce a quality hi-rez signal for the AGI.

The app only uses about 2% of the available CPU (4 GHz quad); the rest is the video recording app.  It hooks up automatically to my fixed IP so I'll eventually be able to use a laptop/ phone to talk to the AGI from remote locations.
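For anyone curious what "bins" means here, the idea can be sketched in a few lines of Python. This is my illustration, not korrelan's VB6 code; the block size and the 1 kHz test tone are made up for the example. One block of samples goes through a DFT and the tone's energy lands in the bin nearest its frequency.

```python
import cmath
import math

# Hedged sketch of the frequency-bin idea: one block of 'microphone'
# samples in, a list of frequency-bin magnitudes out.
FS = 48_000        # sample rate in Hz
N = 512            # block size -> bin width = FS / N = 93.75 Hz
TONE = 1_000       # synthetic 1 kHz test tone

block = [math.sin(2 * math.pi * TONE * n / FS) for n in range(N)]

def dft_bins(x):
    """Naive O(N^2) DFT magnitudes; an FFT computes the same bins faster."""
    size = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / size)
                    for n in range(size)))
            for k in range(size // 2)]

bins = dft_bins(block)
peak = max(range(len(bins)), key=bins.__getitem__)
print(f"peak at bin {peak}, ~{peak * FS / N:.0f} Hz")
```

The 1 kHz tone sits between bins (1000 / 93.75 ≈ 10.7), so the energy peaks in bin 11 with some leakage into the neighbours; a real module like korrelan's would window each block to tame that leakage.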

 :)
Title: Re: The last invention.
Post by: keghn on November 29, 2016, 02:32:19 am
 Very interesting there @Korrelan. What language is it in? Is that 1Hz to 8KHz?
 I have been looking  around for easy to use DFT software, for a while:)
Title: Re: The last invention.
Post by: LOCKSUIT on November 29, 2016, 08:43:04 am
Bins...............................

          ..................Pipes................................................

Why make a bad microphone's input better when you can just give it a better microphone?

Isn't the video only showing the input get saved? Each strip is a few moments of sounds?

Someone's playing around with the site cus I just clicked the first topic at the top right and it now for the second time gave me a scam website while also opening the thread. Now when clicking Freddy's name to PM him. Checking if I have a virus.

http://advancessss.deviantart.com/art/885858788-648474305?ga_submit_new=10%253A1480409476 (http://advancessss.deviantart.com/art/885858788-648474305?ga_submit_new=10%253A1480409476)
Title: Re: The last invention.
Post by: korrelan on November 29, 2016, 09:15:59 am
Quote
What language is it in?

I usually use VB6 Enterprise (not VB.net… way too slow) for interface design and then write DLL’s in C for extra speed when required.

https://en.wikipedia.org/wiki/Visual_Basic (https://en.wikipedia.org/wiki/Visual_Basic)

Most of the time I don’t have to bother with C though because VB6 compiles to native code and has one of the best optimizing compilers ever written in my opinion (all compiler options on). It’s 95% as fast as pure C in most cases (pure maths) and obviously much faster to code and develop with.  If you use the Windows API, DirectX or OpenGL for graphics it’s amazing how fast it runs for a high level language.  This audio module is pure VB6; it runs plenty fast enough for the job so no need for C modules.  Speed of program development is most important to me; if I have an idea I can code it in VB6 in a few minutes and test the theory, then optimize later if required.

Quote
Is that 1Hz to 8KHz?

I’m sampling at 48 kHz into 4096 bins. I’m only rendering the bottom 500 bins (1Hz to 8kHz) though because that’s where 99% of human speech resides.
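As a reference point, here is the arithmetic implied by those settings, assuming a plain 4096-point FFT with no zero-padding or overlap (the module's exact windowing and overlap may differ, which would shift these numbers):

```python
# Bin arithmetic for 48 kHz sampling into 4096 bins, assuming a plain
# 4096-point FFT (an assumption; the module's exact setup may differ):
fs, n_fft = 48_000, 4_096
bin_width = fs / n_fft           # Hz per frequency bin
bottom_500 = 500 * bin_width     # span of the bottom 500 bins, in Hz
print(f"{bin_width:.2f} Hz per bin; bottom 500 bins cover 0-{bottom_500:.0f} Hz")
```

Under that assumption each bin is roughly 11.7 Hz wide, so the bottom 500 bins cover just under 6 kHz; a larger FFT or different mapping would be needed to stretch 500 bins out to 8 kHz.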

Quote
I have been looking around for easy to use DFT software, for a while

This is an excellent book on signal processing. You can download the chapters in PDF format (top left); and there are even some example programs written in various languages.

http://www.dspguide.com/ch12.htm/2.htm (http://www.dspguide.com/ch12.htm/2.htm)

 :)
Title: Re: The last invention.
Post by: korrelan on November 29, 2016, 09:40:08 am
Quote
Why make a bad microphone's input better when you can just give it a better microphone?

Yes you could… and I do have some excellent microphones.  Even using the best microphones there is a lot of additional noise in the time/ frequency domains that requires filtering (CPU fans, static white noise, background chattering, echoes, servo noise, etc).  I’m also using omnidirectional microphones so I can speak to the AGI from across the room; or shout from the next room as I tend to do lol.

The module is also designed so that eventually anyone will be able to download it and talk with the AGI no matter what setup they have.

Quote
Isn't the video only showing the input get saved? Each strip is a few moments of sounds?

The video is showing the frequency components of speech. Each strip is a part of the sound in that frequency domain. The pitch of the voice/ sound is how close the lines are together and the volume is shown by the ‘power’ of the signal, etc.  I’m extracting all the required components from sound to enable the AGI to recognise what is being said.

This module basically renders spoken words down to a set of values; it pre-processes the data to keep the load off the main cluster and sends it across a network/ internet for neural processing.  Like the retina, the ear does a lot of pre-processing before passing the signal to your noggin.

This is a visual/ocular input module. It can render video or images into several formats (for testing) and again pass it across a network to the AGI. (one for each eye, etc) The vid shows the conversion of image contrast into frequencies.

https://www.youtube.com/watch?v=RHsKqF4Sgpk (https://www.youtube.com/watch?v=RHsKqF4Sgpk)

I have others for servo output and sensory input etc.

Any AGI will require modules like this if it's to understand its environment.

 :)

(http://i.imgur.com/3fUBFow.jpg)

:)
Title: Re: The last invention.
Post by: LOCKSUIT on November 29, 2016, 10:10:50 am
Well if I hear fuzzy speaking (and I mean it, 60% fuzz I HEAR (has got through)) I can still make them out. So as long as we check that our AI's cameras and microphones look/sound crystal clear, ye safe. But ya, go and revolutionize cameras/google, post your module to google/intel corp. since it's made now.
Title: Re: The last invention.
Post by: jlsilicon - Robotics AI on December 05, 2016, 12:22:09 am
Korrelan,
Impressive job !

I would not have thought, using neural nets, of the extent of the results that you have brought it to.

Seems to be using the Bottom-up method (Neurons).
Must have been a lot of work.
Title: Re: The last invention.
Post by: LOCKSUIT on December 05, 2016, 05:02:42 pm
Ahhh how I love the bottom-up and top-down methods. I love subsystems. I live underground in one. The pipes. The black. The sewer. The fooood......Yum yum.
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 03:41:33 pm
Basic low resolution sensory Homunculus design/ layout.

There are several reasons why the AGI needs a complete nervous system but the main ones are to do with signal order and frequency.  I need precise entry and exit points in the cortex connectome to enable the system to monitor/ sense/ move its external body.  Whilst in the womb our cortex is partially trained; we can see light and hear voices through the womb wall.  Although we have somewhat limited motion, the motor/ sensory/ somatosensory areas still manage to tune and learn each other's feedback patterns and frequency domains.  By the time we are born the body's nervous system is already precisely tuned to the relevant cortical areas.

These will also provide the correct locations to inject/ extract hardware sensory/ servo points.

I’ve tried to stick closely to the human topology. The basic sensory cortex is believed to be laid out in manner similar to (A). Our spine/ nervous system develops from a neural tube in utero which leads to a connection schema similar to (B). (but left to right)

(http://i.imgur.com/lsycYhe.jpg)

https://www.youtube.com/watch?v=jrHT6Rx_y7s (https://www.youtube.com/watch?v=jrHT6Rx_y7s)

Quick vid showing progress so far; this is a starting point. 

Notice that the left side of the body is handled by the right side of the cortex, this was by design.  The signalling schema I employ naturally results in this effect. Also note the red circle in the diagram; this junction is reproduced on each 'limb' of the simulation... it won't work without it.

The whole nervous system will be subject to the same learning rules as the rest of the cortex, so this will develop depending upon the requirements of experience. Nerve clusters will arise at relevant locations and neurogenesis will add millions more neurons and synapses; this will become a tuned input/ output body stage for the cortex. I’ve designed a similar reverse map to handle output using the cerebellum rather than cortex.

The last few seconds shows a simple stimulation pattern.

Bit more done... baby steps.

 :)

Edit: As usual this is my interpretation of how we function... you won't find references in scholarly articles.
Title: Re: The last invention.
Post by: kei10 on December 10, 2016, 03:46:58 pm
(http://i.memeful.com/media/post/kRp6O2w_700wa_0.gif)

Now this is what I call a work of lunacy! Absolutely majestic and splendiferous! I'm dying to see more!

Thanks for keeping us noted with the progress!  ;D
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 03:58:20 pm
Why... thank you my good man.

I'll make sure you're one of the last to be terminated when my little project becomes sentient... hehe.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 10, 2016, 04:20:59 pm
Don't ya just connect the sensors by wires to the brain and then back out to the motors? The signals that the sensors send and the signals that the motors receive are the right *Correct frequency (digital sense/action) and pattern. Camera sends " ", motor receive's " ".
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:05:50 pm
Quote
Don't ya just connect the sensors by wires to the brain and then back out to the motors? The signals that the sensors send and the signals that the motors receive are the right *Correct frequency (digital sense/action) and pattern.

Anything more complex than a very simple light detector/ relay and a motor on/ off signal will require supporting electronics.

So… no, the output from an electronic sensor is not directly compatible with a motor's driver.  You would require some calibration/ conversion electronics.

How would you encode the multiple sensor signals so the AGI could differentiate between different pressures/ temperatures across multiple sensors? The human nervous system is capable of reading/ blending millions of external readings effortlessly… making sense of them and acting accordingly. This is what I want/ require.

Quote
Camera sends " ", motor receive's " ".

Erm… no.  I wish it was that simple.

 :)
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:13:15 pm
An explanation of the cerebellums function. (According to me)

This links to the homunculus above... I forgot to add.

The whole human nervous system including the cortex and all its associated areas runs on feedback. It’s a constant dance between input… recognition/ processing and output… input…

The motor cortex sends loads of efferent connections to the seemingly complex cerebellum before they get distributed to the muscle groups.  Seems like way too many outputs to jibe with the feedback theories.

I think (am sure) the reason for this is because the motor feedback loops exit through the cerebellum… we move… we sense… we see… and the loop includes the external environment before re-entering the system via the vision/ tactile systems.  There has to be a very high resolution output stage to sense the world before our high resolution sensory organs bring it back into the system to conclude the loops.

This is how we include/incorporate external stimuli into our internal models.

An efficient loop processing schema has a number of outputs equal to its number of inputs.

 :)

Edit: I'm also trying to make it as compatible as possible with the human connectome... consider the possibilities.

:)
Title: Re: The last invention.
Post by: LOCKSUIT on December 10, 2016, 05:23:38 pm
My good sir, as a typical engineering project built by electricians and engineers etc, the sensors and motors are calibrated and wired to the brain, and loaded with the necessary drivers to send/receive the signals (the senses/actions). Sensor signals are never sent through the motor wires ever. And all sensors such as pressure and temperature each send their senses to the brain through their own wires and all the senses are filtered through an attention system. All sensors send the corresponding sense such as high or low pressure signals. The whole somatosensory image is saved in memory after searching memory. And like the eye, only a small or big point of the image searches memory. The motors receive the selected output. There can be many more motors than sensors.
Title: Re: The last invention.
Post by: korrelan on December 10, 2016, 05:54:20 pm
Cool... I'm pleased you also have a working solution to the problem.

:)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 12:09:20 am
Ah I looked over your first post again with all of your videos and found some more things.

Korrelan, first let me ask you. When the microphone sends a signal to the brain, doesn't it only contain 1 signal being 1 frequency being 1 phoneme? Why do your 2 videos show either a splash of colors or a bar of musical sticks etc? If you look at a music's wave track it is actually made of single slices. While some techno/anime videos on YouTube show a double sided bar with color pumping differently each moment like yours similarly I don't get how.

What I see in your attention columns > While I call something in my system "Attention System", there is something else in my system that relates to your "attention columns". What my system senses externally or as internal incomings, they leave energy behind at the recognitioned memories. Similar memories are saved near others. So this recent energy can wave back and forth like fire shown in your video (sleeping while awake ha!) so the energy changes back n forth to similar senses near the original selected. This, as you said, guides it (what has a higher chance of being recognized ~ selected plus draw the search) and has it focus on certain thoughts.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 11:07:49 am
(http://i.imgur.com/1MEQ3rl.jpg)

Quote
When the microphone sends a signal to the brain, doesn't it only contain 1 signal being 1 frequency being 1 phoneme?

Nope… the output from a microphone can be viewed as a complex sine wave. (top right of image)

Quote
While some techno/anime videos on YouTube show a double sided bar with color pumping differently each moment like yours similarly I don't get how.

In films they tend to show a stereo amplitude waveform; two amplitude levels back to back with ‘0’ along the center axis. (top left). Each spike is showing the maximum ‘power’ of all the combined frequencies within that millisecond of sound (complex sine wave). This can be thought of as just the tip of the iceberg; it’s a very rough representation of the audio signal.

Quote
Why does your 2 videos show either a splash of colors or a bar of musical sticks etc? If you look at a music's wave track it is made of 1 slice slices actually.

In this form it’s pretty useless for speech processing. Each ‘amplitude spike’ is made up of different embedded frequencies. (Bottom left) To extract the frequencies you have to run that millisecond of sound through a ‘Fourier Transform’. This basically sorts the complex sine wave into ‘bins’; each bin represents one frequency.  The spectrogram shows the frequency ‘parts/ bins’ for each millisecond of sound.  Each of the frequencies has a power/ amplitude element that can be represented on a colour scale, blue low to red high. (Bottom Right).

So a phoneme is actually a number of different frequencies/ powers changing over time.

The bottom right image is a 3D representation of the section of audio inside the red box.

This is quite a complex subject but I hope this makes it clearer.
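The frame-by-frame idea can be sketched in a few lines of Python (my illustration, not the actual module; the sample rate, frame size and pitch change are invented for the demo): cut the signal into short frames, transform each one, and the dominant bin tracks the pitch over time, which is exactly what the colour display renders with magnitude mapped blue to red.

```python
import cmath
import math

# Minimal spectrogram sketch: frame the signal, DFT each frame, and
# track the strongest frequency bin over time.
FS, N = 8_000, 256                          # sample rate, frame size
freq = lambda t: 500 if t < 0.5 else 1_500  # the 'phoneme' changes pitch
sig = [math.sin(2 * math.pi * freq(n / FS) * n / FS) for n in range(FS)]

def dominant_hz(frame):
    """Frequency (Hz) of the strongest DFT bin in one frame."""
    size = len(frame)
    mags = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / size)
                    for n in range(size)))
            for k in range(size // 2)]
    return max(range(len(mags)), key=mags.__getitem__) * FS / size

track = [dominant_hz(sig[i:i + N]) for i in range(0, len(sig) - N, N)]
print(track[0], track[-1])   # dominant pitch before and after the change
```

A phoneme is the same thing with several frequency bands rising and falling at once rather than a single clean tone.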

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 03:59:12 pm
I am completely in the dark on how 1 single moment's sound from a microphone gives multiple readings.

Does a microphone have 3 microphones inside each a different size ex. high pitch small and low vibe sub woofer speakers?

To me the microphone reads from a sensor so many times a second and the signal is stored as a few bytes. I don't understand how a pitch of say 77 gives you 33 this 1657 this and 2 this THERE'S NO SUCH THING.

To make this even more clearer, take a camera pixel recorder. Take a single shot from it. It is a pixel saved into the storage device of the brightness from 1 to 1000 say. To get color you need another 2cd sensor. Each sensor gives one code ex. 1010. A code cannot mean anything else then the factory calibration.

You can change the code ex. brighten up a photo. But adding the mods to the table/grid sheet with the original data is useless or no? Maybe for searching? Ex. a grid table with a picture of mona lisa and a ultraviolet picture of her in the vertical direction and many brightnesses of each in the horizontal direction.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 04:30:58 pm
Quote
I am completely in the dark on how 1 single moment's sound from a microphone gives multiple readings.

Hehe… I wrote the reply in a hurry so perhaps my description was lacking.  You are correct, it doesn’t give multiple readings. Sound is basically a modulated pressure wave (air vibration); a basic microphone converts this through a coil into a modulated voltage. The sample used to extract the combined frequencies is in the top right and is over a very short period of time. I sample at 48 kHz, or 48,000 voltage readings per second.  I then choose how big a block of the 48,000 to process to extract the frequencies… in that continuous sample. Then the next block is processed, and so on… in real time.  It is a linear signal… and can be interpreted in many ways. The peaks and troughs in the signal (top right) represent the amplitude/ power… peaks at regular intervals are frequencies, harmonics, etc. The Fourier Transform finds/ sorts these peaks into regular frequency domains/ ranges.

In your ear you have millions of small hairs that resonate at set frequencies, all the above is just to convert sound into a similar format for the AGI.  Evolution has devised a much more elegant system but this is what we have to work with… until someone invents a microphone that works along the same lines as our auditory system.

So sound isn't expressed/ experienced by your brain as a continuous sample like you get from a microphone, it receives the sound as parallel set of modulated frequencies.

Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa.

Make more sense?

 :)

The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the amplitude (or loudness) of its constituent notes. The Fourier transform of a function of time itself is a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function, and whose complex argument is the phase offset of the basic sinusoid in that frequency.
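The chord analogy in that definition is easy to demonstrate (a toy sketch of mine, not part of the project; the two notes and block size are arbitrary): two notes summed into one messy waveform, and the transform recovering both constituents.

```python
import cmath
import math

# Two notes sounding at once make a single combined waveform; the
# Fourier transform pulls the constituent notes back out.
FS = N = 1_024                  # 1-second block at 1024 Hz -> 1 Hz per bin
LOW, HIGH = 220, 330            # A3 and E4, a perfect fifth
wave = [math.sin(2 * math.pi * LOW * n / FS) +
        0.5 * math.sin(2 * math.pi * HIGH * n / FS) for n in range(N)]

spectrum = [abs(sum(wave[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]
notes = sorted(sorted(range(len(spectrum)),
                      key=spectrum.__getitem__)[-2:])
print(notes)                    # the two recovered notes, in Hz
```

With 1 Hz bins the bin index is the frequency, so the two loudest bins come out at exactly 220 and 330, with the 330 Hz bin at half the magnitude, matching its 0.5 amplitude.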
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 04:56:22 pm
Not one bit. But ok many hairs in our ears. I overlooked that oops. (I was thinking 2 microphones 2 ears). Though I'm pretty good don't start on that lmao.

Oh korrelan if only you could explain it more clearer in 10 words. I know I could.

Hey korrelan, what business do you run?
Title: Re: The last invention.
Post by: keghn on December 12, 2016, 05:00:19 pm
 One of the simplest ways a computer records sound is in the uncompressed WAV format.
 I use SOX software to manipulate my sound files.

 All sound is a repeating wave and its strength: 

https://en.wikipedia.org/wiki/Pulse-code_modulation


 The problem with sound is that when a bunch of simple waves are mixed together it is very difficult to separate them
back out into their simpler components. All the waves are hiding behind each other.

 To separate them there is the tank circuit used in electronics: 
https://en.wikipedia.org/wiki/LC_circuit


 Then there is the Goertzel algorithm: 
https://en.wikipedia.org/wiki/Goertzel_algorithm


And the FFT logic: 
https://en.wikipedia.org/wiki/Fast_Fourier_transform
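The Goertzel algorithm keghn links is small enough to sketch directly. This is an illustrative pure-Python version (the 8 kHz rate and DTMF-style test tone are my choices): it measures a single frequency bin with one multiply-add per sample, which is handy when you only care about a few known frequencies instead of a full FFT.

```python
import math

# Minimal Goertzel sketch: the magnitude of one DFT bin, computed as a
# running second-order filter over the samples.
def goertzel(samples, fs, freq):
    """Return the magnitude of `freq` (Hz) within `samples` at rate `fs`."""
    n = len(samples)
    k = round(n * freq / fs)                 # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:                        # one multiply-add per sample
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # bin power from the final two filter states
    power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0))

fs = 8_000
tone = [math.sin(2 * math.pi * 941 * t / fs) for t in range(400)]
# goertzel(tone, fs, 941) is large; goertzel(tone, fs, 1209) is near zero
```

This is how DTMF phone-tone detectors work: eight Goertzel filters instead of one big FFT.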
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 05:51:15 pm
Quote
Oh korrelan if only you could explain it more clearer in 10 words. I know I could.

I can do it in four words… Search Google and Learn.  ;)

Quote
Hey korrelan, what business do you run?

Mainly bespoke software and systems design.  Welding QA/QC/NDT systems, medical expert systems, imaging & diagnosis, security surveillance and access systems, shop/ gallery EPOS systems, accountancy systems, production line systems, etc… many varied strings to my bow lol.  I tend to write the main core software packages based around third party hardware solutions and supply the software/ hardware/ installation teams/ maintenance teams based on yearly contracts. Also consultancy etc...

It keeps the wolf from the door... why did you ask?

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 06:13:42 pm
Asked out of interest.

Doesn't all the terms out there like ones you mentioned fry your mind lolllllllllllllllllll

ex.
"I tend to write the main core software packages based around third party hardware solutions and supply the software/ hardware/ installation teams/ maintenance teams based on yearly contracts."

Otherwise I've nearly FRIED my mind with so much fast paced work in such short time.

It's almost too much to keep going over and analyze for me.

Bespoke.

Welder.

Floor layer.

Accounter.

Quantum.

Universe.

Feces.

Motors.

Electrostatic discharge.

Earwax.

Grandpa.

I sit here for hours on thinking how and what will bespoke fit into in my database and what is it and just you get it.
Title: Re: The last invention.
Post by: korrelan on December 12, 2016, 06:33:07 pm
Many, many years ago I was an IT teacher and lecturer (18 yr olds and above)… I just remembered why I gave up and swapped vocations.

 ;)
Title: Re: The last invention.
Post by: LOCKSUIT on December 12, 2016, 10:52:23 pm
There is power in an army.
Title: Re: The last invention.
Post by: Art on December 13, 2016, 02:20:40 pm
But without discipline and good leadership, it's just a group of people.
Title: Re: The last invention.
Post by: korrelan on December 14, 2016, 07:29:26 pm
I was going to write… there’s more power in a leggy… but yeah! I’ll second what Art wrote.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 15, 2016, 02:09:52 am
And who do you think the army is?

;)
Title: Re: The last invention.
Post by: Art on December 15, 2016, 03:35:29 am
I KNOW who and what the Army is...the question is...Do You? ;)
Title: Re: The last invention.
Post by: kei10 on December 15, 2016, 06:18:32 am
Sorry, but... um...
What is this thing about army? o_O

I'm a bit lost.
Title: Re: The last invention.
Post by: LOCKSUIT on December 15, 2016, 03:54:31 pm
K we better keep this thread's next page clean, this is korrelan's progress page afterall.
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 01:23:04 pm
Sleep

I’ll throw my penny's worth in regarding sleep.  Most of this text is from my notes for the book I’m writing so I apologise if you have read similar from me before… as usual these are my insights/ interpretations and probably not in line with major mainstream concepts on sleep; just covering my a** lol.

I don’t think there is any difference between being asleep and awake except that the brain has less sensory stimulation. Our ‘consciousness’ doesn't seem to change.

The ‘state’ of sleep is obviously brought on through physical fatigue. Lactic acid build-up in the muscle groups, etc. just makes physical movement harder, and this effect is enhanced by the brain’s requirement for a period of low activity/ rest.

A good analogy would be a modern car engine; it revs higher the more work it has to do. If your car is idling and you turn the lights on, the engine management system will slightly increase the tick-over to compensate for the extra current draw. It’s a dynamic system that relies on feedback from both its own senses and the driver.  If you listen closely you will even hear the revs cycle through different frequencies over a period of time; this is not by design… it’s an inherent property of a complex system.  The management might sense the battery is getting low, a breeze might blow into the air intake altering the fuel mix; the car management adapts in real time to feedback and the revs change. The engine is a self-contained system that adapts to sensory input.

The ‘states’ of sleep are merely our brains ticking over, not stimulated by sensory input. As soon as a sensory stream (audio: loud noise) is fed into the cortex it springs into action (like touching the accelerator/ gas pedal).  There is a gradient from being sound asleep to wide awake.  It’s the sheer amount of sensory stimulation that drives/ raises our consciousness.  Stimuli like sounds can ‘creep’ into our dreams and influence them because all the machinery that normally comprises our consciousness is still running.

Like an engine, our brains are a self-regulating system that acts on sensory input.  When we look at a scene, the sensory stream from our eyes combines/ adds with the visual feedback stream from our imagination/ memory.  Our internal simulation of the world (as far as vision goes) is updated by both these inputs together.  There are thousands of different feedback networks running, covering all our senses; vision is only one.
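The self-regulating ‘revs’ idea above can be sketched in a few lines of Python; the constants, the idle term and the whole update rule are illustrative assumptions, not values from the actual simulation:

```python
import random

def brain_step(arousal, sensory_drive, idle=0.05, gain=0.5, decay=0.1):
    # One tick of a self-regulating activity level (the 'engine revs'
    # analogy). The idle term stands in for internal feedback -- the
    # imagination/ memory streams -- so activity never falls to zero even
    # with no senses driving it, which is the picture of sleep above.
    internal = idle + random.uniform(0.0, 0.02)
    return max(arousal + gain * (sensory_drive + internal) - decay * arousal, 0.0)

random.seed(1)
a = 1.0
for _ in range(200):                 # no sensory input: settle to an idle tick-over
    a = brain_step(a, sensory_drive=0.0)
asleep = a

for _ in range(20):                  # a loud noise: 'touching the gas pedal'
    a = brain_step(a, sensory_drive=1.0)
awake = a

print(asleep < awake)                # same machinery; only the sensory drive differs
```

The point of the toy is that ‘asleep’ and ‘awake’ are the same loop; nothing switches on or off, only the sensory drive changes.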

An interesting experiment to try tonight when you're lying in bed.  Some/ all of you might already do this to help you fall asleep.

Once you have lain there in total darkness for a few minutes, and after the activity in your retina machinery has calmed down… look into the darkness.  Notice all the little specks, blurs and gradients… I believe that’s part of your imagination, a reflection of the current state of your consciousness.

Relax and try to look/ focus into the darkness (like a stereogram); eventually a shape or image will start to become clear, could be anything… part of a face, a piece of wood, gears, ice crystals… anything.  With a bit of practise you can force images to appear as though you’re looking at them and even make them morph.  Sometimes you can even focus with both eyes and achieve a feeling of 3D depth perception.


Now… you can see this? It feels like you're looking at it, but your eyes are closed and it’s totally dark, so where are the images coming from?  It can’t be an after-image because your eyes have calmed, and you’re probably seeing something you definitely haven’t seen today; the chances of your neurons firing randomly to make a salient image are beyond… well, you know.

The brain doesn’t duplicate machinery, so if we feel someone’s pain I think we use our own pain cortices/ machinery to empathise.  This is how we can literally feel their pain.

In a similar vein, I think what we see in the dark is the feedback from the other cortical areas into the occipital cortex.  A shape starts to emerge… the brain recognises it… this recognition emphasises the shape even more… and so on.  If you see an apple it’s your cortex recognising the outline of an apple as the most appropriate match for the given shape, and the thought of an apple strengthens the image.  We also use this mechanism when viewing blurred images, or traffic through fog, etc.
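That recognise-then-emphasise loop is easy to sketch. In this Python toy the ‘memories’ are two made-up bit patterns and the feedback factor is arbitrary; it only illustrates how mixing the best match back into the percept locks the image in:

```python
import random

# Stored 'memories': idealised patterns the cortex can recognise (invented).
TEMPLATES = {
    "apple": [1, 1, 0, 0, 1, 0, 1, 0],
    "face":  [0, 1, 1, 1, 0, 0, 0, 1],
}

def match(percept, template):
    # Fraction of positions where the (rounded) percept agrees with the template.
    return sum(1 for p, t in zip(percept, template) if round(p) == t) / len(template)

def settle(percept, steps=10, feedback=0.3):
    # Each pass: find the best-matching memory, then mix it back into the
    # percept; the sharpened percept matches even better on the next pass.
    for _ in range(steps):
        name, tmpl = max(TEMPLATES.items(), key=lambda kv: match(percept, kv[1]))
        percept = [(1 - feedback) * p + feedback * t for p, t in zip(percept, tmpl)]
    return name, match(percept, TEMPLATES[name])

random.seed(2)
noise = [random.random() for _ in range(8)]      # 'looking into the darkness'
name, confidence = settle(noise)
print(name, confidence)
```

Whichever template edges ahead on the noise wins and is then reinforced by its own recognition, exactly the apple-outline effect described above.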

K-complex/ Sleep spindles in the EEG trace are produced when the brain ‘locks/ snaps’ onto the swirling relaxed pattern our imagination is producing with no sensory input; this is why dreams seem to flow but always in weird seemingly unconnected ways.

Our nervous system is constantly laying down neurotransmitters and compounds at the synaptic connections, ready for the next ‘thought frame’ to flush through. The body tries to clear these markers away whilst awake, but the constant usage of all areas from sensory-driven thoughts makes it practically impossible and a build-up occurs.  If we are awake for long enough periods this can be very detrimental to both our learning ability and mental stability. When sensory-driven activity drops the brain is able to catch up on housecleaning chores, and a flurry of maintenance activity comes to the fore… this makes it appear that these processes only happen when asleep.

Imagine if Google Maps were to update its connections/ routes while you were trying to follow directions to a location.  Thousands of people would get confused… and lost.  A similar problem arose in my simulation.  Existing proven networks can be consolidated/ strengthened whilst the system is ‘awake’, but new connections/ synapses that would radically alter the dynamics of the system can’t… Whilst the brain tries to do this constantly throughout the day, it works out that it mostly occurs during sleep… one of those seemingly ‘designed’ systems evolution has come up with.
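The wake/sleep split described above (consolidate existing links any time; cull and grow only when activity drops) can be sketched like this; the weights, thresholds and dictionary layout are all illustrative assumptions:

```python
def maintain(connectome, usage, awake, grow_at=3, decay=0.9, cull_below=0.5):
    # Toy wake/sleep maintenance split:
    # awake  -> proven links strengthen with use, no structural change;
    # asleep -> unused links decay/ cull, heavily co-used pairs get new links.
    for link, weight in list(connectome.items()):
        if link in usage:
            connectome[link] = weight + 1        # consolidation: safe while awake
        elif not awake:
            connectome[link] = weight * decay    # decay only during 'sleep'
            if connectome[link] < cull_below:
                del connectome[link]             # ...and cull the stragglers
    if not awake:                                # structural growth only in sleep
        for pair, count in usage.items():
            if pair not in connectome and count >= grow_at:
                connectome[pair] = 1.0

net = {("a", "b"): 1.0, ("c", "d"): 0.4}         # existing connectome
day_usage = {("a", "b"): 5, ("b", "c"): 4}       # what today's 'thoughts' used

maintain(net, day_usage, awake=True)             # no rewiring while 'navigating'
maintain(net, day_usage, awake=False)            # sleep: cull + grow
print(sorted(net))                               # [('a', 'b'), ('b', 'c')]
```

The ‘awake’ pass only deepens the route already in use; the structural edits that would confuse a running system wait for the quiet phase.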

Did you ever wake up with the answer to a problem? The ‘Eureka’ was caused by two or more problems with very similar facets/ dynamics combining to create the same resulting feedback/ answer pattern.

So… when we rest and close our eyes, the brain settles and its revs drop… but the visual/ audio/ sensory streams from our bodies and imagination/ consciousness are still present/ running… and we dream. 

 :)
Title: Re: The last invention.
Post by: kei10 on December 17, 2016, 02:46:23 pm
Very well written!
That's what I thought so, too!  ;D

(http://i.memeful.com/media/post/BRkjDbM_700wa_0.gif)
Title: Re: The last invention.
Post by: LOCKSUIT on December 17, 2016, 03:19:33 pm
Me - "man the work recently"

Sees the above - "don't do it, don't do it"

Do note that while most books aren't compact, sometimes more words do squeeze out new discoveries.

I don't think I can resist reading it though...

It's kinda a combo of "I already got fine stuff" with "I'm not able to read (my kind of scan) a huge thing just on sleep right now"

After reading a bit, ok it's not so full of all these timbits I expected. And yes the little glowy dots in bed true.

Black is vision & is sent if no light. Sometimes I see oodles OF noodles as if are lighten neurons (or rows of them~pixels). Yes in bed they then send light. Say brightness #7 instead of black #0, brightest should be ex. #1000, color is R/G/B sticky-note, search CNN by amount in area for "purple".
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 04:32:31 pm
There are two books in progress. (200+ pages atm)

The first is my definition of the entire human condition and how I think it all works with examples/ arguments and the reasoning behind my proofs/ conclusions..

The second is basically an engineers/ programmers guide on creating a human+ level intelligence.  The low level technical specifications on what/ why/ how.

Neither is finished for obvious reasons... but they will be... eventually... lol.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 17, 2016, 04:39:13 pm
Did you see the Toy Story video I had posted? Just wanna make sure. It was too funny.
Title: Re: The last invention.
Post by: korrelan on December 17, 2016, 04:45:23 pm
Yes lock I saw the video...  :)
Title: Re: The last invention.
Post by: Art on December 19, 2016, 12:35:30 pm
What would be cool and useful would be a similar analogy of our human bodies as pertains to a car or computer. What if the computer could execute its own diagnostics, defrags, and necessary waste deletions, and the car could run self-diagnostics and self-tune or print needed repair orders while they were "sleeping" or in a state of not being used?

What if an AI could spend its time researching, gathering, postulating, "thinking" on faster, better methods of doing what it does? All this "thinking" would be done during down time, when humans weren't using it.

All comes down to programming....
Title: Re: The last invention.
Post by: kei10 on December 19, 2016, 02:03:47 pm
Although I have not thoroughly researched it, I theorize that when an AGI that has emotion "thinks" too fast and too efficiently, capable of doing things like the savants, it will break down, and break down fast -- that is, if it's implemented to be like us social beings; I am not sure otherwise, if it weren't made to be a social being.

When one becomes so powerful, everything becomes... pointless. It becomes chaotic.

What do you guys think? I would love to hear your thoughts onto this. ;)

Title: Re: The last invention.
Post by: LOCKSUIT on December 19, 2016, 03:32:12 pm
So kei (and infurl), you think they will not be perfect, acquire more errors per complexity, and go completely out of control?

I will tell you the answers.

But you will have to wait until my secret video is prepared.
Title: Re: The last invention.
Post by: Art on December 21, 2016, 03:55:49 am
But isn't that something of a conundrum? How could you prepare for the rest of us a 'secret video', when the very act of posting it for our viewing will render the 'secret' nature of it useless and moot!

Perhaps You are the AGI and you are far too clever for the rest of these learned people to discover your identity!  O0
Title: Re: The last invention.
Post by: kei10 on December 21, 2016, 05:02:38 am
(http://i.memeful.com/media/post/Wwl87wE_700w_0.jpg)
Title: Re: The last invention.
Post by: Art on December 21, 2016, 02:27:08 pm
"Billions and billions..."
Title: Re: The last invention.
Post by: LOCKSUIT on December 21, 2016, 03:39:53 pm
Feed me
Title: Re: The last invention.
Post by: korrelan on December 23, 2016, 01:11:48 pm
Synaptic Pruning.

https://en.wikipedia.org/wiki/Synaptic_pruning (https://en.wikipedia.org/wiki/Synaptic_pruning)

This is an important part of my AGI design.

The usual small patch of cortex and layout is used. 40 complex patterns have been learnt (right panel), and the system's confidence and the current pattern are shown bottom left.  The rising bars show the machine's confidence/ recognition of which pattern out of the 40 the current one is.  The rising bar should track the moving bar (pattern number) if it has learnt correctly.

Top left you can see the current number of synapses, S:54413. The other number to note is the one just under the CY: label (top left). This shows the overall complexity of the current ‘thought’ pattern.

https://www.youtube.com/watch?v=Gd42rbMlFOk (https://www.youtube.com/watch?v=Gd42rbMlFOk)

On the first test/ pass the patterns are showing a high confidence level and the ‘complexity’ is running at around 28K on average.

At 0:23 I simulate a synaptic cull/ pruning of the connectome.  Notice the synapse count (top left) drop from 54K to 39K… that’s quite a few synapses gone.  The complexity value for the second recognition pass is lower, around 23K, but the confidence levels never change.  The dropping of the complexity value can be construed as an improvement in ‘clarity of thought’.

This is part of the memory consolidation of the connectome.  The culled synapses do affect the overall stability/ rationality of the system, but the cull leaves long term memory/ learning intact.

Culling gets rid of stray unfocused/ unlinked pattern facets that would otherwise build up and confuse the system over time; leaving a nice fresh new uncluttered scaffold of proven consolidated memories/ learning/ knowledge on which to build new memories and experiences.

Memories/ patterns recorded with a strong emotional bias are less likely to be affected; this is probably why teenagers are known for being moody/ unruly… they have (on average) more mood-inducing synaptic patterns running than an older adult >30.
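A minimal sketch of that cull, assuming a made-up weight threshold and an emotional tag per synapse (neither taken from the actual system):

```python
def prune(synapses, keep_at=0.3, emotion_bias=0.5):
    # Synaptic pruning sketch: weights below threshold are culled, but an
    # emotional tag raises a synapse's effective strength -- the
    # teenage-moodiness point above.
    return {sid: (w, e) for sid, (w, e) in synapses.items()
            if w + emotion_bias * e >= keep_at}

synapses = {
    1: (0.9, 0.0),   # strong consolidated long-term link: survives
    2: (0.1, 0.0),   # stray unfocused pattern facet: culled
    3: (0.1, 0.8),   # weak but emotionally charged: survives the cull
}
pruned = prune(synapses)
print(sorted(pruned))   # [1, 3]
```

Synapse 2 is exactly the kind of clutter whose removal drops the ‘complexity’ number without touching confidence.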

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 23, 2016, 04:24:12 pm
So neuron memories and axon links are being deleted if they are not being used enough? I do this by muscle strength - weaker get weaker faster and do it on their own.
Title: Re: The last invention.
Post by: korrelan on December 23, 2016, 11:09:29 pm
Neurons don't have memories... and... never mind.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on December 23, 2016, 11:40:00 pm
Those neurons store a image of my moma. Watch it.

https://www.youtube.com/watch?v=epZp0sGHCwA (https://www.youtube.com/watch?v=epZp0sGHCwA)
Title: Re: The last invention.
Post by: LOCKSUIT on December 25, 2016, 12:41:55 am
One cool or terrifying thing I may have never told yous is one time in the daytime I thought of a head thing popping up at the end of my bed when I was walking around my room, and, that night I dreamt it Full-Sense! Scary!
Title: Re: The last invention.
Post by: korrelan on December 31, 2016, 01:57:36 pm
Ahhh!… everyone’s gone home and the house is finally quiet and empty after the Xmas break (bliss)… time to get my brain back in gear and see how many neurons died through alcohol poisoning… I’d wager quite a few… lol.

Thought I’d start with something easy; well something I thought was going to be easy.

Because the connectome is so large and complex and grows over time it’s been very difficult to manage neuron migration and neurogenesis.  So far I’ve been using external global rules to arrange the neurons in the cortical maps; this has started to interfere with my next set of goals.
 
The incoming sensory data streams (white matter tracts) dictate what the different areas of cortex specialise (not explicitly) in; but self organisation at the low level neuron resolution has always been a problem.

Each neuron needed an innate sense of its initial place in the cortical map.  The general idea is that once a neuron has migrated to a specific location, listened to its surrounding peers and decided what type of neuron it’s going to be… it has to be able to move into a location relative to the other neurons.  There can be any number of different types of neuron in that map layer, and no two neurons of the same type may be next to each other. They must all keep evenly spaced and remain fluid to local changes in neuron densities, etc.

The video shows neurons being introduced at the center of a rectangular area; though they can flow and move like a fluid into any shaped perimeter.

It was important to make each neuron ‘intelligent’ and have it find its own location amongst its peers, so all calculations are performed from the neuron’s ‘point of view’. The required processing can be handled by either a separate core or a totally different processor on the network to help balance the load.

There are only seven neuron types in this video; each marked by colour and a number (coz I’m colour blind) lol.

https://www.youtube.com/watch?v=S8KDT2fIOwk (https://www.youtube.com/watch?v=S8KDT2fIOwk)

The number I adjust is just the global neuron density allowed. At the end I manually add a few more neurons and you can see them reorder en masse to accommodate each other, still keeping the same spaced schema.
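The local spacing rule might be sketched like this: each neuron is repelled by nearby peers, and repelled harder by peers of its own type, so like types spread apart while density stays even. Everything here (forces, constants, the 2D layout) is an assumption for illustration, not the real rules:

```python
import math, random

def spacing_step(neurons, radius=1.0, step=0.05, type_penalty=2.0):
    # One update computed from each neuron's own 'point of view':
    # nearby peers push it away, and peers of its OWN type push twice
    # as hard, so like types spread out and the packing stays even.
    moved = []
    for i, (x, y, t) in enumerate(neurons):
        dx = dy = 0.0
        for j, (x2, y2, t2) in enumerate(neurons):
            if i == j:
                continue
            d = math.hypot(x - x2, y - y2) or 1e-9
            if d < radius:
                push = (radius - d) * (type_penalty if t == t2 else 1.0)
                dx += push * (x - x2) / d
                dy += push * (y - y2) / d
        moved.append((x + step * dx, y + step * dy, t))
    return moved

def min_same_type_gap(ns):
    return min(math.hypot(a[0] - b[0], a[1] - b[1])
               for i, a in enumerate(ns) for b in ns[i + 1:] if a[2] == b[2])

random.seed(0)
# Seven numbered types introduced near the centre, as in the video.
neurons = [(random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1), i % 7)
           for i in range(40)]

before = min_same_type_gap(neurons)
for _ in range(100):
    neurons = spacing_step(neurons)
print(before, min_same_type_gap(neurons))   # the same-type gap has grown
```

Because every force is computed per neuron from purely local information, the work shards naturally across cores or machines, matching the load-balancing point above.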

http://www.princeton.edu/main/news/archive/S39/32/02E70/ (http://www.princeton.edu/main/news/archive/S39/32/02E70/)

 :)
Title: Re: The last invention.
Post by: kei10 on December 31, 2016, 02:37:08 pm
Very interesting!

Although that baffles me, as someone that has little to zero knowledge about neural network and what stumbling blocks you're battling with -- I have a... brain-dead question.

Why would you want to store the neurons with volume and shape that occupies space in the... cortical maps?

I mean, neurons can be as simple as just a form of data storage in possibly categorized sections, which simply fit into user-defined efficient data structures, using other kinds of sorting algorithms to achieve, um, whatever you're doing, with maximum performance.
Title: Re: The last invention.
Post by: korrelan on December 31, 2016, 03:21:11 pm
Quote
Why would you want to store the neurons with volume and shape that occupies space in the... cortical maps?

Excellent question.

The extra dimensions mean something.  The human brain is laid out in a very specific manner for a reason.  Neuron type, size, location, connection, local peer types, etc. all affect how the system functions… I want true consciousness… so I have to build a true brain.

If you were to dismantle a mechanical clock and pack all the pieces in a box as close together as possible… would it still work? The human brain isn't just 80 billion neurons packed into the smallest possible uniform volume; if that schema did work then that’s how our brains would be. 

Each neuron is surrounded by other neurons (think swarm intelligence) that can directly/ indirectly affect its behaviour/ properties. Its position also determines local properties, like which other neurons it can connect to, etc… they are packed into cortical columns comprised of mini columns; cortical maps are separated by the lobes, etc… if all these structures/ properties weren't important… they wouldn't be there.

You have to consider the brain as a whole… to figure out how it works... and build a facsimile.

 :)
Title: Re: The last invention.
Post by: kei10 on December 31, 2016, 03:27:44 pm
I see, now that's something to be pondering about!

Wonderful! Best of luck of getting the schema to work!

Thanks for the progress update!  ;D
Title: Re: The last invention.
Post by: infurl on December 31, 2016, 09:05:55 pm
The architecture of the human brain isn't the only one that bestows intelligence. Bird brains are much more efficient than mammalian brains, packing three times the computational capacity into the same volume. They don't have any of the structures that we associate with high levels of intelligence in mammals, and yet they excel at tasks requiring language and reasoning.

http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(16)00042-5 (http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(16)00042-5)
Title: Re: The last invention.
Post by: keghn on December 31, 2016, 11:41:43 pm
 Birds and bats are under pressure to evolve differently than land animals. The bigger the land animal the better chance it
has of breeding. But for flying creatures it is the opposite. Smaller the animal the faster it evolves.
Title: Re: The last invention.
Post by: infurl on January 01, 2017, 12:00:23 am
Birds and bats are under pressure to evolve differently than land animals. The bigger the land animal the better chance it has of breeding. But for flying creatures it is the opposite. Smaller the animal the faster it evolves.

I'm not sure where you get those ideas from. I think maybe you are just guessing. I never heard anything about the rate of evolution being dependent on the size of the organism. It took nearly three billion years for prokaryotic cells to evolve into eukaryotic cells. That's pretty slow. On the other hand, human beings are one of the fastest evolving organisms on the planet. (The notion that the stability of our artificial environment stopped our evolution is a myth. The genetic evidence is overwhelming that we are evolving very fast.)

On the other hand, life expectancy *is* proportional to size for land animals and for flying animals. However gram for gram, flying animals live far longer than land animals. This is because land animals have evolved to breed fast to optimise the chances of survival of the species whereas flying animals have evolved to heal fast instead. e.g. If a bat's wing is torn it can heal in a matter of hours.

The evolution of brains is completely different. Birds are descended from dinosaurs, and dinosaurs and mammals emerged from whatever came before (reptiles, amphibians?) more than 500 million years ago. Bird/dinosaur brains and mammal brains have been evolving completely separately from each other all that time.
Title: Re: The last invention.
Post by: keghn on January 01, 2017, 12:26:34 am
I am not guessing. Smaller organisms evolve faster than larger creatures. Membrane wings are inferior to feathers:

https://www.youtube.com/watch?v=7TPvQdEaHHo (https://www.youtube.com/watch?v=7TPvQdEaHHo)   

https://www.youtube.com/user/megpiefaerie01/videos (https://www.youtube.com/user/megpiefaerie01/videos)


Title: Re: The last invention.
Post by: infurl on January 01, 2017, 12:29:49 am
http://www.thenakedscientists.com/articles/questions/do-smaller-organisms-evolve-faster (http://www.thenakedscientists.com/articles/questions/do-smaller-organisms-evolve-faster)

Small land animals evolve faster than large land animals and flying animals because they breed faster, not because they are small.
Title: Re: The last invention.
Post by: korrelan on January 01, 2017, 01:16:08 pm
@Kei

Quote
Wonderful! Best of luck of getting the schema to work!

Yup! This is going to be a very interesting year lol.

@infurl

Quote
The architecture of the human brain isn't the only one that bestows intelligence.

Agreed. When I first started out designing my AGI I considered/ studied all species and the various forms of intelligence that arise in complex multi-cellular, single-celled (eukaryotic) and swarm systems.  I needed to find the common denominator that all the various systems used to ‘bestow intelligence’, then… distil it, simulate it, and make it infinitely scalable.  All the work I’ve done since has been on the premise that I have discovered the ‘trick’, and so far… so good.

I’m basing my AGI on the human connectome for obvious reasons; we have the edge when it comes to intelligence. So although other animals (birds, etc.) exhibit intelligence, there is something extra the structure of our brain provides.  I often run simulations of smaller, less complex systems that I judge to be akin to birds etc., and by using/ experimenting with our connectome schema I’m slowly getting a grasp on why we are different.

Also our perception of time is very closely linked to the size of our connectome; if the AGI is to converse with humans it has to be able to ‘think’ at a similar speed/ resolution/ etc.  Nature wastes nothing; there is no point having an audio cortex that functions at ‘bat’ frequencies when talking to humans… my algorithms/ simulation must reflect/ adopt this paradigm.

I’m only able to simulate a few million neurons at the moment but this will change/ improve shortly.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on January 22, 2017, 04:06:30 am
Something you may find interesting for your work:

I've experienced sound on/in my skin/body when I feel.

I think it's because our ears hear by multiple hairs, and so does the body. Just the ears amplify and shape it.

http://www.dangerousdecibels.org/virtualexhibit/2howdowehear.html (http://www.dangerousdecibels.org/virtualexhibit/2howdowehear.html)
Title: Re: The last invention.
Post by: 8pla.net on January 22, 2017, 04:29:14 am
"there is something extra the structure of our brain provides."

And, there is something less the structure of our brain provides.

In theory, a dog's sense of smell is 40 times greater than a human's. Doesn't the brain structure of a dog, with a natural ability to identify cancer cells by smell, exceed our brain structure?
Title: Re: The last invention.
Post by: LOCKSUIT on January 22, 2017, 11:33:58 am
The below explains those in-bed nightshows after you've lain in bed for a while:

http://scienceline.org/2014/12/why-do-we-see-colors-with-our-eyes-closed/ (http://scienceline.org/2014/12/why-do-we-see-colors-with-our-eyes-closed/)

Korrelan is gonna love the ending of this article.

BTW - mine are one color - purple. :))))))))))))))) Are yours?

btw, I also then start to see green, purple, green, purple, bulge across my vision. I really want to know if yous experience this too.

Still none of my Google searches told me why I see a white dot there, or a purple dot there, or an orange tangy or colored dot there (not talking about imagination or shadows/drifters). Am I picking up neutrinos lol? I stare a lot at my computer, heheh.
Title: Re: The last invention.
Post by: LOCKSUIT on January 23, 2017, 12:27:14 am
Forgot one important thingy - the reason I sense my "touch" as "auditory" must be because it passes through the temporal lobes to "match"....4 senses then lol!? Yes, yes, I've gone through the "other senses".
Title: Re: The last invention.
Post by: LOCKSUIT on February 06, 2017, 01:07:27 pm
Korrelan, do you implement your CNN so that the center of the eye/s gets more score/points?

Our eyes, where they point to, make us see what's in the center, unless there is a scary face at the side of our view. Of course, you also have motor control over......just don't tell anyone.

Once you get accustomed to one trick, the others are easy. You just have to know how to hunt the mammal.
Title: Re: The last invention.
Post by: korrelan on February 09, 2017, 10:40:52 am
@8pla.net

Quote
Doesn't the brain structure of a dog, with a natural ability to identify cancer cells by smell, exceed our brain structure?

I don’t think it ‘exceeds’ our brain structures.  The general mammal brain adapts through plasticity to cope with the format of the body/ senses it is contained within.  General/ global rules and structures that define the ‘mammalian’ nervous system adapt the brain to best suit the animal’s body layout/ lifestyle and social needs.

Dogs have evolved with an extremely heightened sense of smell because they use smell as we use language for communication; thus more of their processing power is dedicated to driving the olfactory networks. 

Our imagination and consciousness are derived from our senses/ experience; imagine how strange a dog’s consciousness must be where smell is the main focus of its thought patterns.

@Lock

Quote
The below explains those in-bed nightshows after you've lain in bed for a while

Phosphenes do produce the flashes of light we see when our eyes are closed, but that’s not the effect I meant.  Phosphenes are a random phenomenon; if you look straight ahead and actually focus on the darkness you should see actual images that you can mentally morph. It feels like you are actually looking at the objects/ scenes; perhaps it’s just me lol.

Quote
the reason I sense my "touch" as "auditory"

Sound is just the modulation of pressure waves within the air volume; your body is quite capable of sensing certain sounds through your skin’s surface etc.

Quote
Korrelan, do you implement your CNN so that the center of the eye/s gets more score/points?

Yes!

Quote
Our eyes, where they point to, make us see what's in the center, unless there is a scary face at the side of our view.

The fovea of your eyes is geared towards saliency/ recognition whilst the wide periphery is geared more toward danger/ attention. Your wide peripheral visual area has evolved mainly to keep you safe. If something suddenly moves in your periphery it triggers a natural reflex action depending on your situation; you could move your eyes to identify the object or in certain circumstances it will even trigger the ‘fight or flight’ response.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 09, 2017, 03:45:15 pm
I always cringe when I read articles saying "eye reflexes to danger" or "conditionalized to wind in testing". It is actions """I""" do, > by the pattern recognizers, I may not look at a moving raccoon murder dude moving jungle leafs, or blink to the wind conditional test thingy test.......WAHAHAHA! O.–

If something in the "next" image changes, then that area gets more points/score.
Title: Re: The last invention.
Post by: korrelan on February 10, 2017, 11:08:13 am
Quote
I always cringe when I read articles saying "eye reflexes to danger"

I agree. The eye has no reflexes to danger.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 10, 2017, 03:01:06 pm
You have to define what "is" danger.

Pattern recognizers will link to according actions. You may not turn your eyes over there.

Just because you see some rustle in the leaves, it doesn't mean your eyes move to it.

It makes you get more "score" there. And you can see there. Without turning the eye center at it.
Title: Re: The last invention.
Post by: korrelan on April 12, 2017, 10:56:23 am
This is just a clearer culmination of three rushed posts describing my theories on human memory.

I thought I’d place it here on my project thread to save reposting on the original thread.

Memory Theory

I think that long/ medium/ short term memories/ experiences and knowledge all use exactly the same architecture; what differentiates long/ short memories/ knowledge is that when memories are first created they are weak (short term) and as they become consolidated over time they form stronger (long term) memories/ knowledge.

Short term memories are initially formed from the current state of the ‘global thought pattern’ (GTP), or pattern of activation within the synapses/ neurons at any given moment. Associations are formed through weak synaptic links carved by our current thought pattern.  The weak memory engram exists within the structure of the strong long term learning.

Long term memories are stored in the synapse patterns that connect the groups of neurons.  Remembered items are composed of sparsely distributed neuron groups that represent a particular facet of the item.

Our consciousness is the pattern of activation running within this memory/ knowledge structure.
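A toy sketch of that single-architecture idea, with made-up weights and threshold: short term and long term memories share one structure and differ only in link strength:

```python
class Engram:
    # A memory is just weighted links between neuron groups; 'short term'
    # and 'long term' differ only in weight (all numbers illustrative).
    def __init__(self):
        self.links = {}                              # (group_a, group_b) -> weight

    def experience(self, active_groups):
        # The current GTP carves weak links between co-active groups;
        # re-experiencing the same pattern consolidates them.
        for i, a in enumerate(active_groups):
            for b in active_groups[i + 1:]:
                self.links[(a, b)] = self.links.get((a, b), 0.0) + 0.1

    def is_long_term(self, a, b, threshold=0.5):
        return self.links.get((a, b), 0.0) >= threshold

mem = Engram()
mem.experience(["water", "wet"])                     # first exposure: weak link
print(mem.is_long_term("water", "wet"))              # False -- short term only
for _ in range(10):
    mem.experience(["water", "wet"])                 # repetition consolidates
print(mem.is_long_term("water", "wet"))              # True -- now long term
```

Nothing moves between stores as the memory consolidates; the same link simply strengthens past a threshold.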

https://www.youtube.com/watch?v=C6tRtkyOAGI (https://www.youtube.com/watch?v=C6tRtkyOAGI)

I’ve shown this vid before but it’s a good example of how complex the GTP is. Each pixel represents a neuron in the AGI’s frontal cortex.  Linked to the back of each pixel are connections (white matter tracts) to other areas of the overall cortex. The machine is sleeping and what you are looking at are patterns formed from memories/ experiences blending together; each memory is triggering other memories; the pattern constantly flows from one state of ‘thought’ to the next. At 6 seconds into the vid a blue (high activity) region appears (lower middle)… this was me talking. The machine was still listening even though it was sleeping… that activity influenced the whole pattern as it tried to make sense of the sensory input; a ‘thought’ pattern is very fluid and complex.

Forming New Memories

We build our knowledge representations in a hierarchical manner; new knowledge is based/ built on our understanding of old knowledge. 

Our understanding of our current experience is created from the GTP which is comprised of the parts of memories/ knowledge relevant to this moment of time… it’s the state of this pattern that new memories are formed from.  When we initially form a memory we are linking together existing knowledge/ understanding/ experiences with weak synaptic connections/ associations.

If you were to learn a new property of water, for example, your brain doesn’t have to update all your individual knowledge/ memories regarding water and its uses/ properties… it simply has to include the new property into the GTP representing water whenever you think of it.
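As a toy reading of that, with made-up facet sets: if a concept is a set of facets and the GTP is the union of the active concepts' facets, then a new property is a single update:

```python
# A concept is a set of facets; the GTP of a moment is the union of the
# facets of every currently active concept (sets here are invented).
knowledge = {
    "water": {"liquid", "transparent", "drinkable"},
    "tea":   {"hot", "infusion"},
}

def gtp(active_concepts):
    pattern = set()
    for concept in active_concepts:
        pattern |= knowledge[concept]
    return pattern

# Learning a new property of water touches exactly ONE entry...
knowledge["water"].add("conducts when impure")

# ...yet every future GTP that includes 'water' carries the new facet,
# with no other memory rewritten.
print("conducts when impure" in gtp(["water", "tea"]))   # True
```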

The brain tends to record only novel experiences; a new memory is formed from novel differences the brain has not encountered before.  This happens at the synaptic level and so is very difficult to relate to the consciousness level.

So any new memory is the brain recording our current state of consciousness; to understand this moment in time we are using hundreds of ‘long term’ memories and knowledge.  To remember a list of words or numbers for example, you have to be able to recognise and understand the items; you can’t remember what you don’t recognise/ understand.

The brain doesn’t record the incoming experience… it records its current understanding of it through its previous experiences/ knowledge.

Look at the letter ‘C’; that has just included a pattern into your GTP that represents (to you) that letter… now look at ‘A’; now that pattern is included. The two separate patterns have created a merged pattern that represents ‘CA’ and already your brain/ cortex is firing other patterns that relate to ‘CA’. Now look at ‘T’… bingo. The combined pattern of the three letters was instantly recognised by areas of your cortex that then ‘fired’ the pattern for ‘CAT’ back into your GTP, in turn firing patterns that represent the general concept of ‘CAT’. At the same time there were patterns running that covered this topic, your body posture, how comfortable you feel, etc. Reading this paragraph and your thoughts/ opinions on it have altered your GTP; you don’t need to remember the letters ‘CAT’ or even the basic method of explanation I’ve used; they are already well ingrained… it’s the new/ different bits of the pattern that get etched into your synapses.

If you look at this sum 5+5= you don’t have to mentally count or add the numbers on your fingers; the visual pattern of seeing the sum fires a globally understood pattern that represents 10.

Memory Structure

Different brain structures contribute certain facets to the memory.  The limbic network adds a very strong emotional facet to a memory’s overall pattern; other areas add temporal and episodic order to our memories.

This might explain why a scientist/ neuroscientist might think different memories are stored in separate brain regions. Different brain regions supply different facets of a memory.  The hippocampus (could be revised) for example provides an index/ temporal date stamp to a memory as it’s being stored (not read); if they were viewing fMRI results on memory consolidation they would be measuring the difference between existing knowledge and new learning. The new learning would require a recent time stamp that would activate the hippocampus (episodic), and blood would flow to this region, highlighting it.

This is a diagram showing a rough map of where the various categories within the circle were detected upon the human cortex surface.

(http://i.imgur.com/vJLHRKs.jpg)

This is a short video showing the same/ similar category organisation within my AGI’s cortex.  As usual the forty test patterns (phonemes, images, etc) are shown on the right; the confidence in recognition (height of the bar) is shown on the bottom left. Notice the regular modulated input pattern below the pattern input on the right. The cortex section has very high confidence in its recognition of the patterns until I click ‘A’ in the lower right to turn this regular injected pattern off. Then the cortex section’s confidence drops/ stops… I have removed a facet of the overall pattern that the system was using to recognise the patterns. This is akin to disconnecting the hippocampus or limbic system… it makes a big difference.

https://www.youtube.com/watch?v=AGFk_QddPqo (https://www.youtube.com/watch?v=AGFk_QddPqo)

A memory is never moved around inside the brain; it’s never moved from short to long term storage.  ‘Memories’ are never searched either (searching is a serial schema); because of its parallel architecture the brain has no need to search. The ‘thought pattern’ flows from one pattern of depolarised neurons to the next pattern through the axons/ synapses/ connectome; it’s the structure that stores the long term memories.

Accessing Memories

We access our memories in a holographic manner; any part of a memory will trigger the rest.
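This "any fragment triggers the whole" behaviour is exactly what a Hopfield-style associative memory shows. Here's a minimal sketch of that standard textbook model (for illustration only; it is not korrelan's connectome code):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning: the memories live in the weights."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Complete a partial/ corrupted cue; a fragment pulls in the whole memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s
```

Feed it a memory with a quarter of the bits flipped and the network settles back onto the full stored pattern; no search is ever performed, the structure itself does the retrieval.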

When we use a piece of knowledge or a skill we also have access to when and where we learned it; sometimes it will affect our use of the knowledge (bedside manner for a doctor); how the use of the memory/ knowledge affects us is governed by the global thought pattern and what the main situation/ topic/ goal/ task is.  It’s our focus/ attention (also part of the global pattern) that dictates the effect of the memory/ knowledge on our current consciousness and which sections of the memory/ knowledge are relevant to the current task.

When a particular pattern is recognised the resulting output pattern is added to the overall GTP; this changes/ morphs/ phases the GTP which is in turn recognised by other areas… repeat.

A young child has more synapses than you or I but could never grasp the concepts of this conversation because they have no prior experience/ knowledge to build their memories on/ from.

Memory Problems

If any of the original facets that the memory was comprised of are compromised in some way it can make retrieval difficult from that aspect (time, location, etc). We all use the tactic of thinking of related/ similar memories when trying to recall a weak memory; you’re just trying to produce a global pattern conducive to triggering the required memory, trying to fill in the missing blanks in the pattern that will trigger retrieval.

To remember what I had for breakfast I have to use strong/ long term memories. To understand what the items were, what they were called, even the concept of ‘breakfast’ requires a lot of strong/ long term understanding/ knowledge/ memories.  The weak/ short term bit of the memory is what links all the various items together along with a location/ timestamp/ index/ etc that I would recall as ‘earlier today’.  For the unfortunate people who have difficulty retrieving today’s (short term) memories I would wager they have a problem with a brain region responsible for temporal tagging/ indexing the memory as ‘today/ recent’.

My AGI is based on the mammalian connectome and exhibits both/ all these memory traits; this is why I believe I am correct in my assumptions.

 :)
Title: Re: The last invention.
Post by: kei10 on April 12, 2017, 11:45:28 am
This is absolutely mesmerizing to read, thank you for sharing!
Title: Re: The last invention.
Post by: korrelan on April 12, 2017, 03:08:23 pm
Ageing Memory

What changes as we get older?  Personal thoughts and observations from my project.

One of the main mechanisms our brain uses to consolidate learning is the strengthening of the synaptic connections between neurons.  A set of neurons with strong synaptic connections can impose a strong influence on the global thought pattern (GTP).  The more we experience a set of facets that a moment in time is constructed from; the easier we can both recognise and infer similar situations/ experiences.
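In code terms this consolidation mechanism is roughly Hebbian; here's a toy version of the rule (constants and names are invented for illustration, not biological values):

```python
def consolidate(weight, pre_active, post_active, lr=0.1, decay=0.01):
    """Hebbian 'use it or lose it': co-activation strengthens a synapse
    (saturating towards 1.0), disuse lets it slowly decay towards 0."""
    if pre_active and post_active:
        weight += lr * (1.0 - weight)   # strengthening saturates
    else:
        weight -= decay * weight        # slow passive decay
    return weight
```

Repeat the co-activation often enough and the weight pins itself near 1.0, after which it can dominate the surrounding pattern; leave it unused long enough and it fades back towards nothing.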

So an older person is ‘wiser’ because they can quickly recognise the similar well experienced traits/ situations or knowledge and infer their learning into a new problem space… been there… done that.

Younger minds ‘think’ more flexibly because there are no really strong overriding patterns that have been forged through long term exposure/ use; thus more patterns are fired and each has to vie for attention.  Loads of ideas… not much sense lol.

Both the young and old versions of the human memory schema have their advantages… but obviously the ideal combination is wisdom with flexibility… so how can we achieve this?

The Problem

The main problem is that the old adage ‘use it or lose it’ also applies to your neural skill set.

We are the sum of our experiences and knowledge.  Everything we learn and experience affects ‘who’ we are and the ‘way’ we think.  All the skills we acquire through our lives and the problems we solve are mapped into our connectome and ALL this information is accessed and brought to bear when required.  The fact that I can repair clocks/ engines and the techniques I have used are also used when I’m solving neuroscience related problems.  Every good theory/ thought I have is comprised from all the facets of the many varied topics mixed together in my brain.  When I consider a possible relationship between disparate subjects I’m actually applying a general rule that is comprised of all the different types of relationships I have ever encountered.

As we get older our connectome fine tunes more and more to become expert in our chosen vocational field. We can recognise the common relationship problems a younger person is experiencing because we have seen it so many times before… the price for this wisdom is loss of plasticity/ flexibility; just the process of becoming proficient at life harms our imaginative thinking powers.

Besides learning new skills/ topics it’s very important from a mental perspective to exercise previously learned skills or knowledge frequently.  The general purpose rules we constantly apply have been built up through a hierarchical learning process and depend on all the various facets of the skills and knowledge that were present when they were originally consolidated. If enough of the underlying skills/ knowledge is lost/ forgotten then although the general purpose rule still exists it can’t be applied as flexibly as before.

This is where an elderly person can lose mental flexibility; they are wise enough to know the correct answer through experience; but because the original skill set has been lost they lack a deep understanding of how they arrived at the answer… and without this information they can’t consider new/ fresh avenues of thought.

The Solution

Don’t just learn new topics/ skills… frequently refresh old learning/ knowledge and skills.

I’m off now to break my neck on my son’s skateboard lol.

 :)
Title: Re: The last invention.
Post by: korrelan on May 13, 2017, 12:39:51 pm
Time for a project update; I’ve been busy doing other projects/ work lately but I have still managed to find time to bring my AGI along. First a quick Re-cap…

Connectome

I have already perfected a neural network/ sheet that is capable of learning any type of sensory or internal pattern schema.  It’s a six layer equivalent to the human cortex and is based on/ uses biological principles.  Complex sensory patterns are decoded and converted/ reduced to a sparse output that fully represents the detail of the original input; only simplified, with the detail re-coded through a temporal shift.  Long to short term memory and prediction etc are all tested and inherent in the design.

Consciousness Theory

To achieve machine consciousness the basic idea is to get the connectome to learn to recognise its own internal ‘thought’ processes.  So as sections of cortex recognise external sensory streams, other sections will be learning the outputs/ patterns (ie frontal cortex) being produced by the input sections.  The outputs from these sections go back into the connectome and influence the input sections… repeat.  This allows the system to settle into a stable internal global minima of pattern activity that represents the input streams, what it ‘thinks’ about the streams, how it ‘thought’ about the streams and what should happen next etc.

It’s a complex feedback loop that allows the AGI to both recognise external sensory streams and also recognise how its own internal ‘thought’ processes achieved the recognition in the same terms as it’s learning from external experiences. I envisage that eventually as it learns our language/ world physics/ etc it will ‘think and understand’ in these learned terms… as we do.
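The shape of that loop can be sketched in a few lines (a gross simplification of the real system; the matrices, the `tanh` nonlinearity and the blending factor are my placeholders, not korrelan's design):

```python
import numpy as np

def settle(W_in, W_fb, sensory, steps=50, alpha=0.5):
    """Feedback loop: sensory input drives a cortex output, which is fed
    back and blended with the input until the global pattern stabilises
    into a fixed 'thought' state (a global minimum of pattern activity)."""
    state = np.tanh(W_in @ sensory)          # first pass: input only
    for _ in range(steps):
        feedback = np.tanh(W_fb @ state)     # cortex recognising its own output
        state = np.tanh(W_in @ sensory + alpha * feedback)  # blend and repeat
    return state
```

With modest feedback weights the loop contracts: run it for 200 steps or 201 and you land on the same settled pattern, which is the "stable internal global minima" described above.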

A Working Model

Now this is where things get extremely complex and I must admit it threw me a curve ball and slowed my overall progress down; until I wrote the tools to cope with/ understand exactly what was happening in the connectome/ system.

https://www.youtube.com/watch?v=ERI92iTzVbY (https://www.youtube.com/watch?v=ERI92iTzVbY)

This vid shows the top of a large neural tube; it’s a starting connectome and will grow in both size and complexity as the AGI soaks up information and experiences. The right side represents the sensory inputs and the left is the precursor to what will develop into the frontal cortex.

I’ve trained the model to recognise forty audio phonemes and you can see its confidence in the height of the bars (lower left) at 0:20 into the vid. I then turn off the phoneme patterns at 0:40 and inject a random pattern to show the connectome has no recognition. At 0:50 the phonemes are turned back on and recognition re-commences. The system is stable; on the right you can see the input patterns at the top of the window and the sparse frontal cortex activation patterns just below them.  I then add the random element/ pattern to the phoneme pattern.

At 1:09 it reaches the point where the feedback from the frontal cortex starts to influence the input to the sensory areas and a feedback cascade begins… this is what I’m after.

The frontal cortex has learned the outputs from the sensory areas and begins adding its own patterns to the mix, which in turn begins to influence the sensory input areas.

I have to be careful of my choice of words but at this point the connectome has become ‘aware’ of its own internal processes expressed in terms it’s learned from the external world.

I then turn off the external sensory stimulus and the global ‘thought’ pattern slowly fades because there are no circadian rhythms being injected to support it.

One of the problems I’m facing besides the sheer complexity of the patterns is that once the feedback ‘spark’ begins that connectome’s existence in time is forged; the complex global pattern relies on millions of pattern facets timed to the millisecond… once it’s stopped it can’t be re-created exactly the same.

So a bit more done…

 :)
Title: Re: The last invention.
Post by: Art on May 14, 2017, 12:46:12 pm
It would be quite interesting to be able to record or see if it could be able to think about that which it has experienced or learned, which in a sense would be something the equivalent of dreaming (or daydreaming).

It was interesting to see the cascading episode as it began growing.

Is it possible to note the areas where certain items or "things" are being stored or recognized by the system and are they the same every time?


Quite a nice experiment! O0

Title: Re: The last invention.
Post by: LOCKSUIT on May 14, 2017, 03:17:10 pm
"One of the problems I’m facing besides the shear complexity of the patterns is that once the feed back ‘spark’ begins that connectomes existence in time is forged, the complex global pattern relies on millions of pattern facets timed to the millisecond… once its stopped it can’t be re-created exactly the same. "

If you mean, the result of the hierarchy is lost after it's turned back on, then try either giving all senses and links a strengthening and weakening process so they last plus erase and/or a threshold to self-ignite and fire so they not only take action on their own but also will work together at the right times.
Title: Re: The last invention.
Post by: korrelan on May 15, 2017, 02:21:16 pm
@Art

Quote
It would be quite interesting to be able to record or see if it could be able to think about that which it has experienced or learned, which in a sense would be something the equivalent of dreaming (or daydreaming).

It definitely does re-use the learned facets of its knowledge/ experiences.  That’s where the feedback’s coming from. The ‘frontal cortex’ learns the order/ sequences of the sensory cortex’s outputs, and attention areas form that learn the similarities in the input/ feedback streams. So yes… it does dream/ daydream about its experiences.

Quote
Is it possible to note the areas where certain items or "things" are being stored or recognized by the system and are they the same every time?

I can highlight and examine any area of cortex to see what has been learned in that area but initially the cortex areas are very fluid; they tend to move as more information is soaked up because the cortex is self organising. They will eventually stabilize and become fixed as the synapses strengthen through experience and lock them into position.

@Lock

Quote
If you mean, the result of the hierarchy is lost after it's turned back on, then try either giving all senses and links a strengthening and weakening process so they last plus erase and/or a threshold to self-ignite and fire so they not only take action on their own but also will work together at the right times.

I can save the connectome at any time recording all of its current states and then re-commence from where it left off… the problem is that once a global thought pattern has been allowed to fade/ die it can never be restarted exactly the same.  The momentum/ complexity/ inertia of the pattern has to be sustained whilst the system is running. 

I had a problem with ‘epilepsy’ in the connectome; very local cortex feedback triggered by a certain frequency of visual input would start a very fast local feedback cascade that would cease/ crash the global pattern… I had to re-build the pattern from scratch.

Hopefully this will be less of a problem once the connectome ages and becomes robust to sudden changes in the various streams/ patterns.

I intend to keep it running 24/7 anyway.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 15, 2017, 02:32:19 pm
If a GTP can fade/die, and never restarted the same, well, how is it created anyhow? For example is it by neuro-muscles that stay existing OR like RAM i.e."tourists" that are new folks and so when turned off they didn't like you know, save.

> Therefore, shouldn't a GTP be strengthened into neurons and not water piped around like RAM? Knowledge is saved. It's simple...
Title: Re: The last invention.
Post by: WriterOfMinds on May 15, 2017, 03:26:03 pm
Re: your most recent post -- nice animation.  I was interested in how exactly the patterns coming out of the cortex influence the input once feedback starts.  How do you blend the sparse patterns from the cortex with the noisier or more complex input data?  And what's the overall effect -- does it intensify the input patterns?

I don't have much neural network background, so apologies if I am asking dumb questions.
Title: Re: The last invention.
Post by: keghn on May 15, 2017, 04:36:43 pm
 Is the sequence memory clocked by a daisy chain of nerves, an up counter/down counter, or random access in a sequence?
Title: Re: The last invention.
Post by: korrelan on May 16, 2017, 09:52:53 am
@Lock

Quote
If a GTP can fade/die, and never restarted the same, well, how is it created anyhow?

Very gradually lol.  The connectome starts with a random configuration.  It then alters itself over time and learns/ adapts to its environment/ inputs.

This is the GTP from a simple connectome seed. Every point means something; a whole memory and all its complexities can be encoded/ recalled by just one of these synaptic pulses.  Information is encoded both spatially and temporally; so if one facet is out of phase by a millisecond, or missing, it means something totally different to the system.  It’s this constant, ever-changing pattern that can’t be reproduced, because it has been built up from experiences over time by a constantly changing connectome that has been learning as it goes. I can stop and save it… and continue; but if it ever fades out through lack of neural stimulation/ feedback… it’s gone.
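The phase sensitivity is easy to illustrate: treat a pattern as a set of (neuron, time) facets, and the 'meaning' changes if any single facet slips by a millisecond. This is my own toy encoding, not the system's actual one:

```python
def pattern_key(spikes, resolution_ms=1):
    """Encode a set of (neuron_id, time_ms) spikes as a spatio-temporal key.
    Shift any spike by one millisecond and the key, i.e. the 'meaning'
    of the pattern as a whole, becomes something entirely different."""
    return frozenset((n, round(t / resolution_ms)) for n, t in spikes)
```

Two patterns identical except for one facet shifted by 1 ms hash to different keys, while an exact repeat reproduces the same key.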

The pattern is the program running on the connectome/ neuromorphic processor.

https://www.youtube.com/watch?v=kgpQ956U-x8 (https://www.youtube.com/watch?v=kgpQ956U-x8)

@WoM

It’s a hybrid schema designed from scratch using my own interpretation of a neuron, synapse, axon, etc. It’s sensitive to both patterns and pattern facet frequencies.  The design is comprised of the best bits of the many different common types of artificial neural net (convolutional, pooling, spiking, liquid, reservoir, recurrent, etc) with other mechanisms that mimic biological principles to bring them together into a cohesive system.

Quote
How do you blend the sparse patterns from the cortex with the noisier or more complex input data?

The cortex sheet is self organising and learns to break the complex sensory stream down into regular, sparse recognised facets; which exit through the equivalent of pyramidal neurons into the connectome.  So the complex audio sensory data for example is handled by the audio cortex and the rest of the connectome only receives the sparse data re-encoded into the connectome’s native schema. Because the cortex sheet has several inputs it can also learn to combine data streams coming from different cortex regions.  Initially the audio cortex just learns the incoming audio sensory stream but once other cortex regions learn to recognise its sparse outputs their interpretation of the data is bounced back.  The audio cortex then starts to learn both the audio stream and the streams coming from other cortex areas.  Some of these streams are predictive or represent the next ‘item’ that should occur based on past experience.
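The "complex in, sparse out" step is in the spirit of a k-winners-take-all stage; a minimal sketch of just that flavour (the real cortex sheet is self organising and far richer than this):

```python
import numpy as np

def sparse_encode(dense, k=5):
    """Reduce a dense sensory vector to a sparse code: only the k most
    active units survive into the connectome, the rest are silenced."""
    out = np.zeros_like(dense)
    winners = np.argsort(dense)[-k:]  # indices of the k strongest responses
    out[winners] = dense[winners]
    return out
```

A noisy 1000-element sensory vector collapses to a handful of active facets, yet those facets still identify which input produced them.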

In engineering terms I suppose the input layer of the cortex sheet could be viewed as a set of self adapting/ self wiring logic gates that are sensitive to both logic and temporal phase/ signals. So with excitation/ inhibition/ neurotransmitters/ etc it adapts over time to the incoming signals and learns to filter and re-code recognised elements.

Base data representation within the system is not at the level of say ‘cat’ or even whole English words; it’s at a much lower abstract representation.

@Keghn

Quote
Is the sequences memory clocked by a daisy chain of nerves or a up counter/down counter, or random access in a sequence?

All of the above… it adapts to use the best/ most efficient encoding schema for its problem space.  Hard to explain… it learns to encode episodic memories just because there is episodic information encoded in the input streams.  The streams have a temporal element, a fourth dimension; the system encodes time just the same as it encodes everything else… so areas of cortex learn that information/ memories come in sequences.

I’ve designed it to be able to adapt and learn anything and everything.

Analogy: Think of all the wonderful things modern computers can do with just 0s and 1s.  A modern computer just runs at one frequency; but imagine if every frequency band could run a different program through the same processor logic; parallelism by frequency band separation, not architecture.  All the programs can interact at every level; they all share the same logic/ holographic memory which in turn can be modified by any program… this is one of the base ideas of my design, along with self organisation/ learning etc.
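The frequency-band analogy can be demonstrated directly: several independent "programs" share one channel and are recovered by band. A toy example, with pure tones standing in for programs (my illustration, not part of the actual design):

```python
import numpy as np

def mix(programs, fs=1000, duration=1.0):
    """Run several 'programs' (here just tones, one frequency each)
    through a single shared channel by band separation."""
    t = np.arange(int(fs * duration)) / fs
    return t, sum(np.sin(2 * np.pi * f * t) for f in programs)

def bands_present(signal, fs=1000):
    """Recover which frequency bands are active on the shared channel."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    active = freqs[spectrum > len(signal) / 4]  # strong peaks only
    return {int(round(f)) for f in active}
```

Three tones mixed onto one wire come back out as three cleanly separated bands; each "program" co-exists with the others in the same medium.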

It’s still a work in progress.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 16, 2017, 02:23:09 pm
But think hard korrelan, you're saying your program is literally the GTP, and it vanishes !

All we learn in life MUST be strengthened, and stored forever.

I know you say it forms from a whole lot of stuff, and you're probably sad you can't find how to save it.

Keep this in mind. We store senses sequentially. Some become strong enough to not shrink.
Title: Re: The last invention.
Post by: korrelan on May 16, 2017, 05:20:16 pm
@Lock

Analogy: Imagine you own a business that delivers top secret correspondence/ letters.  In your office you have a wall full of reinforced pigeon holes with combination locks; one for each of your clients.  You have ten members of staff who have worked for you since the business began; their daily routine is to remove letters from the IN bin and place them into the correct client’s pigeon hole ready for delivery. Over time each member of staff has claimed certain pigeon holes for each of their clients, and for security reasons only they know the combination to the lock; and there are no names on the pigeon holes.  No employee is allowed to handle anyone else’s correspondence but the system works perfectly.  Every day hundreds of letters arrive and are allocated/ sorted by your team… it’s a well oiled machine… everyone knows what they are doing.

They all die… (Sh*t happens lol) and you employ a new team of ten.

How does the new team take over seamlessly from the old team? All the information is still there… the letters are still in their respective pigeon holes… there is no loss of actual physical data… it’s the mechanism/ system that stored the data that has gone.

Information is stored in the ‘physical’ connectome structure and is very robust so long as it’s accessed fairly regularly.  The GTP that placed/ sequenced/ encoded the information into the connectome is the only one that can access it.

The only reason the GTP ever fails/ stalls is through design errors made by me.  It will continue indefinitely once I have all the bugs ironed out… in the meantime, whenever it does fail… I have to rebuild/ restart it from scratch.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 16, 2017, 08:15:40 pm
So, this is your business, and it's called GTP.

...

Quote
How does the new team take over seamlessly from the old team? All the information is still there… the letters are still in their respective pigeon holes… there is no loss of actual physical data… it’s the mechanism/ system that stored the data that has gone.

If all employees die, and the new team can't do the same, then there IS a loss of information - the memories in the deceased employees's brains. SAVE THEM FOR LORDS SAKE. "SAVE"

Isn't that like saying "my circuit board or gear disappeared"? Don't let your AI circuit/metal mechanism vanish! That's the most simplest thing ever...

Save memories. And strengthen them to resist erasing.

Again, save save save. If you need your employees, save em before shutdown time.

I know this is a save problem. I should've realized that as soon as you said "I lose the GTP on shutdown".
Title: Re: The last invention.
Post by: keghn on May 16, 2017, 08:57:55 pm
 I can envision a spiked NN seeing a picture that is 1000 x 1000 pixels… by moving it into the first layer of a Spiked Neural Network
with a depth of 5000 layers. Very deep.

 In the next step, this image is moved into the second layer, and in the third step to the third layer and so on. Unchanged.
 Each movement would be pixel to pixel. Or a daisy chain of spiked neurons.
 While an image is on a layer there is a dedicated 1000 x 1000 Spiked NN to observe what is there. And then these observing
Spiked NNs would talk to each other to do more magik :)
Title: Re: The last invention.
Post by: WriterOfMinds on May 17, 2017, 02:05:55 am
If the goal here is something that strongly resembles the human brain, the impermanence of the learned structures doesn't necessarily strike me as a flaw.  Even long-term memories can fade if they aren't reinforced (either by recall or by experiencing the stimulus again).  Given the massive amounts of sensory data we take in, I'm sure forgetting is actually an important mechanism -- if you stop needing a piece of learned knowledge, it may be good for your brain to discard it, thereby freeing up wetware to learn something more important to you.

And of course, if a biological human brain is "turned off" at any point, an irreversible decay process begins.  Perfect recall is a super-human attribute, and korrelan isn't aiming for that yet, perhaps.
Title: Re: The last invention.
Post by: keghn on May 17, 2017, 02:47:53 am
 I have a complete AGI theory that is not based on neural networks. I am working on an AGI theory that does use NNs and
it is close to completion. I am not in a hurry to finish the AGI neural connectionist theory. But I find neural networks
very interesting, and studying them has made my complete theory even better.

rough synopsis. AGI basic structure. Jeff Hawkins influenced: 

https://groups.google.com/forum/#!topic/artificial-general-intelligence/UVUZ93Zep6Y



https://groups.google.com/forum/#!forum/artificial-general-intelligence



Title: Re: The last invention.
Post by: korrelan on May 17, 2017, 10:40:40 am
@WoM… Well explained.

I seem to be having a problem explaining the system; some people just do not understand the schema… I need to pin the description down so I’ll have another go.

The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there is a set of rules that simulates biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons, simulated electro-chemical gates regulate synaptic junctions, and simulated dopamine and other compounds are released that regulate the whole system/ GTP.

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.

A Symbiotic Relationship

Within the system the GTP defines the connectome, and the connectome guides the GTP. This is a top down, bottom up schema; they both rely on each other.

Both the connectome and the GTP can be easily saved and re-started at any time.
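Saving both halves together is straightforward in principle; something like the following (illustrative only, with trivially small placeholder state; the real connectome and GTP are vastly larger):

```python
import pickle

def snapshot(connectome, gtp, path):
    """Freeze both halves of the system in one file: the physical wiring
    (connectome) and the live activation state (GTP) that runs on it."""
    with open(path, "wb") as f:
        pickle.dump({"connectome": connectome, "gtp": gtp}, f)

def restore(path):
    """Reload both halves so the run can continue exactly where it left off."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["connectome"], state["gtp"]
```

The key point in the text stands: a snapshot only helps while the pattern is still alive; once the GTP has been allowed to fade, no saved wiring can re-ignite the identical pattern.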

The GTP is sustained by the physical structure/ attributes of the connectome.  As learning takes place the connectome is altered and this can result in a gradual fade or loss of resolution within the GTP.  It should never happen but because I’m still designing the system on occasion it does.

Because the GTP is so complex and morphs/ phases through time; any incorrect change to the connectome can radically alter the overall pattern.  If the GTP is out of phase with the physical memories it laid down in the connectome then it cannot access them, and whole blocks of memories and experience become irretrievable.

The connectome is plastic; so the irretrievable memories would eventually fade and the GTP would re-use the physical cortex space but… information is learned in a hierarchical manner.  So there is nothing to base new learning on if the original memories can’t be accessed… it’s like the game ‘Jenga’; knock the bottom blocks out and the whole tower comes falling down.

Once a GTP has become corrupt it then corrupts the connectome… or sometimes the GTP can fade out… then there is no point saving it… I have to start teaching it from scratch.  If you're wondering why I don’t just reload an earlier saved version of the connectome and GTP; I would need to know exactly when the corruption occurred… it could have been something the AGI heard or saw… or indeed a coding error within the simulation.  I do have several known good saved 'infant' AGI’s I can restart from and luckily the system learns extremely quickly… but it’s still a pain in the a**.

No one knows for sure how the human brain functions.  This is my attempt at recreating the mechanisms that produce human intelligence, awareness and consciousness.  I’m working in the dark; reverse engineering a lump of fatty tissue into an understandable, usable schema. I can’t reproduce every subtle facet of the problem space because my life just isn’t going to be long enough lol.  I have to take short cuts; accelerated learning for example… I can’t wait ten years for each connectome/ AGI I test to mature at human rates of development.  Our connectome develops slowly and this has a huge impact on stability… a lot of the GTP fading problems are caused because I’m forcing the system to mature too quickly… did you understand and recognise the alphabet at 1 hour old?

What I'm building is an ‘Alien’ intelligence based on the Human design.  If I create an intelligent, conscious machine that can learn anything… it can then learn to be ‘Human’ like...  and if it’s at least as intelligent as me… it can carry on where I leave off.

End Scene: Cue spooky music, fog & lightning. Dr. Frankenstein exits stage left (laughing maniacally).

@Keghn

I’m reading your provided links now… this has evolved since I last saw it.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on May 17, 2017, 03:12:27 pm
How do you know your AI is learning? What has it achieved? Aren't you just playing with electricity packets and relay reactions stuffz? It not doin anyting....we also already have decent pattern recognizers and hierarchies...

I'm really confused at your master plan.

My AI's first achievements will be learning to crawl the house, and much more.
Title: Re: The last invention.
Post by: keghn on May 17, 2017, 03:54:53 pm
 Well, my method of unsupervised learning is to have a bit stream coming into the brain. It could be a video stream.
 The observation neural network looks at the stream coming in. It could be CNN style. It would cut the image up
into squares. For now, let us say it is looking for a vertical edge. A box with an edge is piped deep down into the brain
and scanned across a bunch of little spiked NNs. The one that gives the best response is piped back up and goes deeper into the
observation neural network.

 An object is made up of sub-features of lines, like a ball or a person in a wheelchair.
 Sub-features are vertical lines, horizontal lines, solid colors of various types, and corners of various orientations.
 Each sub-feature is paired with a weight so that you can transform one object into another for comparative clustering. 

 Clustering with Scikit with GIFs: 
https://dashee87.github.io/data%20science/general/Clustering-with-Scikit-with-GIFs/
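For what it's worth, the first part of that description, cutting the image into squares and scoring each square against a vertical-edge feature, can be sketched as a toy (my own illustration, not keghn's actual code; the function names and the simple difference filter are assumptions):

```python
# Toy sketch: cut an image into squares, score each square with a
# vertical-edge filter, and let the best-responding patch "win".

def vertical_edge_score(patch):
    """Sum of |right - left| pixel differences: high for vertical edges."""
    return sum(abs(row[i + 1] - row[i]) for row in patch
               for i in range(len(row) - 1))

def best_patch(image, size=2):
    """Cut the image into size x size squares; return the top-scoring one."""
    scores = {}
    for r in range(0, len(image), size):
        for c in range(0, len(image[0]), size):
            patch = [row[c:c + size] for row in image[r:r + size]]
            scores[(r, c)] = vertical_edge_score(patch)
    return max(scores, key=scores.get)

# 4x4 image with a bright vertical stripe in column 1.
image = [[0, 9, 0, 0],
         [0, 9, 0, 0],
         [0, 9, 0, 0],
         [0, 9, 0, 0]]

print(best_patch(image))  # (0, 0): a patch containing the edge
```

A real version would use learned filters and many feature types, but the cut-score-select loop is the same shape.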
Title: Re: The last invention.
Post by: ivan.moony on June 24, 2017, 02:41:05 pm
I guess the neural network approach is what is called the bottom-up approach. It is about simulating neurons, then observing what they can do when they are grouped through multiple levels into larger wholes. It is known that artificial neurons can be successfully used for a kind of fuzzy recognition, but I'm also interested in seeing how neurons could be used for making decisions and the other processes that humans have at their disposal.

Are there any concrete plans for how the complex decision process can be pursued by neurons, or do we yet have to isolate the decision phenomenon from the other processes that are happening in a neural network? I'm sure (if a NN resembles a natural brain) that the decision process is hidden somewhere in there.

I'm also wondering about other mind processes that may be hidden inside neural networks that are not observed in the present, but will be noticed in the future.
Title: Re: The last invention.
Post by: korrelan on June 24, 2017, 04:14:27 pm
If CPU designers and electronics engineers were to wire transistors together using the same schema as a typical feed-forward neural network simulation… nothing would work.  Although the transistor’s design is important, it’s the design of the whole circuit with supporting components that achieves the desired results.

The human brain isn't just a collection of neurons wired together in a feed-forward schema… it also has a complex wiring diagram/ connectome which has to be understood/ simulated to achieve any desired results.

(http://i.imgur.com/FwbnGC1.jpg)

This is a very simplified representation of a connectome in my simulations.  The circle of black dots represents the cortex, comprising distinct areas of neuron sheet tuned to recognise distinct parts of the global thought pattern (GTP). Each section specialises through learning and experience and pumps the sparse results of its ‘calculations’ back into the GTP.  The lines linking the cortex areas represent the long range axons that link the various parts of the brain together.  There are obviously millions of these connecting the various areas (not shown); the different colours represent sub-facets of the GTP, smaller patterns that represent separate parts/ recognised sensory streams/ ‘thoughts’, etc.  The sub-facets are generated by the cortex; sensory streams never enter the GTP, only the results of the recognition process. 

Points A & B represent attention/ relay/ decision areas of the cortex. These areas arise because of similarities detected between various patterns.  So the fact that an object’s rough shape is round, or that two events happen at the same time of day, or that one event always follows another, will trigger a recognition in one of these areas. This injects a pattern into the GTP that the rest of the areas recognise for what it represents. They recognise it because the whole connectome has grown from a small basic seed; the areas have developed together, using and recognising each other’s patterns, and their pattern output has influenced their connected neighbours’ development and vice versa… whilst experiencing the world. A Neural AGI seed...

https://www.youtube.com/watch?v=MLxO-YAd__s

So you have to consider an experience/ object/ thought broken down into its smallest facets, with each facet having a pattern that links all the relevant data for that facet. The pattern that represents a facet can be initiated from any part of the cortex, so hearing the word ‘apple’ starts a bunch of patterns that mean object, fruit, green, etc.  This is how we recognise similar situations, or a face in the fire, or pain in others; it drives empathy as well as many other human mental qualities.

The GTP is a constantly changing/ morphing pattern guided by the connectome that represents the AGI’s understanding of that moment in time, with all the memories/ experiences and knowledge that goes with it. As the connectome grows and the synapses are altered by the GTP to encode memories/ knowledge… one can’t exist without the other.

Simples… Make more sense?
Title: Re: The last invention.
Post by: ivan.moony on June 24, 2017, 05:13:34 pm
Is it possible to have three placeholders, say "left", "right" and "result"? Now  if we observe a world where "result" = "left" + "right", we would have it memorized like this:
Code: [Select]
left right result
-----------------
0    0     0
0    1     1
1    0     1
1    1     2
...

And if we later observe "left" as "1" and "right" as "1", what do we have to do to conclude that "result" is "2"? Naively, I can imagine a question: what is the result if left is 1, and right is 1. How would this question be implemented in neural networks?

[Edit]
Theoretically, is it possible to feed 1 as "result" and to get imagination of (("left" = 0, "right" = 1) or ("left" = 1, "right" = 0))? Moreover, is it possible to chain neural reactions, like you feed "cookies" and you get a chain ("flour" + "milk" + "chocolate") + "bake at 180°" + "serve on a plate"?
Title: Re: The last invention.
Post by: keghn on June 24, 2017, 06:39:24 pm
An ANN can do this with one gate. Left and right would be the inputs, then the bias for
the third input.
 The outputs would be different:
0 and 0 would give 0
0 and 1 would give 0.5
1 and 1 would give 1.0

 @Korrelan works with spiked neural networks? Which are closer to the logic
of brain neurons. A little different from the ones used in deep learning that are so popular. 

 There are different styles of spiked neural logic. Every scientist has their own opinion 
on how they work, based on the latest science, or an implementation to make their theories 
or projects work. 

 In my logic of spiked NNs a zero is static noise, white noise of a low voltage. 
 Also, noise is all values given at one time, maximum confusion.  No steady path to focus on
or follow.
 When a spiked NN is trained and generates a value it will go from wild random static to 
a solid, unwavering value somewhere greater than zero, then drop back down 
to random low-voltage static. 

Title: Re: The last invention.
Post by: korrelan on June 24, 2017, 07:03:08 pm
I’m not sure what you’re asking…

You’re describing a logic AND gate, where either a single ‘left’ or ‘right’ would result in 0, but both together would equal 1… but anyway…

This simple example can be handled by one single traditional simulated neuron.  A traditional simulated neuron can be viewed as an adaptable logic gate.

Traditional simulated neurons use weights as their inputs (any number), a threshold and a bias, you can forget the bias for this example.

The threshold is a trigger/ gate value, ie… if the sum of the weights is greater than the threshold then ‘FIRE’.

So in your example the threshold of a single neuron would be set to 2, either input would only add 1… both inputs would result in 2 (sum the inputs) and the neuron would fire a 1... not a 2.

Although usually the weights are less than 1, so your weights (inputs) would actually be 0.5 and 0.5 with a threshold of 1… firing a 1.

To code a simple neuron…

Input A
Input B
Thresh=1

If A + B >= Thresh then Output=1 else Output=0

Traditional neurons themselves are very simple… their strength lies in combining them into networks.
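Korrelan's pseudocode above drops straight into a runnable form; here's a minimal Python sketch (the language choice and function name are mine, purely for illustration):

```python
def neuron(inputs, weights, threshold):
    """Classic threshold neuron: fire (1) if the weighted sum of the
    inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# ivan.moony's two-input case with weights of 0.5 each and a
# threshold of 1: only both inputs together make the neuron fire.
for left in (0, 1):
    for right in (0, 1):
        print(left, right, '->', neuron([left, right], [0.5, 0.5], 1))
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
```

Swap the weights or threshold and the same single neuron becomes an OR gate or a NOT gate, which is the "adaptable logic gate" point.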

Edit: As Keghn said…

 :)
Title: Re: The last invention.
Post by: keghn on June 25, 2017, 12:00:43 am
   
 There are detector NNs and then generator NNs. 
 Generator NNs are used for memory.   
 One detector NN can detect many objects, like cookies, flour, cake, and bowl.   
 Many raw pixels go into a black box, then come out on the other side as output pixels. 
 Each output pixel is assigned to some object, like one pixel for cookie and another for spoon. 
 That output pixel only goes high when a spoon is in the input image and 
turns off when it is removed. This is a detector NN. It is not memory; it is more for action. 
 It detects a fly and then brushes it away. 
  So a real brain needs to record, and that is where the generator/ recorder/ synthesis NN   
comes into play. It is pretty much the same as recording into computer memory, RAM. 

 The activations of the detector NN are recorded, and also the image that caused them to
activate.

 There is an address pointer into memory so it can be rewound, fast-forwarded, jumped 
ahead, and so on. 

 Using two or more address pointers into memory running at the same time is the basis 
 for an internal 3D simulator.
 Viewing two images at the same time is confusing, so rewind, speed-up and slow-down controls
are used to keep things in order.

 A simple NN can be a pyramid. They are very flexible and can have many types of setups, 
like ten inputs and ten outputs.   

 To remember you need an echo: 
https://www.youtube.com/watch?v=v-w2rEsYuNo
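One toy reading of the recorder idea above: a tape of recorded frames and detector activations with a movable address pointer for rewind/ fast-forward. This is entirely my own sketch of keghn's description; every name in it is an assumption.

```python
class VideoMemory:
    """Toy tape of recorded frames with a movable address pointer,
    loosely following keghn's description -- not his actual design."""
    def __init__(self):
        self.tape = []      # recorded (frame, detector_activations) pairs
        self.pointer = 0    # address pointer into the tape

    def record(self, frame, activations):
        self.tape.append((frame, activations))

    def rewind(self, steps=1):
        self.pointer = max(0, self.pointer - steps)

    def fast_forward(self, steps=1):
        self.pointer = min(len(self.tape) - 1, self.pointer + steps)

    def playback(self):
        """Return whatever the pointer currently addresses."""
        return self.tape[self.pointer]

mem = VideoMemory()
mem.record('frame0', {'spoon': 0})
mem.record('frame1', {'spoon': 1})   # the spoon enters the image
mem.fast_forward()                   # pointer moves 0 -> 1
mem.rewind()                         # and back to 0
print(mem.playback())                # ('frame0', {'spoon': 0})
```

Two `VideoMemory` instances stepped side by side would be the "two address pointers at the same time" case.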



Title: Re: The last invention.
Post by: korrelan on June 25, 2017, 10:21:24 pm
Gave my bot a serious optics/ camera upgrade today. 

Got rid of the stereo SD web cams and replaced with a matched pair of true HD cams on 3D gimbals.

Upgraded the software etc, to handle the HD resolution.

(http://i.imgur.com/Yrm2V4R.jpg)

 :)

https://www.youtube.com/watch?v=SvPNR1zYAKg

:)
Title: Re: The last invention.
Post by: infurl on June 25, 2017, 10:46:47 pm
I'm working on something comparable at the moment too. I'm trying to build a drone that uses stereo vision. I've just acquired a pair of Runcam2 HD cameras but haven't decided on a gimbal mechanism yet. Can you provide more details about the gear that you are using?
Title: Re: The last invention.
Post by: korrelan on June 25, 2017, 11:49:11 pm
I used to have a really antiquated control system but have recently found a much easier, neater method.

The stereo cameras are out of HD 3D gimbal domes.  I’m using passive baluns to convert the composite video signals so I can pass them through a single cat5 cable, then baluns again to convert back at the DVR end.

The main gimbal is a HD PTZ camera minus the housing using the Pelco D protocols.

The 3D gimbals on the stereo cameras make it easy to adjust and calibrate the alignments for stereo overlap, etc.  They give a wide stereo view with a 30% overlap; the center HD cam has a X30 zoom lens, which provides long range and, in combination with the other cams as periphery, a high-def fovea. 

I’ve tried moving the X,Y axis of the stereo cameras with servos but they were a bug*er to keep aligned, so now I’ve opted for fixed cameras and just move the ‘head’.

I run the cameras through a 16 cam HD DVR with a built-in 485 PTZ controller, which came with a full SDK for Windows, so I can easily get live feeds and control the PTZ motors across my network/ web. 

Made it really easy to control the motors and integrate the camera feeds into my AGI.

(http://i.imgur.com/W0NLkRb.jpg)

I also acquired a simple security 485 PTZ joystick controller for testing purposes, you can see it in this image.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 26, 2017, 07:29:18 am
Holy moly wow korrelan, you got a lab? Haha ur so cool. Nice to see some real life/part of you. And good works.
Title: Re: The last invention.
Post by: korrelan on June 26, 2017, 09:55:26 am
I thought all aspiring mad scientists who want to rule the world had a lab.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 26, 2017, 10:39:57 am
Well..............

I have my bedroom...

There's a very very very evil computer in it.
Title: Re: The last invention.
Post by: Zero on June 26, 2017, 11:56:30 am
The Connectome

This represents the physical attributes and dimensions of the AGI’s brain.  It exists as a 3D vector model in virtual space.  It’s the physical wiring schema for its brain layout and comprises the lobes, neurons, synapses, axons, dendrites, etc. Along with the physical representation of the connectome there is a set of rules that simulate biological growth, neurogenesis, plasticity, etc.  It’s a virtual simulation/ 3D model of a physical object… our brain.

The connectome is where experiences and knowledge are stored; the information is encoded into the physical structure of the connectome… Just like the physical wiring in your house encodes which light comes on from which switch.

The Global Thought Pattern (GTP)

This is the activation pattern that is produced by simulated electro-chemical activity within the connectome.  When a virtual neuron fires, action potentials travel down virtual axons, simulated electro-chemical gates regulate synaptic junctions, and simulated dopamine and other compounds are released that regulate the whole system/ GTP. 

This would be the same as the electricity that runs through your house wiring.  The house wiring is just lengths of wire, etc and is useless without the power running through it.

The GTP is the ‘personality’…dare I say… ‘consciousness’ of the AI.

I'd like to understand better.

The connectome is a wiring schema that contains information (plus rules), I got it. But the GTP... Is it an engine that permanently performs calculations based on the connectome's content, and stores the results of those calculations back in the connectome? In other words, is the GTP the "game loop" of the system? God, I'd love to see some code. No, I'm not calling you "God". :)

EDIT:

but I'm also interested in seeing how neurons could be used for making decisions and other processes that humans have in disposition

For example, doing pathfinding (https://en.wikipedia.org/wiki/A*_search_algorithm) with neurons would be awesome.
Title: Re: The last invention.
Post by: korrelan on June 26, 2017, 12:54:32 pm
Erm… I think I get what you’re asking…

There is obviously a software engine that is running the whole 3D simulation, calculating the properties/ states of the neurons, synapses, axons, etc.  The rules are also governed by the simulation engine: how fast neurons can grow, how long axons can grow under their current circumstances. The simulation engine is a wetware middle ground between the biological connectome and the binary computer.  Human brains don’t run on binary, so inside the simulation I can make the neurons use any kind of logic I like; I’m not limited by binary logic inside the simulation.

The connectome is the 3D spatiotemporal model of the brain, I think you got this.

The GTP is the pattern of activation running through the neurons, axons, etc.  The GTP follows the physical structure of the connectome.  So a neuron might fire and send a signal through its axon to 100 dendrites, 20 of these neurons might then fire sending their action potentials to others.  This constant chain reaction of activation within the structure of the connectome is the GTP.

The GTP affects how the connectome develops; memories and experiences are laid down in the connectome by the GTP.

I suppose it does sound confusing lol.  The wetware simulation software is another level of abstraction.

The game analogy is a good one; a binary games computer runs a simulation of a game world.  Anything is possible within the game simulation: we can fly, etc.  In modern games the actions you take inside the game have a cause and effect within the game.  Let’s say I throw a grenade… the explosion will result in objects bouncing off each other; the game’s simulation engine is modelling the physics inside the game. Objects stay where they are once they have been blown up.

In my software the connectome is the game world, and the GTP would be the physics simulation affecting the game world… permanently changing it depending on what happens in the game.

My approach is just a way to circumnavigate the limitations of binary computers.

When it really starts getting complicated is when you consider that the connectome and GTP can then generate their own internal 3D world, as part of the prediction and imagination phenomena.

 :)
Title: Re: The last invention.
Post by: Zero on June 26, 2017, 01:30:47 pm
Very neat and understandable answer, thank you.

So, I guess there's a FIFO queue of signals waiting to be fired, and the GTP pops the one at the top of the queue, does the firing, then pops the next one, and so on. But this is only a simplified description of it, since the whole thing is distributed over several computers.

Also, I guess that signals are "saved" for the next frame, then applied all at once when the current frame is over. Then in the next frame, a new queue of signals waiting to be fired is constructed, and so on.

Something like
Code: [Select]
loop {
    while current queue isn't empty {
        pop signal from current queue
        fire it {
            determine what signals will be fired on next frame
            save them in the next frame queue
        }
    }
    current queue := next frame queue
}
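The two-queue scheme in the pseudocode above can be fleshed out into a runnable toy. This assumes a hypothetical connectome given as an adjacency dict plus per-neuron firing thresholds; it is my own single-machine sketch, nothing like korrelan's actual engine.

```python
from collections import deque

def step(connectome, thresholds, firing):
    """One 'frame': every signal in the current queue is delivered,
    inputs are accumulated, and over-threshold neurons fire next frame."""
    inbox = {}
    queue = deque(firing)
    while queue:                       # pop signals from the current queue
        neuron = queue.popleft()
        for target in connectome.get(neuron, []):
            inbox[target] = inbox.get(target, 0) + 1
    # neurons whose summed input reaches threshold form the next queue
    return [n for n, total in inbox.items() if total >= thresholds[n]]

# Tiny example: A and B both project to C; C needs 2 inputs to fire.
connectome = {'A': ['C'], 'B': ['C'], 'C': ['D']}
thresholds = {'C': 2, 'D': 1}
frame1 = step(connectome, thresholds, ['A', 'B'])   # ['C']
frame2 = step(connectome, thresholds, frame1)       # ['D']
```

Each call to `step` is one frame; the returned list is the "next frame queue" from Zero's pseudocode.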



Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 01:14:54 am
My baby can have 50 Super Duper Ultra HD cameras with infinite zoom, and a full human baby body with full motors. Physics Engines are incredibly great.

At some point korrelan, you're going to want to switch to simulation.
Title: Re: The last invention.
Post by: infurl on June 29, 2017, 01:31:56 am
So, not satisfied with merely simulating reality, now you're attempting to simulate your simulated reality in your head.

*If* you were capable of arithmetic *and* you were to actually perform some of that arithmetic with realistic inputs, you would discover that the simulation you are simulating in your head would require hardware that is not available to you. You really should kickstart a kickstarter campaign to fund it.

We can easily simulate your simulated simulation in our own heads and we know what is going to happen before you do.
Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 02:19:16 am
Korrelan has a Beowulf though.

My brain does have simulators.

You read my latest posts right? I have the most efficient pattern recognizer. I can now go from skipping to major skipping.
Title: Re: The last invention.
Post by: infurl on June 29, 2017, 02:40:05 am
No, you claim to have the most efficient *imaginary* pattern recognizer. You don't actually have anything exceptional, useful or of value. Even your imagination seems impoverished compared to that of someone who actually does things. Get some experience doing things, then and only then might you be worth listening to.

Beowulf clusters were all the rage before multi-core processors became commonplace. The systems (plural) that I'm programming on as we speak have around 40 cores each, terabytes of main memory and SSDs, and petabytes of spinning disks. I laugh at your Beowulf clusters. :P

Title: Re: The last invention.
Post by: korrelan on June 29, 2017, 09:40:39 am
Haha… my project thread has become quite controversial lol.

@Lock

Quote
At some point korrelan, you're going to want to switch to simulation.

Afraid not…

Quote
Korrelan has a Beowulf though.

As Infurl said… the topology is an old one but it still has its advantages, most modern ‘super’ computers are still built using a similar topology for good reasons. 

The term ‘Beowulf cluster’ is old (like me lol). The ‘Beowulf’ is just a collection of multi-core machines on a high speed network; anyone can put one together. Building the cluster is the easy part; writing the software to maximise the cluster for a specific purpose is where it gets interesting.

40/ 70 core machines have their uses, scientific modelling, graphics rendering etc, but the task has to be easily sub-divided into an efficient threading model.  Writing the code for this kind of threading model can be/ is a pain and very time consuming.  You have to have a firm grasp of what the software needs to achieve and invest the money and time perfecting the software to realise the benefits from a multi-threaded 40 core CPU. There are also other limitations that can cause bottlenecks depending on the system’s memory, SSD bandwidths, etc. and the type of model you are running.

I’m back engineering the human brain… you can see my problem?  I had no idea how the system would evolve.

One of my problems was scalability. I need to be able to eventually utilize 10 x 1000’s of cores.  Besides the cost lol, it will be quite a while before this type of machine becomes available… hence the cluster.  Using the parallel MPI I’ve written I can utilise as many machines as I can get my mitts on.  I recently tested the MPI on a room of 200 x 4 core i7’s at a local college… that’s 800 cores; ok, you have to consider network latency, etc but the model performed as expected.  So rather than invest my time writing a multi-threaded model, writing the code to ‘cluster’ separate machines made more sense.  I'm working on a similar MPI to utilize machines on the Internet, so anyone can donate processor time.

Cost is another major factor: 30k for a 40 core system as opposed to 2k for my 24 core cluster… I can easily and cheaply expand. I have another six four-core machines I can switch on when required… I have to get real time HD video feeds, stereo microphone audio streams, accelerometer and gyro streams into the model using ring buffers etc, then there’s the servo outputs and tactile/ joint torque feedbacks… one machine, no matter how many cores, would just grind to a halt.

Redundancy is also important; I can easily replace machines without slowing my research progress... time is precious.

The list goes on lol.

Both types of systems have their advantages… I still feel I made the right choice.

 :)

Edit: These are some of the machines I use…

(http://i.imgur.com/2aQaJgu.jpg)

And these have just been retired…

(http://i.imgur.com/qKoNpJW.jpg)

You can easily see the effects of long term exposure to a wetware model… hehe.

 ;D
Title: Re: The last invention.
Post by: keghn on June 29, 2017, 03:00:45 pm
  Beowulf was all the rage ten years ago: hooking a lot of computers together with ethernet cards. Clusters, they called them.
  This has given rise to how all supercomputers of today are built: a repetition of off-the-shelf parts
hooked together on a massive scale. But I believe the ethernet cable has been replaced; direct PCI
bus and routing are used today.
 Small scale Raspberry Pi 3 supercomputers are still connected by ethernet. USB is still very difficult to use to connect a lot of things together because big business is protecting it.


Clustering A Lot Of Raspberry Pi Zeros: 
http://hackaday.com/2016/09/18/clustering-a-lot-of-raspberry-pi-zeros/


Raspberry Pi 3 Super Computing Cluster Part 1 - Hardware List and Assembly: 
https://www.youtube.com/watch?v=KJKhRLKXr-Q


Title: Re: The last invention.
Post by: Art on June 29, 2017, 03:38:43 pm
Lock, Why not take a look at Bubble Memory...I heard it was going to be "The Thing" in memory.
(or was that back in the 70's?)...hmmm.

I have to agree, get what's in your head out where you can manipulate it and see the results of actual experiments.
Don't say you can do them in your head, because one can't do the severe amount of multitasking that such simulations would require.

We're not against you...but...we're not quite with you either...if you get the drift....

Title: Re: The last invention.
Post by: korrelan on June 29, 2017, 08:36:04 pm
Quote
My baby can have 50 Super Duper Ultra HD cameras with infinite zoom, and a full human baby body with full motors. Physics Engines are incredibly great.

Sorry lock but... no it can't.

You were previously talking about an AI existing inside a 3D simulation, assigning indexes/ labels to objects and people. Why would such an AI need HD cameras? If you are going to simulate a 3D world as rich as the one we occupy, which you need to do to provide for an AI to reach human levels of intellect, and then have your AI use HD cameras to view that simulated world... what's the point?  It might as well just exist in the 'real' world.

We all live in a simulated world lock.  You think the things you see and feel are real? Trust me they are not.  Without your sensory organs you would be in a very dark place... you exist in the 3D imaginary world your brain creates/ simulates for you.

If we are to create a true AGI then it at least needs to experience and understand the same reality/ level of abstraction that we think we occupy.

 :)


Title: Re: The last invention.
Post by: LOCKSUIT on June 29, 2017, 09:00:23 pm
Haha it can't have 50 HD cameras if it uses *secret* ! Correct! However it can if I wanted to go dat way, simply it's in meh computer, and still gives me loads of capability ex. full human baby body/motors/sensors etccccc (big list).

Yep I know it well that what I see is a raytrace of a 3D-ized replica world (same for the Imagination Generator). I've seen creatures walk behind doors in sleep paralysis. And if we seen our pure camera/etc images right from the cords it'd still be a freaking machine "viewing" "from its eyes". My simulated baby human AI can see right from the "cords".

Will you ever get a really real baby human to implant your code into? Even the ones being made for lots of $$$ (let alone custom built) are not close to what you can fine-tune in simulation, and clone.
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 02:34:36 pm
Rich Reality..........

..........................Goes with Rich body.

Literally..................................... $$$

3D Simulation:
https://www.youtube.com/watch?v=klLI5skuwj4&t=12s
Title: Re: The last invention.
Post by: korrelan on June 30, 2017, 04:07:02 pm
Looks like academia is slowly catching up with me. lol.

http://aidreams.co.uk/forum/index.php?topic=11602.msg45042#msg45042

As a general rule I’ve found with data representation the more dimensions you can encode the data over the better.  My simulations utilise hundreds of dimensions… that’s why parallel searches are so fast… you’re just searching the data from the perspective/ dimension of the search criteria.
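To illustrate one possible reading of that (purely my own toy, not korrelan's representation): if each item is encoded over many attribute dimensions, a search only has to examine the dimensions named in the criteria, so queries along different dimensions are independent of each other.

```python
# Items encoded over several attribute "dimensions"; a search only
# examines the dimensions named in the criteria. Toy illustration only;
# the items, dimensions and function name are all my own assumptions.
items = {
    'apple':  {'shape': 'round', 'colour': 'green',  'kind': 'fruit'},
    'ball':   {'shape': 'round', 'colour': 'red',    'kind': 'toy'},
    'banana': {'shape': 'long',  'colour': 'yellow', 'kind': 'fruit'},
}

def search(criteria):
    """Return the items matching every given dimension of the criteria."""
    return sorted(name for name, dims in items.items()
                  if all(dims.get(d) == v for d, v in criteria.items()))

# Searching only along the 'shape' dimension, then only along 'kind':
print(search({'shape': 'round'}))   # ['apple', 'ball']
print(search({'kind': 'fruit'}))    # ['apple', 'banana']
```

A real system would use hundreds of dimensions and fuzzy matches rather than exact ones, but the idea of querying from the perspective of the criteria is the same.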

http://www.iflscience.com/brain/researchers-reveal-the-multidimensional-universe-of-the-brain/

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 05:15:37 pm
Hhuuuuoooggghhhhhhhhhhhhh !

Do you realize what you have said!?

That's a very great recognizer!.................do you use it!?.....................put X object in real/3D simulation water/weigher/etc and get numbers for classification and recognition!

Only problem is recognizing parts of the object.................NO PROBLEM --- CUT UP OBJECT MEASURE IT AND BIND BACK TOGETHER IN SIMULATION.

MEAWHAHAHA!

this is one of those "really" moments... has to sink in..........

so much you can do with particles.........particles do a lot don't they :) - lots of techniques they do in the daytime

I have all the information on this and mine hierarchied, if you want it, ask.

Talking about "sand castles" (in the article), I was having a problem with my PR recognizing unseen objects, like sand or a blanket changing shape. I needed to generate a small number on the fly and have it search other small numbers. It's also much easier than tagging all angles/areas/distances/etc on each 3D simulated object.
Title: Re: The last invention.
Post by: LOCKSUIT on June 30, 2017, 11:43:58 pm
Can someone knowledgeable/experienced in pattern recognizers tell me if, if I ran a CNN on 1 skylake i7 OR even a 1060 GPU, having at most 2,000 convolved images stored, would it really be slow? HOW slow? I thought CNNs were supposed to be fast and have low storage by shrinking images by convolution and having the input not only confront a bunch of shrunken jeepers but also easily direct to the correct image stored for selection by the convolution technique, sorta like a kill-3-birds-stone all in one. The hell. It's supposed to be hell fast on 1 i7 with 0 parallelism. Instead of matching full size images to full size images.
Title: Re: The last invention.
Post by: korrelan on July 01, 2017, 10:26:09 am
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

 :)
Title: Re: The last invention.
Post by: Zero on July 01, 2017, 01:13:33 pm
Happy 200th  :D
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 02:23:23 pm
That doesn't answer my question though...
Title: Re: The last invention.
Post by: korrelan on July 01, 2017, 02:37:00 pm
That link was to a simple CNN that runs just in your browser.  So all the processing is local... on your machine.

The question is impossible to answer in depth.  There are too many variables and unknowns.  It depends on the programmer, the language used, the algorithms used, the images, the types of convolution filter, etc.

If the CNN has to run quickly on a single machine then the programmer/ designer would have to be skilled enough to... find a way.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 02:59:52 pm
Korrelan, all I need to know is, if I want to run a CNN for my AI, that saves 4 images per second, and searches for 4 matches per second, that after 30 minutes has *400 images stored...will it be real-time or 22 times faster than real-time?

Otherwise I'm looking at having my AI stare at simulated 3D objects, cut out sections, measure their weight/volume/height/etc in weighers/water buckets/etc, search for numbers, then carry on.
Title: Re: The last invention.
Post by: keghn on July 01, 2017, 03:07:44 pm
YOLO: Real-Time Object Detection. A cnn that does 30 frames per second with GPU's : 

https://pjreddie.com/darknet/yolo/
Title: Re: The last invention.
Post by: LOCKSUIT on July 01, 2017, 08:09:57 pm
Ah, Darknet, I love these dark hidden places. Its like black hat underworlds. The deeper you go the harder it gets.

Well when the time comes, I have 3 ways to go now. All 3 may be sufficient, but I may go with a easy way to be coded if I can't get my workers to implant a CNN (it could be easier (and cheaper) to do that though).
Title: Re: The last invention.
Post by: keghn on July 01, 2017, 11:59:54 pm
 Yolo is one of the fastest CNN detectors. Like all NN detectors it does not really tell you where the object is
in the image. 
  But with some hacking you can get it. They do not tell which direction objects are moving, or if they are
upside down or right side up, or if one has rotated left-hand compared to the last image.
 In an AGI, a CNN will be needed for reviewing video memory. It will need to work 1000 times faster when
scanning through video memory. And if the AGI is viewing 10 files at the same time it will need a dedicated CNN
for each video file being used. The AGI will use an internal image editor and will need a CNN to identify targets to
cut out and paste into a new video file. These new videos are tried out by trial and error; it can tell if
the new imaginary build will work or fail. If it can tell it will fail, it will be outright deleted. 
 If it has a real hard time telling whether it will work, it will become a dream. If it is of no real interest, it will be forgotten.
 If the build works it will hit you like a great idea from out of the blue: eureka!
 
Title: Re: The last invention.
Post by: LOCKSUIT on July 02, 2017, 12:24:31 am
I understand what you're saying. The image editor must be 3D though. You make real-world objects 3D in the mind, then it plays around with the 3D objects. If the configuration looks good, it gets past your attention system as a eureka; else it becomes a dream; else it is forgotten. The objects it plays with are things you sense, e.g. "how can I fix this", or "fish on the moon". Once it looks right, you will eureka the way it looks (the fix, or the fish on the moon), and then know the actions to make the fix, or simply see a fish on the moon (which is how you see stories).
Title: Re: The last invention.
Post by: keghn on July 02, 2017, 03:29:05 am
LSD effects are not going to happen at the start with an empty soul. There is no data to work with.
A life form must live a life of real experience before it can be hacked. Big data.

Then start simple, with completely randomly selected instructions and tamed values: a simple program to be manipulated later on.
Start simple and try things out to see what works. When something works it is remembered. If it crashes it is deleted and a back-step takes place.

A script language, such as bash, plus a script that can run and control a photo editor. Maybe something like the command-line video editor MLT and melt:

https://en.wikipedia.org/wiki/Media_Lovin%27_Toolkit

Blender might also have scripting commands?

A script program can be started automatically.
Then have a program evolve it from a dead-simple starter program by adding or editing in commands and values.
If the program works it is saved; then clones are made, modified, and tested.
The script programs that fail are saved but labelled as duds, so it learns from its mistakes.
This is for the correct use of script instructions.
Then comes a second test of using it on real video.

Video Editing in the Shell - MLT melt - FFMPEG - Linux 1:
https://www.youtube.com/watch?v=r4PWv0emS7A

Title: Re: The last invention.
Post by: elpidiovaldez5 on July 10, 2017, 03:29:01 am
Hi again !

I have just been looking at your work after replying to one of your comments.  It is awesome stuff.  I hope to keep in touch and see how it progresses.

Could I ask what software you use as the audio back-end for your phoneme recognition?  I would like to do a project on audio, but have never got to grips with capturing the audio and doing the spectral analysis.

Title: Re: The last invention.
Post by: korrelan on July 10, 2017, 11:57:23 pm
The software is written from scratch using the Win API.

The audio stream is sampled using the waveInAddBuffer function in the Winmm.dll library.  I then pass each sample block through an FFT to extract the frequencies and several custom filters, then render the spectrogram.  The samples also run through a custom loop buffer that records in ‘real’ time but only sends frames of data through a TCP pipe to the main application when called.  The callback and loop buffer keep the two applications synchronised.
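Korrelan's pipeline is Win API based, but the sample-buffer → FFT → spectrogram stage can be sketched generically in Python with NumPy. This is an illustration only, not korrelan's code; the frame length, hop size, and 16 kHz rate are arbitrary choices:

```python
import numpy as np

def spectrogram(samples, frame_len=512, hop=256):
    """Slide a Hann window along the signal, FFT each frame, and
    return per-frame magnitude spectra (rows = frames, cols = bins)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # frame_len//2 + 1 bins
    return np.array(frames)

# One second of a 440 Hz test tone sampled at 16 kHz; its energy
# should land near bin 440 * frame_len / sample_rate ~= 14.
t = np.arange(16000) / 16000.0
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Rendering `spec` row by row (with a log scale on the magnitudes) gives the familiar scrolling spectrogram view.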

 :)
Title: Re: The last invention.
Post by: korrelan on August 01, 2017, 10:32:53 am
"Woe to you, oh Earth and sea, for the Devil sends the Beast with wrath
Because he knows the time is short
Let him who hath understanding reckon the number of the Beast
For it is a human number, its number is six hundred and sixty six"

Crank up the volume...

https://www.youtube.com/watch?v=_3Vynew5mrw

 :D

Edit... I'd reached 666 posts lol
Title: Re: The last invention.
Post by: LOCKSUIT on August 01, 2017, 11:29:06 am
I don't think I want to know what korrelan has been up to
Title: Re: The last invention.
Post by: selimchehimi on August 01, 2017, 11:38:58 am
Wooow that seems really interesting, I love all the work that you put in to solve AI. I will definitely take the time to check all your videos  ;)
Title: Re: The last invention.
Post by: korrelan on August 26, 2017, 02:36:07 pm
https://www.youtube.com/watch?v=BDIvV9w7NtU

Hi all… I’ve been taking a rest from the development of my AGI for a few weeks.

For anyone interested… there is a very good reason why I do this periodically… if you spend too much time focusing on one topic your brain begins to work against you.  One of our main learning mechanisms is repetition; your brain likes to translate common thought processes/ experiences into automatic sub-conscious functions (like riding a bike or driving to work).  Although this is great for most mental operations, it's obviously a major disadvantage when mental flexibility is required in the problem space you are trying to solve.  If you concentrate too long on a topic your ideas and mental flexibility will… dry up; you will literally get stuck in a rut (writer's block)... an ingrained pattern of thinking, as the commonly used synaptic pathways that represent your… I'll leave that for another post.

Ok back to the video. 

Up to this point I’ve been using random/ trig spatial distribution to set the initial vectors of the connectome's neurons in the various cortical layers.  This has been adequate so far, but when the connectome expands through simulated growth and neurogenesis inserts new neurons in high-activity areas to enhance the ‘thought’ facet resolution, I’ve been getting clashes and general spatial problems because of the initial uneven distribution of neurons on a layer.

This has been solved by using the Fibonacci sequence to set the initial, equally spaced neuron vectors within the connectome's spherical volume.  At the beginning of the video you can see the equal spatial distribution on the connectome's outer layers mirroring the familiar Fibonacci pattern.
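For anyone wanting to try this, the golden-angle (Fibonacci) lattice is the standard way to get near-equal point spacing on a sphere. A minimal sketch, assuming the common formulation (korrelan's exact formula isn't given):

```python
import math

def fibonacci_sphere(n):
    """Place n points almost evenly on the unit sphere using the
    golden-angle (Fibonacci) lattice."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
    points = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n        # z descends evenly through (-1, 1)
        radius = math.sqrt(1.0 - z * z)      # ring radius at height z
        theta = golden_angle * i             # spiral around the polar axis
        points.append((radius * math.cos(theta),
                       radius * math.sin(theta),
                       z))
    return points

pts = fibonacci_sphere(1000)
```

Scaling the unit vectors by per-layer radii would give the kind of evenly populated spherical shells described above.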

A small update but this solves loads of problems.

 :)
Title: Re: The last invention.
Post by: infurl on August 27, 2017, 12:15:35 am
For anyone interested… there is a very good reason why I do this periodically… if you spend too much time focusing on one topic your brain begins to work against you.  One of our main learning mechanisms is repetition; your brain likes to translate common thought processes/ experiences into automatic sub-conscious functions (like riding a bike or driving to work).  Although this is great for most mental operations, it's obviously a major disadvantage when mental flexibility is required in the problem space you are trying to solve.  If you concentrate too long on a topic your ideas and mental flexibility will… dry up; you will literally get stuck in a rut (writer's block)... an ingrained pattern of thinking, as the commonly used synaptic pathways that represent your… I'll leave that for another post.

You're on the right track but you're overthinking overthinking. It's precisely your sub-conscious functions and automatic thought processes that you should be trying to exploit to get the work done. Most people have had the experience of going to sleep with a problem and waking up with a solution. While I do change major tasks periodically, typically every four to six weeks, it's because of mental fatigue rather than getting into a rut. When it comes to problem-solving you can't beat good old-fashioned daydreaming (interspersed with intense periods of good old-fashioned hard work of course). But in the end, it's whatever process works for you.
Title: Re: The last invention.
Post by: keghn on August 27, 2017, 01:28:25 am
Good work as always @Korrelan. Burn-out can be a good thing and a bad thing: it keeps a person from getting into a never-ending behaviour loop. The way I see it, the neurons store pattern loops, like RAM storing a behaviour routine.
But under the rules of the life of a cell, a neuron will not let itself be over-used, and it will complain if damaged. It must have a purpose or be deleted or killed off, and it needs time to rest. Like the rules of a trade union.
But if the behaviour routine is in a silicon chip it is dead and lifeless.
So in my AGI I have a program that simulates it having a fake life, because a machine needs to have feelings.

I think it is important to note that the neuron bundles that hold a behaviour pattern have no idea what they have stored within them. They only know that they have purpose and belong to a bigger whole.

So I use different ways of getting the same thing done, using different neurons to do the same job.

Title: Re: The last invention.
Post by: LOCKSUIT on August 27, 2017, 04:12:40 am
You may have to rest, but I don't, and it pays off: I work every day... all day! I am happy here.
Title: Re: The last invention.
Post by: korrelan on August 27, 2017, 12:38:57 pm
@ Infurl

Quote
It's precisely your sub-conscious functions and automatic thought processes that you should be trying to exploit to get the work done.

For common tasks that have to be repeated I would agree.  Unfortunately, reverse engineering the human nervous system doesn't seem to follow this curve lol; when covering new ground I have to keep my thoughts as flexible as possible, especially at my age lol.  But as you said, we each have to do what works best for our own mental machinery.

Quote
Most people have had the experience of going to sleep with a problem and waking up with a solution.

I’ve done some research on this.  It appears to be a commonality between a problem's facets and the facets belonging to a known solution to a similar problem's pattern.  It's a pattern alignment in the GTP, where the problem's facets trigger the same/ similar facets in the known solution, leading to a link or similarity at that stage in the solution.

Quote
When it comes to problem-solving you can't beat good old-fashioned daydreaming

I agree; daydreaming/ dreaming leverages the above system to align/ match facets and solve problems. Taking a rest from the problem and doing something completely unrelated can often lead to a ‘Eureka’ moment… especially if the new task shares many similar experience/ knowledge/ skills facets with the initial problem.

@Keghn

Quote
Burn out is a good thing and can be a bad thing.

Hmmm… I'm pretty sure ‘burn out’ in any kind of machine is a bad thing lol.  It sounds like you are working along bio-chemical lines of research too. Cool.

@Lock

Quote
You may have to rest, but for me I don't,

We are all different and there are many variables including life’s general pressures, the problem space and the level of concentration required etc.  I know my limitations from experience and try to optimize my mental abilities; you are thirty odd years younger than me and still enjoy the natural mental flexibility of youth… at the expense of focus/ wisdom of course lol.  I’m pleased you are happy though.

 :)
Title: Re: The last invention.
Post by: keghn on August 27, 2017, 07:05:51 pm
@Korrelan. There are many types of burn-out. I do not really mean killer burn-out from long hours of work and having a boss like Trump. A system, or a small sub-part, records its time at a job; when that reaches a high value it looks for another job with a low burn-out value. The burn-out value counts down over time, and the count-down can be fast.
Title: Re: The last invention.
Post by: keghn on August 29, 2017, 02:00:44 am
In the human body a muscle can get tired, and so can the neurons that hold the program to move the muscle. The leg muscles are big and take a lot of stress, so the brain neurons that hold any leg movement must be copied a couple of times: while one neuron program is running, the others are resting up. The same goes for any action that is repetitive, mentally or physically.
Title: Re: The last invention.
Post by: LOCKSUIT on August 29, 2017, 02:42:43 am
So you're saying if my leg gets tired I may choose to use another leg (no pun intended). While on the other hand (no pun intended), my thought process may get tired, BUT, as you said, that doesn't mean I'll switch the thought process topic, rather, my brain will sub-consciously (no pun intended) transfer it over to awake neurons with more energy and continue with "this leg" (no pun intended), continuing thinking about how I can use my leg to open a jar of nuts, 5, hours, straight, then finally give up and choose to think about those peanuts on the table.
Title: Re: The last invention.
Post by: Art on August 29, 2017, 03:23:04 am
@ Lock.

I think he means that eventually everyone simply needs to take a rest.

See, you're already delirious! Take five... minutes, hours, days... whatever it takes for you to "recharge / regroup". O0
Title: Re: The last invention.
Post by: keghn on August 29, 2017, 03:35:52 am
Well, basically yes. But only if the reward is worth training other neurons. Cells in the body are living things, not machines. They need to rest, feed, and repair. They need a job that has purpose. Just like in any swarm or group the dead-beats are removed, or they fight back and go cancerous, under the rules of swarm logic and swarm intelligence. An AGI will not have this, but in order to make human-like intelligence in a machine we will have to fake it. And that will be good enough.
Title: Re: The last invention.
Post by: ranch vermin on October 05, 2017, 03:16:37 pm
Hello, crazy ranch is back to be with the motley crue here.  Your network looks very neat and has lots of features.  It looks like something to be proud of.

Your robo eye rig was grippy! (towards the beginning of the thread.)

A more sentient network is going to need more than 340k synapses; I read that you're looking into maybe doing some GPU code.
A GPU would give you a lot more synapses and neurons at once, for more resolution, or more work at low res.  I use OpenCL, but if you don't know DirectX I think you can maybe use something that does the GPU conversion for you (offloads your iteration loops); I thought I saw one in Python, but I've been out of the game for a while.

There's newer stuff these days from when I was doing it, but my GTX 980 is better than the $300 cards with a bigger "number" these days, with mine having double the cores!  It's all cheatery and phony in the computing industry.

Title: Re: The last invention.
Post by: LOCKSUIT on October 06, 2017, 12:55:55 pm
You know ASI? ASI can give you something very powerful in a small package. ASI could run a powerful AI on just my PC.
Title: Re: The last invention.
Post by: raegoen on October 06, 2017, 09:01:25 pm
Hi Korrelan, your work is very fascinating, I have read your blog and it seems you've made incredible strides towards AGI! Are you planning on releasing any part of your engine/emulator/NN for others to build upon? I know you are writing a book, are you planning on putting a lot more detail and mathematical explanations of your model in there? Thanks, looking forward to future updates!
Title: Re: The last invention.
Post by: korrelan on October 07, 2017, 01:16:48 am
@Ranch

Good to see you back. 

Yeah! I will be looking into GPUs eventually but ATM I’m messing around with machine consciousness.  I think I have a GTX 980 somewhere… I’ll check.  Currently I’m experimenting to see how much ‘intelligence’ I can squeeze out of a limited number of neurons/ synapses.

I’ll be posting some vids of machines screaming in pain and confusion soon lol.

@Lock

ASI = Artificial Super Intelligence? I'm not sure what you are saying lol.

@ Raegoen

Glad you like my work so far.  I’m writing two books; one on human neural function/ theory and a second on actually creating/ coding machine consciousness/ intelligence.  Both have around 200 odd pages ATM… I really need to pare them down lol.

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 07, 2017, 09:10:59 am
I suppose the fewer neurons you need the better.
I'm working on something a little secretive in the performance area of slower computers, and I've nearly got it going.
It's very exciting. :)
Title: Re: The last invention.
Post by: Art on October 10, 2017, 01:02:57 pm

Glad you like my work so far.  I’m writing two books; one on human neural function/ theory and a second on actually creating/ coding machine consciousness/ intelligence.  Both have around 200 odd pages ATM… I really need to pare them down lol.

 :)

Wouldn't 200 pages actually be EVEN?  ;D JK....

You're going to offer them as ebooks for those nice, rainy day reads right?
Good luck with them!
Title: Re: The last invention.
Post by: LOCKSUIT on October 12, 2017, 08:35:14 am
E-books :)
I love free books.

So korrelan tell us what you foresee as the steps you will take to reach an AGI. What level will come forth first? Then, what? And then what? Are you first cracking Language??
Title: Re: The last invention.
Post by: korrelan on October 15, 2017, 11:56:02 pm
I've been messing with the vision modules on my AGI.  This gave me an idea...

Anyone notice anything weird about this picture?

(https://i.imgur.com/NMYkJOK.jpg)

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 16, 2017, 12:27:58 am
what is it, no dif?

<edit>  looks like good old neural net sensor grain based rotatah to me! </edit>
Title: Re: The last invention.
Post by: korrelan on October 16, 2017, 01:03:49 am
Yeah! The image is an orientation/ depth map.  I noticed my AGI was mapping some colours to distances, which seemed like a strange thing to do. This is a representation of how the AGI is 'experiencing' colours... and it does look to me as though it has a kind of 3D depth. I wondered if anyone else could see the effect or if it was just my eyes playing tricks on me.

It might be my colour blindness affecting me; I see the blues as deeper/ farther away than the greens.

 :)
Title: Re: The last invention.
Post by: ranch vermin on October 16, 2017, 05:53:37 pm
oh!  I thought it was an orientation map.  Well it is... but I thought it was an orientation map to rotate the network to the right spin when doing ID'ing.  So it's experiencing colours.
Title: Re: The last invention.
Post by: WriterOfMinds on October 16, 2017, 06:07:32 pm
I wonder if the appearance of depth for human viewers is a psychological artifact of looking at terrestrial maps. Maybe the blue areas come across as "water" and the green ones are "hills."
Title: Re: The last invention.
Post by: korrelan on October 16, 2017, 09:37:06 pm
@ Ranch

You are correct.  The same/ similar topology is generated as a rotational/ orientation map.  Because my AGI’s visual cortex is based on the mammalian ocular schema colour is mapped in a very similar way.

@WOM

You may be correct about the deep water, high green land masses.

I've asked several people if they experience the same illusion of depth when they view the image and the general consensus is… no lol.  I actually see a 10 mm-ish difference in the depth field between the green and blue.  For me it’s like a shallow hologram… this is obviously a phenomenon of my colour blindness and I've never before experienced this type of optical illusion.  It’s just so pronounced… it's deff weird.

It just struck me that the AGI was mapping colours to distance and when I viewed the back engineered map, I too saw depth lol… I think I'm spending too much time in front of my monitors lol.

 :)

Edit: In hindsight, the mapping of colour to distance kind of makes sense.  The machine has been observing many different images including ‘real life’.  So perhaps blue skies and water/ oceans = distant, and green grass etc in the foreground = closer (as WOM suggested).  It could be a side effect of ‘seeing’ an unbalanced/ biased set of visual experiences at this point in its development.

 :)
Title: Re: The last invention.
Post by: Art on October 16, 2017, 10:09:23 pm
I too saw the green as hills in a topography and the blues as depth or water. I'm color deficient as well.
Not color blind, because I see colors, but there is confusion between shades of colors, like olive green and tans or browns.
I have a friend who is Green / Red color "blind". He drives, but knows that the green light is always on the bottom.

Anyhow...that's my read.
Title: Re: The last invention.
Post by: korrelan on October 17, 2017, 02:32:08 pm
Solved it... it's a form of chromostereopsis... who knew lol.

https://en.wikipedia.org/wiki/Chromostereopsis

 :)
Title: Re: The last invention.
Post by: korrelan on November 10, 2017, 11:58:50 am
https://www.youtube.com/watch?v=5gZS56JUFk0
Title: Re: The last invention.
Post by: ranch vermin on November 10, 2017, 12:14:00 pm
is that just a filter, or does it take learning to better the result?
Title: Re: The last invention.
Post by: korrelan on November 14, 2017, 11:43:54 pm
I just read your question and thought… what’s he on about? The video is on the previous page and I’d not noticed.  I was rushing to get ready for the first call of my working day; that’s not the video I thought I had posted lol. That was for Yot’s thread on outlines.

I’ll explain it here anyway… It uses two kinds of machine learning. 

The first is an adaptive convolutional filter I designed that learns the best parameters to apply to each section/ type of image. It automatically adjusts for brightness, saturation, clarity/ sharpness, resolution, etc.  Its job is to learn/ adapt the best method for extracting a template that the pixel bot (PB) can follow.

The second machine learning technique is the pixel bot.  This little guy is a simple bot with eight pixel sensors around its perimeter.  I have taught/ trained it to follow and trace outlines. 

So… the pixel bot learns to follow outlines and the adaptive convolution filter learns to extract the most information from the image.  If the PB manages to create a full outline then it notifies the filter that this is a good convolution combination for this type of image… and a box is drawn around the shape on the original image.

Notice that the outline is stabilized in the left window.  As soon as the mouse pointer enters a shape the PB converts the object into a set of vector coordinates; these are then easily centred and stabilized.

That’s how the system manages to easily extract numbers from captures etc… it learns to extract letters/ numbers from the confusion.

You can see the PB learning process here…

https://www.youtube.com/watch?v=WH9Bc4aF6Nc

I was taking a break from my AGI and thinking about Yot’s outline routines.

 :)
Title: Re: The last invention.
Post by: ranch vermin on November 15, 2017, 07:53:42 am
I would've thought it would have involved a lot of noise before it's learnt; how come it's either off or an exact outline?

If I had a random convolution filter, it would be all kinds of blurs and differences.
Title: Re: The last invention.
Post by: ivan.moony on November 15, 2017, 07:59:02 am
Korrelan, very interesting. I think that may be how humans detect edges (they move their eye focus around an object). It seems that neural nets can do more than I thought.
Title: Re: The last invention.
Post by: korrelan on November 15, 2017, 10:28:35 am
Quote
I would've thought it would have involved a lot of noise before it's learnt; how come it's either off or an exact outline?

I have manually trained the pixel bot (PB) to follow outlines; because I've used my judgement to trace the lines as best I can, the PB has learnt to do the same, and this includes jagged outlines. It will run along a fairly jagged edge and produce a straight line quite well, though it could do with more training as it sometimes gets confused by certain patterns of pixels lol.

Quote
If I had a random convolution filter, it would be all kinds of blurs and differences.

The convolution filter is indeed based on the human eye and is localised around the mouse pointer.  It affects small regions individually not the whole image.  It does not use a random filter; again I have manually trained the filter to best extract the required detail.

Quote
Korrelan, very interesting. I think that may be how humans detect edges (they move their eye focus around an object)

This stems from my AGI research, I use a model of the mammalian visual cortex which learns to detect lines and gradients automatically; this is just a very simplified version used just for outlines.

Quote
It seems that neural nets can do more than I thought.

I tried to keep the project simple so I’ve not used any kind of neural net, it’s all good old fashioned look up lists.  So the PB for example just stores a list of perimeter sensor readings along with the direction I’ve told it to move.  The PB can then easily and quickly find the closest match in the list and move in the relevant direction.
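The lookup-list idea above can be sketched in a few lines. This is a toy illustration under assumptions korrelan doesn't state: 8 binary perimeter sensors, Hamming distance as the "closest match", and invented example readings/directions:

```python
def hamming(a, b):
    """Distance between two 8-sensor readings (tuples of 0/1)."""
    return sum(x != y for x, y in zip(a, b))

class PixelBot:
    """Toy lookup-list follower: no neural net, just taught examples."""
    def __init__(self):
        self.examples = []  # list of (sensor_reading, taught_direction)

    def teach(self, reading, direction):
        self.examples.append((tuple(reading), direction))

    def move(self, reading):
        # Pick the direction whose taught reading is closest to what we see.
        _, direction = min(self.examples,
                           key=lambda ex: hamming(ex[0], tuple(reading)))
        return direction

bot = PixelBot()
bot.teach([1, 1, 0, 0, 0, 0, 0, 0], "right")  # edge above -> move right
bot.teach([0, 0, 0, 1, 1, 0, 0, 0], "down")   # edge to the right -> move down
```

A noisy reading such as `[1, 1, 0, 0, 0, 0, 0, 1]` still resolves to `"right"`, which is the "closest match in the list" behaviour described above.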

The system is basically just reproducing my skills at following an outline.

I still disagree that using outlines is a good method for recognising objects, both scale and rotational invariance can be accounted for but occlusion cannot.  Feature detection is still by far the best method.

 :)
Title: Re: The last invention.
Post by: korrelan on November 15, 2017, 12:14:18 pm
This early low-resolution test example shows a three-layer neural net based on the mammalian visual cortex.  The four angled coloured lines in the centre of the windows show the basic colours and orientations the system is going to learn.  The left window shows the current learned orientation map; the right window is the image it's learning from.

https://www.youtube.com/watch?v=MuMXGzPZ2Nk

The first half of the vid shows the system being subjected to just diagonal lines; the orientation map automatically learns to recognise them.  You can see the self-organising distribution in the neural layer using only the two diagonal colours.

At 0:17 you can see the output from the LGN running through the map and only the diagonal lines are detected.

At 0:30 a picture of faces is loaded and again because the system was trained on just diagonal lines it only detects diagonal lines in the face image.

At 0:50 I start re-training the system on the faces.  You can see the neural plasticity of my system at work as the orientation map slowly evolves to incorporate the horizontal and vertical lines it’s experiencing in the face image.

At 1:23 I stop the training, the self organising nature of the system has built a map that best fits/ represents the input data.  The black dots on the left window represent the locations of output layer pyramid cells; each cell is surrounded by a receptive field tuned to recognise a particular feature in the image.

The rest of the vid just shows how the system is now interpreting the face image with all four orientations learned.  Obviously my current system uses hundreds of orientations and gradient combinations to detect features in the visual LGN output. 

If the system was only subjected to diagonal lines again it would very slowly forget how to recognise the vert/ horiz lines.  This plasticity effect falls off as the neurons mature so the system eventually reaches a balanced representation of the experienced input, with a patch of neurons able to detect every facet of the incoming data.

The advantage of this approach is that each patch of neurons in the left window will only fire when a particular pattern of input is supplied.  The neuron patches or image facets never move and so can be easily linked, searched, etc to figure out what the system is looking at.

This gives an idea of how my whole AGI system functions; it is very closely modelled on the mammalian cortex and is basically able to adapt and learn anything its sensors encounter.  It automatically extracts and organises relevant information from the data streams.
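The self-organising plasticity described above can be caricatured with plain winner-take-all competitive learning: units with random receptive fields compete for oriented line patches, and each winner is pulled toward what it saw. This is a generic sketch, not korrelan's cortex model; the patch size, unit count, and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def oriented_patch(angle, size=9):
    """A size x size patch containing a line at the given angle,
    flattened and normalised to unit length."""
    ys, xs = np.mgrid[:size, :size] - size // 2
    mask = np.abs(xs * np.sin(angle) - ys * np.cos(angle)) < 1.0
    patch = mask.astype(float).ravel()
    return patch / np.linalg.norm(patch)

# 16 units start with random (unit-norm) receptive fields.
weights = rng.random((16, 81))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

for _ in range(2000):
    x = oriented_patch(rng.uniform(0, np.pi))       # a random orientation
    winner = int(np.argmax(weights @ x))            # winner-take-all
    weights[winner] += 0.1 * (x - weights[winner])  # pull winner toward input
    weights[winner] /= np.linalg.norm(weights[winner])
```

After training, feeding in a line patch produces a strong response from the unit(s) tuned to that orientation; restricting the training angles, as in the video, would leave only those orientations represented.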

It takes a human foetus nine months to generate its V1 orientation/ gradient maps; this is my system producing an equivalent map in a matter of seconds from visual experience… it learns extremely quickly.

https://www.youtube.com/watch?v=4BClPm7bqZs

And if you were to zoom into the neuron sheets/ maps you would see the individual neurons and synapse.

https://www.youtube.com/watch?v=YVO0f76CI2s

 :)
Title: Re: The last invention.
Post by: keghn on November 15, 2017, 02:54:38 pm
Unsupervised truth.
I have had a thought about using GANs to do transmission and self-learning at the same time.
There is a problem in neuroscience over what the meaning of "same" or "equal" is at the level of a neuron.
A GAN is made up of two NNs: the detector, and the regenerator NN that regenerates what is being detected.
If you had alternating rows of detector NNs (DeNN) and regenerator NNs (ReNN), the information could be passed along in a daisy-chain fashion, like so:
DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN, DeNN, ReNN.............................

These NNs only detect and regenerate the colour of one pixel, so it will not be too slow.

The first DeNN detects the colour from the real world. Then the next ReNN generates it, and the next DeNN learns to detect it.
This keeps going on and on, so the brain has many true references for this colour. Then, when doing edge detection or blob detection, the colours from two pixels can be compared to see if they are the same or different.


 
Title: Re: The last invention.
Post by: ranch vermin on November 15, 2017, 03:24:52 pm
woah, now I understand; thanks for that.

Looks like a cool alternative to normal edge detection because you're getting the angle as well.  I guess the filter could too, but it's cool that you've adapted machine memory to do it.  They are general purpose indeed.

To keghn: yes, if you had one neural network generating images, and the other, video-trained, network telling it yes or no, it would start to generate the diagonal lines.  But I think that gets more interesting if the concepts are more general and abstract; then the generation might be less restricted and generate some more interesting images.
Title: Re: The last invention.
Post by: Zero on December 16, 2017, 01:05:56 pm
You were right, IMO. Things relate because they have compatible frequencies. Complicated.
Title: Re: The last invention.
Post by: korrelan on December 16, 2017, 02:01:31 pm
Yeah! Drunk me left me five pages of notes and a whiteboard of scribbles lol. 

Drunk me also altered the base code to my AGI system… never good. 

Sober me is trying to figure out what I was thinking…

EG:  To find a unique neural index/ pattern to a complex GTP pattern in high dimensional space, just keep the harmonics…

I removed drunk me’s post until I completely figure it out.

 :)
Title: Re: The last invention.
Post by: Zero on December 16, 2017, 02:43:09 pm
Yeah, next time remember to rum-fork it before anything else  ;)

What is GTP?
Title: Re: The last invention.
Post by: korrelan on December 17, 2017, 11:55:45 am
Quote
What is GTP?

The global thought pattern (GTP) is a term I give to the whole brain-wide pattern of activity within my AGI’s connectome.

Memories, logic and intelligence are expressed in my AGI by the frequency differentials between separate groups of neurons.  Each area of the 3D connectome is a collection of cortical columns that learn to do disparate jobs within specific frequency bands.  This means that each neural ‘module’ can process the internal/ external sensory stream differently depending on what the AGI is ‘thinking’ about; the modulation from its neighbours affects what each module does with the information it’s given.

During my research into episodic memory engrams I noticed that a second GTP was emerging. It is basically piggybacking on the main GTP but is out of phase, and because it’s using the same neural architecture as the main GTP it is influencing how the main GTP processes data… it’s very weird.

The new GTP is being driven by the harmonics of the main GTP.

This didn’t arise until I started integrating non-sensory areas of cortex, i.e. the frontal cortex. The frontal section basically learns the output patterns of the sensory areas. It then sends feedback to the sensory areas, influencing how they process the incoming data streams.  The feedback eventually causes a blending between the regions, where each is governed/ affected by the other.  This is a precursor to imagination; the frontal cortex can influence the sensory areas to ‘imagine’ a false sensory input based purely on learned external concepts/ experiences.  The system also uses this feedback to recognise its own internal ‘thought’ processes, because everything it ‘thinks’ goes back through concepts it has learned through experience.

It’s similar to hypnotism or deep meditation…cortical regions are learning the harmonics… I was expecting the system to do something like this, just not so soon.  The new secondary GTP is like… just the surface froth, reading between the lines, or the summation of interacting logical pattern recognition processes.

Perhaps consciousness seems so elusive because it is not an ‘intended’ product of the connectome directly recognising sensory patterns, consciousness is an extra layer. The interacting synaptic networks produce harmonics because each is using a specific frequency to communicate with its logical/ connected neighbours.  The harmonics/ interference patterns travel through the synaptic network just like normal internal/ sensory patterns.  Perhaps our sub-conscious is just out of phase, or to be more precise, our consciousness is out of phase with the ‘logical’ intelligence of our connectome.
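The harmonics idea above can be sketched numerically. This is my own toy illustration (not korrelan's code, and all the numbers are made up): a "main" oscillation at a base frequency carries a weaker secondary pattern on its first harmonic, 90 degrees out of phase, and single-frequency correlation can separate the two.

```python
import math

# Toy illustration (my own numbers, not korrelan's code): a "main GTP"
# oscillation at a base frequency carries a weaker secondary pattern on
# its first harmonic, 90 degrees out of phase.
N = 1000                       # samples (1 second at 1 ms per step)
dt = 0.001
base_f, harm_f = 5.0, 10.0     # base frequency and its first harmonic (Hz)

signal = [math.sin(2*math.pi*base_f*t*dt) +
          0.3*math.sin(2*math.pi*harm_f*t*dt + math.pi/2)
          for t in range(N)]

def component(sig, f, phase):
    """Correlate with a reference sinusoid (a crude single-bin DFT)."""
    return 2/len(sig) * sum(s*math.sin(2*math.pi*f*t*dt + phase)
                            for t, s in enumerate(sig))

main = component(signal, base_f, 0.0)             # in-phase base pattern
secondary = component(signal, harm_f, math.pi/2)  # out-of-phase harmonic

print(round(main, 2), round(secondary, 2))        # recovers 1.0 and 0.3
```

Both components ride through the same "network" (the summed signal), yet each is recoverable because it lives at its own frequency and phase - a loose analogy for a secondary pattern piggybacking on the main one.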

It was very difficult to track the patterns within the GTP but now it’s even harder lol.

This could just be an unforeseen error/ design problem of course… I need more Rum.

 :)
Title: Re: The last invention.
Post by: Zero on December 17, 2017, 09:05:14 pm
I understood everything you said, and your interpretation of what seems to happen with the second GTP makes sense, and sounds correct.

But I need to apologize for what I'm going to ask. I have to ask.
How do you know it's not just noise? The whole thing I mean, not only the second GTP. What intelligence does the whole connectome show?

It's a real question, I swear. Not an insult or  something. I'm a big fan of you and your Beowulf, just trying to follow and understand.
Title: Re: The last invention.
Post by: korrelan on December 18, 2017, 12:13:09 am
Quote
How do you know it's not just noise? The whole thing I mean, not only the second GTP. What intelligence does the whole connectome show?

Hmmm… that’s a very good question… prepare for a long winded answer lol.

I presume you have read my project page and have an idea of what I’m trying to achieve.  I’m reverse engineering the human brain, looking for a single AGI solution based on the human nervous system that makes every other narrow AI obsolete… a machine at least equivalent to us.

Ok... It depends on your definition of intelligence (don’t sigh lol), and the level of abstraction at which you are looking for intelligence, or the root/ cause of intelligence.  An ant colony shows ‘intelligent’ behaviour whereas a single ant does not, so from this point of view intelligence is a group/ collective/ compound behaviour. Slime mould also acts ‘intelligently’ but again it’s just a collection of single-celled organisms.  There is some kind of innate/ emergent intelligence that these simple organic systems are leveraging.

That’s basically what my project is about. 

I’ve designed a single neuromorphic neural network (NNN) that can learn, self organise and self categorise information. The NNN can learn to recognise objects/ words given ocular input, or given time the same NNN can adapt through plasticity and learn phonemes/ spoken words (see vids) with no user input, totally unsupervised.  The same NNN handles long/ short term memory, prediction, episodic memories, etc.
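Unsupervised self-categorisation of the kind described can be caricatured with a textbook analogue (this has nothing to do with the NNN's actual internals, it's just the simplest classical example): online winner-take-all learning, where two competing units discover two input clusters with no labels at all.

```python
import random

random.seed(0)
# Loose textbook analogue of unsupervised self-categorisation (nothing to
# do with the NNN's internals): online winner-take-all learning - two
# competing units discover two input clusters with no labels at all.
prototypes = [random.random() for _ in range(2)]
data = ([random.gauss(0.2, 0.02) for _ in range(200)] +
        [random.gauss(0.8, 0.02) for _ in range(200)])
random.shuffle(data)

for x in data:
    w = min(range(len(prototypes)), key=lambda i: abs(prototypes[i] - x))
    prototypes[w] += 0.1 * (x - prototypes[w])   # move the winner toward the input

print(sorted(round(p, 1) for p in prototypes))   # units settle on the cluster centres
```

No teacher ever tells the units what the categories are; competition alone pulls one prototype to each cluster.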

I think the collection of artificial neurons, glial cells, etc, has the same ‘innate intelligence’ as the biological systems described earlier.  I believe that this simple low level ‘intelligence’ is what compounds to produce our ‘high (lol)’ human level of intelligence.

I have expanded this model to produce an artificial whole brain connectome seed based on the layout of the human connectome.  The idea is that over time as the system grows/ expands and experiences ‘reality’ I will learn more (always fun) and eventually figure this out and create a true AGI.

I know from experimentation that the visual cortex area can learn to recognise objects, and the audio cortex can learn phonemes, etc… now I’m letting other cortex areas learn their sparse outputs and feed back etc… just like I think the human equivalent does.

I’m basically drinking rum, reading AI/ neuroscience research/ white papers, figuring out how I think it all works, coding, running simulations and banging my head on the desk lol.

I think deep learning, heuristics, convolutional networks, etc are very narrow AI schemas and will never result in a true conscious, self aware AGI... I believe my approach will... my current connectome seed only has a few million components… but its growing… and it’s learning… and so am I.

 :)
Title: Re: The last invention.
Post by: Zero on December 18, 2017, 08:55:11 am
Thank you! Your explanations are very clear.

Can it act? Does it feel pleasure and pain? Do you plan to connect it to minecraft for example?

Title: Re: The last invention.
Post by: ranch vermin on December 19, 2017, 05:50:56 pm
Yes, You should add a motor component.
Title: Re: The last invention.
Post by: korrelan on December 20, 2017, 10:54:43 am
The system does have the equivalent of a motor cortex and cerebellum. I’m driving speech output through good old Microsoft Mary at the moment. Speech is expressed as a temporal pattern, which was causing problems, so I’m writing a module to re-combine individual phonemes back into words and sentences in real time… ear muffs are a requirement when working on that one lol. 

I can also drive any number of servos through an output module that recombines neural patterns back into a linear output.  This drives the head servos/ steppers and will also drive the arms/ hands... which I'm still working on.
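One standard way to recombine a distributed pattern into a single linear drive is population decoding; this is a generic sketch (the angles and firing rates are invented by me, it is not korrelan's output module): each unit "votes" for its preferred servo angle, weighted by its current activity.

```python
# Generic population-decoding sketch (the angles and firing rates are made
# up, this is not korrelan's output module): each unit "votes" for its
# preferred servo angle, weighted by its current activity, and the votes
# recombine into one linear command.
preferred = [0, 45, 90, 135, 180]       # preferred angle per unit (degrees)
activity = [0.0, 0.1, 0.8, 0.3, 0.0]    # current firing rates

angle = sum(p*a for p, a in zip(preferred, activity)) / sum(activity)
print(round(angle, 1))                  # single servo command in degrees
```

The weighted average lands between the preferred angles of the most active units, so a smooth analogue command emerges from a discrete population.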

The motor cortex was required because the system has to be able to hear its own speech output/ patterns and see/ feel its own movements; this is a requirement to enable the schema to build an internal model of its own body/ reality, which of course is linked to self awareness.

 :)
Title: Re: The last invention.
Post by: ranch vermin on December 20, 2017, 12:16:22 pm
God, that sounds impressive, what you're asking of yourself.
I'm going for a more direct artificial approach... but I'm stuck on an ordinary bug; I'll get back to it tomorrow.

Maybe if you see some success from me, you might want to shortcut yours sometime, if you don't get anywhere for a while.  It's just an idea.
Title: Re: The last invention.
Post by: Zero on December 20, 2017, 02:43:35 pm
The system hearing and listening to its own voice is very exciting!! So, it can feel itself, wow. It must be difficult to monitor what's happening then... I mean, the vids are always really cool, but they don't show the underlying moving structures... God damn, I'd love to see those structures, with clear labels on them. I know it's impossible, because they're all plugged together.

EDIT: I see you don't resent me for suggesting minecraft, how nice of you  ;D
Title: Re: The last invention.
Post by: LOCKSUIT on January 18, 2018, 03:56:37 am
I'm really excited about korrelan's project. I too would love to see more, like a tour of his lab, his Beowulf, his notes, and the AGI blueprint overview. The more I/we know about you / your AI / your knowledge, the better I/we can give feedback.

Also, I would like to cooperate on a higher level with any of yous, like a DeepMind/Google Brain team. Google does very good because they have so much money, computational power, and top experts (teams) combining different skill sets (I have a unique set: passion+unlocking). While we can't easily get supercomputers, we can easily unite and create something powerful that runs efficiently.

Korrelan, I'm reading a book called www.DeepLearningBook.org and getting a better understanding of terminologies, and you/I may not understand some of my/your standard/unique terms, but maybe the Global Thought Pattern didn't work/emerge until the frontal cortex was implemented? Because the GTP is a super-high representation at each step and represents the active representations including the GTP itself (hence mirror-mirror effect to infinity). Btw do activated representations ex. in higher layers fade in brightness, slowly leaking energy? As you watch a video for example, this would show a slope of bright-to-dark lightshow moving around the brain.
Title: Re: The last invention.
Post by: korrelan on January 18, 2018, 09:45:46 am
Quote
I'm really excited about korrelan's project.

I’m just getting back up to speed after the holidays...

Quote
I'm reading a book called www.DeepLearningBook.org

Good. All information is useful… obviously the more you know the better.  DeepMind is a very cool concept/ architecture but in its present format I personally don’t think it will ever be anything but a useful tool… a narrow AI.  Although the schema is incorrect, DeepMind would currently represent the equivalent of the retina and the V1/ V2 visual cortices.
 
Quote
hence mirror-mirror effect to infinity

The frontal cortex is made up of hundreds of sub areas that are self organised according to which other cortex networks/ regions they are tuned to.  So for ocular/ vision, for example, it basically listens to the patterns in the GTP that originated in the visual cortex.  As it learns the patterns it generates its own sparse patterns, which eventually feed back to the visual cortex hubs amongst many other areas. Together they eventually reach a kind of equilibrium where the visual cortex’s definitions/ understanding of visual patterns are influenced by the state of the frontal cortex and vice versa. Hundreds of other networks are all doing the same, intermixing their information in thousands of hubs/ relay points.  This is where internal, ocular, audio and tactile sensory streams are filtered, mixed and matched.
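That "equilibrium" can be caricatured as a fixed point of mutual feedback. The coefficients below are mine and purely illustrative: two regions repeatedly blend a little of each other's state into their own until neither changes any further.

```python
# Caricature of the feedback "equilibrium" (the coefficients are mine,
# purely illustrative): two regions repeatedly blend a little of each
# other's state into their own until neither changes any further.
visual, frontal = 1.0, 0.0
for _ in range(100):
    visual, frontal = (0.9*visual + 0.1*frontal,   # visual nudged by feedback
                       0.9*frontal + 0.1*visual)   # frontal learns visual output

print(round(visual, 3), round(frontal, 3))         # both settle on a shared state
```

Each region ends up partly defined by the other - the numerical analogue of two cortical areas whose definitions converge through feedback.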

Quote
Btw do activated representations ex. in higher layers fade in brightness, slowly leaking energy?

Actual images as you understand them never actually get past the retina.  The retina re-encodes the light patterns into a sparse/ compressed temporal representation that is finally encoded by the retinal ganglion cells before being sent down the optic nerve.  You have approx 150 million light receptors (rods/ cones) in each retina, but only about a million nerve fibres in the optic nerve that passes the information to your brain. 
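The ~150:1 squeeze can be caricatured as sparse coding. The numbers below are purely illustrative stand-ins (this is not the real retinal code): transmit only the strongest responses plus their positions, and discard the rest.

```python
import random

random.seed(1)
# Caricature of the ~150:1 squeeze (illustrative numbers only, not the
# real retinal code): transmit just the strongest responses plus their
# positions, and discard the rest.
receptors = [random.random() for _ in range(1500)]   # toy rod/cone responses
fibres = 10                                          # toy optic-nerve capacity

sparse = sorted(enumerate(receptors), key=lambda iv: -iv[1])[:fibres]

print(len(receptors) // len(sparse))                 # 150:1 compression ratio
```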

When you view a scene/ image for the first time your past visual experience fills in approx 85% of the information you think you are actually seeing.  Then, as you move your fovea (hi-resolution centre) around the scene, focusing on objects you are uncertain/ curious about, the finer details are filled in.  This is part of how your imagination works; you are able to mentally superimpose visual information on top of what you think you are actually seeing, or ‘see’ pictures in your ‘mind's eye’.  It’s the main cause of hallucinations and many other mental phenomena like pareidolia and synesthesia.

So no… the patterns that represent what you are looking at don’t actually fade/ dim in the common understanding of the terms, but they do lose their initial resolution/ complexity as they are converted into the mental facets your brain requires/ understands.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on January 18, 2018, 06:40:39 pm
So the Frontal Cortex affects the Sensory Cortices, the Sensory Cortices affect the Frontal Cortex, repeat? What is the purpose of the Frontal Cortex affecting the ex. Visual Cortex? Better recognition accuracy? Higher level concepts/representations? And what is the purpose of the Visual Cortex speaking back to the Frontal Cortex? To improve how it's doing its job? Backprop?

I know there's no images in the brain now. I wasn't talking about this though. What I meant was that the neurons that stand for/represent eyes/faces/talking/etc could dim in energy and fade. Why? > Because normally neuron selections/activations in our brain's algorithms go black after activation, where, if they slowly de-activate and get darker slowly, then it draws the search for new input, i.e. new sensory input considers what you were last thinking. Simply put, any activity in the brain should slowly fade away, but not instantly turn black. Do you/s do this?
Title: Re: The last invention.
Post by: LOCKSUIT on January 27, 2018, 08:00:52 pm
Hey korrelan, can you give us a somewhat thorough list of where you learnt all of your knowledge from? I want to be more like you.

Here's an example of the type of reply I'm expecting from you:

EXAMPLE:
These 3 e-books are the most important I read.
Took Computer Science at Harvard for 4 years (online course equivalent is: "n")
Read about 80 random articles.
Watched about 60 lectures/videos on YouTube.
Title: Re: The last invention.
Post by: LOCKSUIT on January 29, 2018, 05:57:12 am
(This makes 3 replies for korrelan to read):

https://en.wikipedia.org/wiki/Synesthesia

I seem to have the Spatial Sequence Synesthesia. Don't yous? I can imagine in the daytime pretty powerfully. I can see a clock all around me, in weird strange views, like a bit out of body, or not!, and "see" them to the left, idk, extending my visual field by pure representation?

I also have the good memory. I often come up with ex. 8 things to do in bed for tomorrow (ex. wash, hike, run, and skydive will help create a new AGI) and if I make 1 picture, I only have to remember 1 thing to get em all! -> A generated video of a guy running up a mountain that's rained on and then skydives from off the top.
Title: Re: The last invention.
Post by: Zero on January 29, 2018, 08:36:53 am
It's good to see that you want to learn things, LOCKSUIT.  O0
Title: Re: The last invention.
Post by: korrelan on January 29, 2018, 03:53:04 pm
Soz for the delay… I’ve been a bit busy.

Quote
What is the purpose of the Frontal Cortex affecting the ex. Visual Cortex?

The way you interpret what you are seeing at any given moment is affected by both your current state of mind and your past experiences.  As I mentioned earlier your brain fills in/ simulates most of what you experience as reality, to do this there has to be a blending of the limited visual pattern/ stream with the ‘imagination/ experience’ patterns.  Your connectome is constantly guiding/ influencing what you think you are seeing. 

For example… if you were to drop a small screw on the floor, you are able to mentally tune your visual system to heighten the detection/ recognition of the visual properties composing the screw. 

Quote
Simply put, any activity in the brain should slowly fade away, but not instantly turn black. Do you/s do this?

Oh I get what you mean, like the neurons should stay primed after firing so the next flush of information is more likely to activate them, so a chain/ pattern can build up from the stream of incoming information… similar to the persistence of vision effect?

https://en.wikipedia.org/wiki/Persistence_of_vision

If you think about the firing schema of any type of artificial neuron it relies on a set firing methodology, adding a slowly decaying activation threshold would negate any kind of inbuilt ‘logic’ that tests for a firing level/ threshold.  The essence of an artificial neuron whether it be a classic weighted/ sigmoid or a spike derivative relies on the accuracy of the firing mechanism, adding a slowly decreasing activation function would totally mess up the logic/ pattern matching abilities of the system.
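For comparison, the textbook spiking-neuron compromise keeps the threshold fixed and puts the decay on the membrane potential instead - a standard leaky integrate-and-fire update (a generic model, not korrelan's neuron):

```python
# Textbook leaky integrate-and-fire update (a generic model, not korrelan's
# neuron): the membrane potential decays between inputs, so recent activity
# "primes" the neuron, while the firing threshold itself stays fixed.
def lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # decay the old potential, add the new input
        if v >= threshold:        # fixed threshold - the firing logic is intact
            spikes.append(1)
            v = 0.0               # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.0, 0.5]))   # three sub-threshold inputs sum to a spike
```

Because the decay sits on the potential rather than the threshold, temporal "priming" and a crisp firing test coexist in one mechanism.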

Quote
list of where you learnt all of your knowledge from?

Harvard… haha… they would have thrown me out lol. I’ve never taken any computer science courses; I did write computer course syllabuses and teach them though so I guess that counts...

Ok seriously?

If I remember correctly you’re in your twenties… I’m 52 this year… so obviously age/ time has a huge bearing on my knowledge/ insights (for what they are worth).

I really don’t know what to advise, except look how far you have come, how much you have learned in the past few years.  Even I will admit the current Locksuit is a vast improvement on Advancessss or the King lol… this shows you are very capable of learning and improving your knowledge/ insights… just keep improving… if my generation fails to produce an AGI, humanity is going to need a fresh new outlook.

It takes time for a human to integrate knowledge and skills. I suppose the single most important piece of advice I can give is… make sure you understand what you are reading.  If something doesn’t make sense then re-read, research the base topics, and keep going until you have a thorough understanding of the topic.

I’ve always been interested in anything to do with engineering/ science/ psychology; I read hundreds of articles every week on a very wide range of subjects: neuroscience, robotics, electronics, etc. I even read sites that deal with religion, UFOs and ghosts; not because I believe in them but because I need to understand what makes these people tick, why they believe what they do… it's all important.

We are all unique, we all have our own particular skill sets… that’s good.  That’s why the human race has come this far; we each approach a problem space from a different angle.

Quote
I seem to have the Spatial Sequence Synesthesia.

We all have traces of schizophrenia, synesthesia, illusions, paranoia, etc; it's part of what makes us unique and human. Some people have more than their share of a particular trait though, and this means they don’t fit into our society's ‘normal’ accepted scope/ range.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on January 30, 2018, 05:50:17 am
"Simply put, any activity in the brain should slowly fade away, but not instantly turn black. Do you/s do this?"

It's exciting that you understood me, and I understood your reply back today! I see! It would mess up the thresholds. Well, after I stare at lots and lots of anime girls, or pikachu, random swirls and yellow things will look like I've just seen pikachu or anime girls, just like swirls in carpet/etc *naturally* look like faces being seen. After playing Pikmin I also almost see pikmin running along things in my house. Also, notice how when you look around, you are kinda like a video? Your life/sensory is like an 8 second video of the last 8 seconds. I call this effect VideoSense. What may happen is multiple things get selected and higher representations are selected (or fuller energizing of something not easily selected) and then "seen" by "getting a look around the hallway for 8 seconds". Each feature has a short-term memory neuron that strengthens/weakens faster than long-term memories.

So you're saying if you recognize a screw, and drop something, this selects frontal cortex representation "n", which sends signals to the visual cortex's screw representation features? Ya but, the very thing I brought up above (yellow blobs looking like I've just seen pikachu) is how, no? -> You see the screw (or pick one up without seeing it but see it internally as a memory selected, or even ask yourself what you dropped), and then when you drop it, you feel this, do "look at ground actions", and the features of the screw are partially activated...
Title: Re: The last invention.
Post by: raegoen on January 31, 2018, 06:24:41 pm
Hey LOCKSUIT, I think what is happening in the situation you described, when you look at something yellow and it looks like Pikachu, is that your brain is constantly trying to match your sensory input (in this case sight) to a pattern it has already encountered before. In this case it reminds you of Pikachu, especially since you said you watched it prior to this experience; that memory/pattern is closer to your consciousness, so your awareness is drawn to that particular memory.

When you drop a screw and try to find it, you're priming your brain to focus on and respond strongly to the pattern of the screw in your sensory receptors (again, this being sight) by recalling what the screw looks like, and creating a fake reality in your brain in which you've already found the screw; then when reality matches the fake closely, you know you've found it. I noticed something interesting in a similar experience I've had. I was recently building Lego with my younger brothers. They of course have a huge bin full of random Lego pieces, and very often I was looking for two specific pieces at a time. However, when I focused on finding both at the same time, my ability to find them decreased significantly compared to when I focused on finding just one at a time. This shows how the human brain has limited multitasking abilities; if you want to do a job well, you have to focus and give that one task your undivided attention.
Title: Re: The last invention.
Post by: LOCKSUIT on February 01, 2018, 06:18:50 am
Hey korrelan, I noticed there are many cortical columns in the human brain; is each a feature detector? I thought you have ex. 50 feature detectors in layer 1, then ex. 40 higher-level representations/features in layer 2 ex. eye/nose... and layer 6 faces/scenes/complex groups ex. a dining table made up of many things. Again, SIX layers... now the internet tells me there are thousands of these "6 layer" nets... ?? ... or, is it just 1 six-layer? Simply that the perpendicular-facing cortical columns are all part of the 6 layer sheet? And each is a feature detector mini-processor?

Reason I'm confused is because if the whole neocortex was 1 "6 layer" sheet, containing allll things ex. faces horses lines tables motion etc, then each neuron is ....................oh wait, ex. 8 features means 8 clone networks right? Hence many cortical columns..... each acting as a "neuron" making up the whole neo-sheet hierarchy, correct? Ex. nose column + eye column = face column in "layer 3"?

https://www.youtube.com/watch?v=RSNofraG8ZE
Title: Re: The last invention.
Post by: Bob on February 02, 2018, 12:48:28 am
Wow, korrelan, you really have something going on with this project. :o

LOCKSUIT, as far as I understand it, the six layers in a cortical column are just the way the column processes information. Different parts of the neocortex are responsible for lower/higher level processes, but a single column doesn't do everything from line detection to face recognition. It does just a single thing. Columns are connected through the thalamus, which acts as a "relay station" between them. Columns are also interconnected in other ways. So some low-level information goes through the thalamus to one part of the neocortex, gets processed (e.g. line detection), something is sent back to the thalamus, then sent to a higher level part of the neocortex (e.g. to do more complex detection). Then rinse and repeat? I haven't really looked at how the brain handles visual information, but I would guess it works in similar ways for different sensory inputs.

I like the following image. I think it illustrates this quite well.
https://goo.gl/images/LPHnmJ
I'm not sure what the paper this image comes from says about it because I don't have access.

Thank you for the video. I learned something from it. It's a bit weird to see a cat that way though.
Title: Re: The last invention.
Post by: LOCKSUIT on February 02, 2018, 01:10:05 am
Wait.... if each column is 1 detector.... then why 6 layers? 6 layers implies a line-to-face hierarchy...

If all of a column stands for 1 representation/feature/concept, then there are no layers in the neo haha oh come on... I mean you can emulate the layer thing by connecting the columns but..... so... right? And isn't that stupid?
Title: Re: The last invention.
Post by: Bob on February 02, 2018, 12:12:37 pm
Well, most input from the thalamus enters the columns in layer 4 - about in the middle. So a line-to-face hierarchy in a single column doesn't make much sense to me.

Why do you think it is stupid? Or more stupid than having a line-to-face hierarchy in a single column (there would have to be some seriously dense-packed magic processing if everything happened in such a small space). Now, given that a column detects only one feature, that doesn't mean the different layers aren't there for a reason. They process/do something with the information in some way. I guess you could emulate the layer thing. But what do you connect to what? And you still need to implement what happens inside the columns.

But I would love to hear korrelan on this and on how he has implemented it.
Title: Re: The last invention.
Post by: LOCKSUIT on February 02, 2018, 09:45:03 pm
see attachment

This expresses my feeling and "image" I'm experiencing.
Title: Re: The last invention.
Post by: korrelan on February 03, 2018, 05:47:30 pm
Most current estimates state that there are approximately 86 billion neurons total in the human brain with around 16 billion neurons in the cerebral cortex.  Some animals have more neurons total but we have the highest quota in the cerebral cortex leading academics to presume this is the seat of our intelligence.

According to the current level of understanding in academia, the human cortex seems to have at most six layers; it depends on how you define/ perceive the laminar structure, and some areas have fewer than six.  The cortex is comprised of modular units, a similar repeating pattern that gives the appearance of functional modules/ columns.  There are approximately two million cortical/ hyper columns; each column is comprised of approximately 80 mini columns, and each mini column has 80 to 110 neurons distributed through the six (or fewer) layers.
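The quoted counts hang together arithmetically. A quick check, taking the midpoint of the 80-110 range:

```python
# Rough arithmetic check of the counts quoted above.
columns = 2_000_000        # cortical/ hyper columns
minis_per_column = 80      # mini columns per column
neurons_per_mini = 100     # midpoint of the 80-110 range

total = columns * minis_per_column * neurons_per_mini
print(total)               # 16,000,000,000 - the ~16 billion cortical neurons
```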

Ok, there is some good news… and some bad news lol.

The good news is that cortical columns do exist in the human cortex; they do have a modular function and act as distinct processing units.  As we get older our cortex thins and the columns get further apart… and this gives you a clue as to the bad news.

The bad news is that you can’t use the columnar structure as a guide to figuring out how the brain learns, because the columns are not an innate structure; they are not produced by ‘evolution/ DNA’… but by experience/ learning.  The columns don’t adhere to a standard recognisable format across an average population of cortices; everyone’s columnar organisation is going to be totally different.  The columns are a product of self organisation/ learning/ experience.

Of course this gives us a target/ clue as to the wiring/ coding/ transmission protocols the brain uses, because the correct schema should produce columns out of a regular laminar sheet of neurons with random receptive fields.

I produced this short vid last night showing the formation of columns in a random sheet of neurons using my AGI’s internal wiring schema. The sheet is subjected to 80 GTP pattern facets; as it self organises from the data stream you can see the columns forming, represented by the colour boundaries.

https://www.youtube.com/watch?v=yewFPnVBQNo
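The general effect - column-like regions emerging from a random sheet purely through local, unsupervised updates - can be shown in miniature. This toy (my own, not the AGI's wiring schema) trains a 1D line of randomly weighted neurons on a stimulus stream; because each winner's *neighbours* are updated too, nearby neurons come to specialise in similar stimuli, so the sheet smooths out into contiguous regions.

```python
import random

random.seed(2)
# Toy self-organising sheet (an illustration only, not the AGI's wiring
# schema): a line of randomly weighted neurons is exposed to a stimulus
# stream; updating each winner *and its neighbours* makes nearby neurons
# specialise in similar stimuli, so column-like regions emerge.
sheet = [random.random() for _ in range(20)]

def roughness(w):
    """Total difference between adjacent neurons - high for a random sheet."""
    return sum(abs(a - b) for a, b in zip(w, w[1:]))

before = roughness(sheet)
for _ in range(500):
    x = random.random()                              # incoming stimulus
    w = min(range(len(sheet)), key=lambda i: abs(sheet[i] - x))
    for i in range(max(0, w - 1), min(len(sheet), w + 2)):
        sheet[i] += 0.2 * (x - sheet[i])             # winner plus neighbours

after = roughness(sheet)
print(after < before)   # neighbouring neurons now respond to similar stimuli
```

The neighbourhood update is the whole trick: drop it and every neuron specialises independently, with no spatial grouping at all.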

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 03, 2018, 10:33:38 pm
Korrelan you gotta answer my question though, you must have missed it above.

See my drawing above? Which way is the human brain? If not the right side (like ANNs are set up), then it's gotta be like the left side of the drawing, so that it is a hierarchy of higher layers, get it? However, if it is the left side of my drawing, then that makes me sad cus it's not cool....
Title: Re: The last invention.
Post by: korrelan on February 04, 2018, 10:13:05 am
Hi Lock

Actually neither is correct but the closest I’m afraid is the left diagram.  Layer one is at the top by the way, with layer six being the deepest.

You seem to have mixed up the function of the six cortical layers with the layers in a CNN pooling network; it's not one layer per recognition process.

As Bob described, the six layers are used as a processing mechanism, not as recognition layers.  The layers are used by the structure/ connectome of the brain to process/ blend information from many different cortical regions and deep brain structures.

The columns represent very abstract facets of recognition, even in the visual cortex its not just line/ gradient orientation columns, the columns change their function depending on the overall state of the GTP/ current thought pattern.  Even the state of the surrounding columns can change the functionality of a single column.

Keep in mind that everyone has different opinions and theories on how the brain functions; even academia can't agree on most aspects of the layout and functions of the brain's regions.  Anything I state is just my opinion from my research/ reading and experimentation.  There is no reason why you couldn't use the layout in your diagrams to create a theory, if you can figure out a way to make it all work.  I choose to stick as closely as possible to our own brain's design; this is not necessarily required to build a functioning AGI.

 :)
Title: Re: The last invention.
Post by: ivan.moony on February 04, 2018, 10:41:00 am
I imagine a NN as a kind of black box for detecting features. How you connect features into a hierarchy like (face) / (hair - eyes - nose - mouth - ears), and so on deeper, is natural programming being done outside the black box. Now, this outside programming can also be done by an outer NN, recursively. At the end, we would have |NN inside NN inside NN ...| for |features inside features inside features ...|

If I am right, then it is possible to recursively structure NNs to any level of depth, producing the left Locksuit's image effect.

But maybe I'm wrong...
Title: Re: The last invention.
Post by: korrelan on February 04, 2018, 10:50:33 am
@Ivan

I agree; part of the brain's recognition process is a parallel recursive pattern.  But the pattern does not move down through the layers of the cortex with each layer selecting a different facet; it moves through the whole structure of the cortex and the lower deep structures.

The ‘deepness’ of the recognition cycle does not come from the deepness of the layer but from the time the cortex has had to process/ recognise the pattern… if that makes sense.

 :)
Title: Re: The last invention.
Post by: ivan.moony on February 04, 2018, 10:59:23 am
Is it the case that:
- the same NN trained data is used for recognizing any of a number of features, no matter of how the features form a hierarchy
- multiple layers are used only for adjusting the accuracy of a single NN (sometimes less number of layers give better results), without forming a hierarchy between recognized objects
Title: Re: The last invention.
Post by: LOCKSUIT on February 04, 2018, 08:39:04 pm
I figured out why korrelan didn't really reply back to me yesterday LOL - he was drinking again lol! As he stated so.

If a neuron dies, the others right beside it in the same column do the same job - they ALL stand for ex. a line or nose or face = redundancy/robustness/accuracy/reliability. Next, it doesn't really matter what order they are in i.e. higher levels; the connections are the SAME.

But I seriously want to know why Wiki / Ray Kurzweil said "and it goes UP the layers to higher concepts".

Also, so the input to the columns starts at the top? Or layer 4 on the SIDE of the column?

Wait a minute.... Also, Wiki shows ex. V1 being for more simple features at the back of the human brain, and by the time it gets to IT near the side of the temporal area it is higher feature representations! So they are grouped! But sideways! Right?

See attachment below:
Title: Re: The last invention.
Post by: korrelan on February 05, 2018, 11:10:39 am
@Ivan

Quote
the same NN trained data is used for recognizing any of a number of features, no matter of how the features form a hierarchy

If someone was to take a pencil and draw/ outline a letter of the alphabet on your cheek you would be able to recognise the letter straight away, with no prior experience. You could do the same easily for any area of your skin's surface.  This gives us quite a few clues to the processes involved in the overall recognition schema.  Past the initial sensing on the skin's surface the information is being converted into a commonly understood pattern that represents a character of the alphabet.  You only have a few million cortical columns at most; there just are not enough neurons in your brain for each area of skin to have its own recognition hierarchy.

Quote
multiple layers are used only for adjusting the accuracy of a single NN (sometimes less number of layers give better results), without forming a hierarchy between recognized objects

Yes, the layers are part of the mechanism that allows the cortical regions to recognise stimulus.  If you insert a probe down through the layers perpendicular to the surface then all the neurons in that column/ stack have the same basic receptive field layout.  Think of each column as a cog in the machine.  Your global thought pattern (GTP) is constantly cycling through your connectome; the sensory cortex regions take in external stimulus, recognise it, convert it into a sparse pattern that depends on the current state/ feedback within the GTP and inject the pattern back into the GTP.  This affects the GTP, which in turn changes your perception/ thought train, which then affects the recognition of the next sensory frame… repeat.

@Lock

Yes I was having a drink lol.  :D

Mr Kurzweil is talking about layers within the hierarchical learning structure/ schema, not the layers within the cortex.

Most sensory stimulus does indeed enter the cortex at layer four/ five, though this is not through the side lol.  Columns are never cylindrical, and there is no actual physical space between them.  The boundaries between columns are mostly filled with inhibitory neurons as well as the white matter lateral connections, glial cells, etc.  They are sometimes drawn as cylindrical just to aid understanding.  The afferent axons that carry sensory stimulus into layer four pass through layers 6 and 5, and indeed sometimes make synapses onto these layers.  The cortex is a continuous sheet, physically sub-divided only by the lobe boundaries, and a column is just a small area that has specialised in a particular trait; it's not an actual physical structure… it's a logical/ functional structure.

Quote
So they are grouped! But sideways! Right?

Yes.  In certain areas of the cortex sensory stimulus can be mapped as moving across the surface to adjacent areas, and research seems to link this with the hierarchy of recognition. Though keep in mind that other deep brain structures are also playing their part, as is the rest of the cortex.

 :)
Title: Re: The last invention.
Post by: johnphantom on February 06, 2018, 02:04:40 pm
"The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)"

I have created a working model for a stateless computer. It is pure connectionism, which I look at as a geometry of information. Using quantum nonlocality it could operate instantaneously as input happens:

http://tinyurl.com/statelesscomputer

or directly:

https://app.box.com/...p8tir00r0pf1467

Note that this site does not support advertising and there is nothing that you have to download to read what I wrote.

You can contact me at johnphantom@hotmail.com
Title: Re: The last invention.
Post by: korrelan on February 15, 2018, 03:56:23 pm
https://www.youtube.com/watch?v=VwEgOgKeinU


The ideal shape/ volume I've found so far for the cortex connectome model is spherical, but the four lobes per hemisphere, the separate hemispheres themselves and even the gyri/ sulci are still required; they all have a functional purpose within the model.

This is an experiment to roughly map my model's connectome to the same area/ shape as the human cortex.  Besides the procrastination element to this experiment, the idea is that it should aid understanding of the schema for the layman… and I think it looks cool lol.

Each voxel = one functional column (50 ish neurons, 1000 ish synapses); 10.5K voxels in total.

Although even this fetal stage can learn hundreds of millions of patterns I'm sticking to the usual 40 distinct colours for clarity.  40 patterns learnt * 250 facets = 10k pattern facets learned.

As usual the voxel colours match the bar graph lower left.  Each colour represents one injected/ learned full pattern.  As the patterns are injected into the model (top right) the height of the bars represents confidence in recognising that pattern.  Ideally just one bar should rise for each pattern.  If more than one bar rises then a similarity between the patterns exists.
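For a rough sense of scale (using the approximate per-column figures above), the totals work out as follows:

```python
# Rough scale of the model described above (all figures approximate).
voxels = 10_500               # functional columns
neurons_per_column = 50
synapses_per_column = 1_000

total_neurons = voxels * neurons_per_column    # 525,000 neurons
total_synapses = voxels * synapses_per_column  # 10,500,000 synapses

patterns = 40                 # distinct colours/ patterns injected
facets_per_pattern = 250
pattern_facets = patterns * facets_per_pattern # 10,000 pattern facets

print(total_neurons, total_synapses, pattern_facets)
```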

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on February 17, 2018, 11:51:20 pm
Omg I'm so excited now...Did you just make your project into the shape of the human brain!? For a project like this that's trying to re-create the human mind/brain, that is definitely a step forward.
Title: Re: The last invention.
Post by: LOCKSUIT on March 14, 2018, 11:04:23 pm
Oh so GTP is C, A, T being heard? The selection keeps updating each step.

And after hearing "CAT", it echoes, and this updates the GTP each step, which also updates the GTP each step!! WHICH ALSO... ? What's the point in that though? It's the same word. I know hearing CAT 6 times is a new "representation" or "word" or "sentence" but it has no meaning/use in English Vocabulary...unless, I was never told......

CAT
CAT CAT CAT CAT
CAT
CAT CAT

See, I don't understand that sentence. 4 cats heard as a GTP self feed-in was useless in this understanding of it.
Title: Re: The last invention.
Post by: korrelan on March 15, 2018, 09:08:52 am
Quote
Oh so GTP is C, A, T being heard? The selection keeps updates each step.

Hearing the phonemes C.A.T. triggers a specific GTP pattern that encompasses everything you know from experience about cats. This includes their shape, size, number of legs, colour, movement patterns, who owns one, the last time you saw one, everything relevant to your concept of a cat.  Your Cat GTP pattern can be triggered by any piece/ facet of the information.

So… I’m thinking of an animal, it’s below my knee height, it has four legs, it can run and it has a tail…

As you read the animal's attributes, pattern facets are added to your GTP; at this point your GTP is running a generic pattern for certain types of animal; it could be a cat, pig, dog, ant eater, etc.  Your GTP pattern is priming all the possibilities linked to this pattern based on the given information; this is how you can suggest/ think of the possible animals.  You are just waiting for a piece of information that will complete one of the patterns and trigger a specific concept pattern recognition process…

It barks…

Click… your generic ‘four legged animal’ GTP pattern has found a match/ lock and your ‘dog’ GTP pattern has accessed and included all your knowledge/ experience of dogs into your GTP.

The more knowledge you accumulate related to a concept the more precise and encompassing the GTP concept pattern becomes. 

You are now reading about my GTP theory.  Your GTP is now running a specific pattern that includes all the sub-patterns/ information/ knowledge that ‘makes sense’ to you about my theory.  As facets of what you are reading make sense, that’s pattern recognition triggering amongst your already learned concepts about other subjects.  You are linking my idea/ theory together with pre-existing learned information.  So the next time you see GTP it will initiate a pattern that includes everything you have learned/ linked into your concept of GTP.

Think of the GTP as a mixing bowl, depending on the ingredients you throw in the end result will be specific but the possibilities are endless.
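The mixing-bowl idea can be sketched in a few lines; this is only a toy illustration (the concepts and facet names are invented, not taken from korrelan's model). Concepts are sets of facets, arriving facets prime every concept that still fits, and a unique survivor is a recognition lock:

```python
# Toy sketch of facet-based priming: concepts are sets of attribute facets;
# as facets arrive, candidates that contain all observed facets stay "primed",
# and a unique survivor is a recognition "lock".
concepts = {
    "cat": {"four legs", "below knee height", "tail", "can run", "meows"},
    "dog": {"four legs", "below knee height", "tail", "can run", "barks"},
    "ant eater": {"four legs", "below knee height", "tail", "long snout"},
}

def primed(observed, concepts):
    """Return the concepts whose facet sets contain every observed facet."""
    return {name for name, facets in concepts.items() if observed <= facets}

observed = {"four legs", "below knee height", "tail", "can run"}
print(primed(observed, concepts))   # several candidates still primed

observed.add("barks")
print(primed(observed, concepts))   # lock: only 'dog' remains
```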

The GTP is a parallel schema, so if we use the shuffle combinations of a deck of playing cards as an analogy you will get some idea of the complexities possible with just 52 pattern facets.  The number of possible combinations from just 52 cards is 8.06e+67 or…

80658175170943878571660636856403766975289505440883277824000000000000

https://www.youtube.com/watch?v=uNS1QvDzCVw

So from just 52 attributes, size, number of legs, etc that’s how many objects/ concepts could be triggered/ recognised.
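The quoted figure is just 52 factorial, which is easy to check:

```python
import math

# Number of distinct orderings of a 52-card deck: 52! (about 8.07e67).
orderings = math.factorial(52)
print(orderings)
print(len(str(orderings)), "digits")  # a 68-digit number
```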

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on March 15, 2018, 10:58:53 am
Does your AGI make discoveries?

If so. How does your AGI form discoveries?
Title: Re: The last invention.
Post by: korrelan on March 15, 2018, 11:54:23 am
Quote
Does your AGI make discoveries?

Yes.

We learn in a hierarchical manner.  We learn the simple sub components first and then link them together in ever increasing complexity.  You can’t understand a sentence without first learning the word concepts, and to learn words you must learn/ recognise letters, etc.

When you think about a topic/ concept you use your whole cortex; each region is specialised in recognising/ extracting a specific set of properties/ qualities.  So for example, when reading words your pre-motor/ motor cortices are involved because they are part of the network that evaluates/ recognises spatial properties in the GTP, the order and position of the words & letters.  The regions tend to specialise according to our own body's sensory inputs/ layout.  This is why you can easily hold up four fingers to represent the number four; it's part of your GTP pattern for the concept of four.  Believe it or not your visual cortex plays important roles in recognising sounds and tactile sensory stimulation.

My AGI uses exactly the same mechanisms we humans do. It's part of the general learning schema.  One of the ways we learn is by linking/ recognising GTP patterns that are using similar sub-patterns/ facets. So if two concepts have similar GTP patterns then they must be comprised of similar properties/ qualities. This also works in the spatiotemporal domains, so the order/ position/ etc are all treated as dimensions of a concept.

Imagine considering two complex concepts so both patterns are current in your GTP, if they share any of the same pattern facets then there has to be a commonality between them, this is what leads to eureka moments. 

All discoveries are made by recognising commonalities between diverse concepts.

 :)

When two words or sentences rhyme, it’s the commonality between the spatiotemporal relationships of the phonemes your cortex is recognising.

When doing an ANAGRAM, you are mentally changing the spatial properties of the word until you get a lock/ trigger/ recognition.

If you forget something, think of other things related to it; this will often provide the missing GTP sub-pattern to fire the memory/ pattern you were trying to remember.

Etc.
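The commonality test behind those examples could be sketched as follows (a toy version; the two example concepts and their facets are my own invention for illustration). Two concepts active at once are treated as facet sets, and their overlap plus a Jaccard score flags a candidate eureka:

```python
# Toy sketch of "eureka by overlap": two concepts currently active in the
# GTP share facets; the shared set plus a Jaccard score flags commonality.
def commonality(a: set, b: set):
    shared = a & b
    score = len(shared) / len(a | b)
    return shared, score

heart_pump = {"chamber", "valve", "pressure", "cyclic flow", "muscle"}
piston_pump = {"chamber", "valve", "pressure", "cyclic flow", "crankshaft"}

shared, score = commonality(heart_pump, piston_pump)
print(sorted(shared), round(score, 2))  # large overlap -> candidate analogy
```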

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on March 15, 2018, 12:44:34 pm
Well said. You're further than I thought.
Title: Re: The last invention.
Post by: Art on March 15, 2018, 05:32:17 pm
Let's just hope that Korrelan has been teaching it that he is its Best Friend!! ;D
Title: Re: The last invention.
Post by: LOCKSUIT on March 15, 2018, 05:55:41 pm
You're not alone, just hit me up when you know it's getting dangerous.
Title: Re: The last invention.
Post by: infurl on March 15, 2018, 09:18:47 pm
All discoveries are made by recognising commonalities between diverse concepts.

Discovery is more than just observation and pattern recognition. It requires understanding as well.

How does your software show that it understands something?
Title: Re: The last invention.
Post by: LOCKSUIT on March 15, 2018, 09:32:44 pm
Infurl............the answer to that is.........you have 2 things, one is understood, and the other is new, and you do PR between them to see if there are commonalities.

That's all I'll say to keep it a tad encrypted.
Title: Re: The last invention.
Post by: keghn on March 15, 2018, 09:35:29 pm
Simulated annealing, and a focus to hold position in the real world and in the GTP, in my model.
Title: Re: The last invention.
Post by: LOCKSUIT on March 15, 2018, 10:15:19 pm
https://en.wikipedia.org/wiki/Simulated_annealing

WOW. See that GIF? That is exactly what I thought would work better than hill climbing's local-optimum problem. Stochastic is better.
Title: Re: The last invention.
Post by: korrelan on March 17, 2018, 11:16:41 am
@Infurl

Soz for delay… nearly missed your question.
 
Quote
Discovery is more than just observation and pattern recognition. It requires understanding as well.

How does your software show that it understands something?

That’s a very good question… and a very tricky question… and to be perfectly honest at this point in the development I can’t be 100% certain how it’s going to pan out… but I can explain how I’ve designed it to ‘understand’ its reality.

All discoveries are made by recognising commonalities between diverse concepts.

So…

Understanding is achieved by recognising commonalities between similar concepts.

Of course this is new tech based on my understanding of the human schema; like self awareness/ consciousness, etc, neuroscientists and academia don't even have a definition/ description/ mechanism for how we humans achieve understanding/ comprehension… but I will attempt to explain.

The whole design relies on the AGI starting at a foetal stage of development.  Using accelerated learning techniques the oldest AGI I've 'reared' so far is around 4 months old in human terms.  By this stage in its development it can recognise objects, words and learn simple concepts/ relationships.

One of the main tools I'm using at this stage is statistical analysis. I've designed the system so I can slow down, stop and reverse the neural activity.  I use various visual cues so I can actually see which neurons are firing, which synapses are supplying inhibition/ excitation to a dendrite branch, etc.  I have loads of other tools available for monitoring groups/ patterns of neurons and sequences of events.  Using these tools I can easily track a GTP sub-pattern and see which concepts are being included, etc.  One of the best methods is to stain/ mark/ tag groups of neurons and link them to a simple bar graph (see vids); this gives an automatic visualisation of learning levels within various areas of the model.

As the machine's mental faculties increase over time, 'understanding' grows with hierarchical learning; understand the small simple concepts first.  I'm hoping it's the compounding of simple relationships between concepts that will enable understanding of complex concepts.  Once the similarities are experienced many times the whole hierarchical pattern process is not required because the system learns to recognise the harmonics of the relationships.  So eventually the sensory input generates the harmonic, which is the culmination of prior learning/ experiences.

The only way to be eventually sure the AGI understands a concept is for the AGI to explain its thinking/ reasoning, but that takes a lot of understanding/ knowledge and experience to get to the stage of generating vocal responses.

Now I’ve got 99% of the connectome model designed I’ve been profiling and improving my software suite, AGI engine and MPI.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on March 17, 2018, 04:00:09 pm
"All discoveries are made by recognising commonalities between diverse concepts.

So…

Understanding is achieved by recognising commonalities between similar concepts."

Keep in mind this means that when your robot's eyes stare at say a chair, the chair matches a similar memory/concept and you say you see it/understand it.
Title: Re: The last invention.
Post by: keghn on March 17, 2018, 04:53:01 pm
 Using the focus method, I have x, y, and z to move through video. Then it has other methods to move or switch over to comparative space. Here x, y, and z are saved by pushing them onto the stack. Here the focus moves around via all the features it has extracted from video. Everything gets a weight. Increasing and decreasing these weights is how the focus moves around in this multi-dimensional world. By movement, I mean choosing a target to move to from a starting point, e.g. the face of a selected person. The increasing and decreasing of the weights until it looks like the target face is the distance moved.
 This is simulated annealing.  Very flexible and robust. It can be done from a blob of clay to a bust or the face of Obama, the ex-U.S.A. president.
 When the focus leaves this space the position is pushed onto a stack and it moves back to x, y, z space.
 The movement of the focus/ consciousness is recorded so that it can be given over to sub-focus/ subconscious memory for automatic runs.

 All curious and believers are welcome to my cause. All unbelievers F* off.   

Title: Re: The last invention.
Post by: korrelan on March 17, 2018, 05:22:17 pm
Sounds like an interesting theory... keep us updated with your progress.

 :)
Title: Re: The last invention.
Post by: infurl on March 17, 2018, 09:19:33 pm
This was just posted on YouTube. It's a description of the process that Google is using for neural networks to design neural networks.

https://www.youtube.com/watch?v=sROrvtXnT7Q
Title: Re: The last invention.
Post by: LOCKSUIT on March 20, 2018, 02:02:57 pm
Now korrelan.

You said your AGI makes discoveries by doing Pattern Recognition between 2 things. One thing is fact. The other thing is new hence 'discovery'. In your AGI, where does the 'new' thing come from to be compared for commonalities to the other thing/fact?

:)
Title: Re: The last invention.
Post by: keghn on March 20, 2018, 06:08:45 pm
 Two things here: the target and the starting object. With simulated annealing you mess with the weights until you start moving toward the target in the most direct way. The focus pointer's movement will slowly inch toward the target.
 On the way over the focus is recreated with a generative NN, or the algorithm of a graphics gaming engine. The recreation may well look like a morph from one to the other. But somewhere in between might be a novel discovery.
 Once the straight movement is found, that is the correct ratio of incrementing of certain weights (a vector list), then we can shoot past the target and find novel things beyond the space in between.
 A second way is to lower the resolution by loosening the focus so things get blurry. Different objects will start to look the same.
 In this fashion substitutions of objects in a temporal pattern loop can be done. This is a way of discovering new pattern loops.

Title: Re: The last invention.
Post by: LOCKSUIT on March 20, 2018, 06:17:44 pm
Sounds similar to the answer keghn very good. I think I understand what you're saying.

I'd like to hear korrelan's answer too.
Title: Re: The last invention.
Post by: LOCKSUIT on March 30, 2018, 02:14:49 am
Questions for korrelan:

Do you use a RNN? Do you use a LSTM? If not then what?

You said your AGI makes discoveries by doing Pattern Recognition between 2 things. One thing is fact. The other thing is new hence 'discovery'. In your AGI, where does the 'new' thing come from to be compared for commonalities to the other thing/fact?
Title: Re: The last invention.
Post by: korrelan on March 30, 2018, 11:08:49 am
Apologies I didn’t see the above questions.

Quote
Do you use a RNN? Do you use a LSTM? If not then what?

The simple answer is all of the above.  Each type of common artificial neural network schema has been loosely modelled on the brain, utilizing neurons, connections, weights, etc.  Each has its own benefits; each excels at a specific type of recognition.  It was obvious that all these artificial schemas are so similar yet achieve very different tasks, so there must be a way of combining them.

I spent years studying the biological connectome, drawing conclusions, writing the code to simulate the same functions, whilst referencing other people's previous research on artificial/ biological networks.  I eventually narrowed it down to a specific connectome/ set of algorithms/ design that can simulate/ achieve all the properties of the common artificial schemas… plus a lot more.

So… I basically just took the best bits from each schema, mixed them all together and created a hybrid that has all the properties that I figured a biological neuron must have to achieve what it does.

The connectome constantly evolves/ changes over time; synapses are generated or culled depending on what the system is experiencing.  The type and properties of neurons/ synapses change according to their age/ location.  A set of algorithms governs the growth and 3D properties of the connectome. It also requires rest and sleep… very important for the schema.
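The generate-and-cull idea can be sketched in miniature; this is purely illustrative (the use-count threshold and random growth rule here are my own invention, not korrelan's actual algorithms):

```python
import random

# Toy sketch of an evolving connectome: each synapse carries a use count,
# under-used synapses are culled and new candidate synapses are grown.
# (Illustrative only: thresholds and growth rule are invented.)
def evolve(synapses, use_counts, cull_below=1, grow=2, n_neurons=100, rng=None):
    rng = rng or random.Random()
    survivors = [s for s in synapses if use_counts.get(s, 0) >= cull_below]
    for _ in range(grow):  # grow new candidate connections at random
        survivors.append((rng.randrange(n_neurons), rng.randrange(n_neurons)))
    return survivors

synapses = [(0, 1), (1, 2), (2, 3)]
use_counts = {(0, 1): 5, (2, 3): 0}   # (1, 2) was never used at all
new_synapses = evolve(synapses, use_counts, rng=random.Random(42))
print(new_synapses)                   # (0, 1) survives; two new candidates
```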

Quote
You said your AGI makes discoveries by doing Pattern Recognition between 2 things. One thing is fact. The other thing is new hence 'discovery'. In your AGI, where does the 'new' thing come from to be compared for commonalities to the other thing/fact?

Good question. 

Firstly it's important to note that the AGI is a closed system, so it can only use its own cameras, microphones, etc to experience the world.  If you were text chatting with the AGI it would be reading your text off a monitor with its own eyes/ cameras and typing on a keyboard.  A Skype call will be a much better way of experiencing the AGI.

There are many reasons for the closed system, but mainly it negates a lot of complexity that has been incorrectly assumed by AI research.  Humans don't scan a huge high resolution image for faces/ objects, for example; 90% of our ability to recognise a face is centred on the fovea. We move our eyes to the target; our peripheral vision guides the fovea, and it is doing a very different job and using different parts of the visual data stream.

So any incoming sensory stream is always modulated from the machine's perspective/ point of view.  The sensory signals from the cameras are fixed and unique to its visual systems.

When the AGI recognises a GTP/ concept the facets/ commonalities it's recognising and combining are many levels of abstraction lower than you would imagine.  If it's looking at a face then everything it can see (the position/ spacing of the eyes, the shading/ gradients, the angle of the face, motion) is learned/ recognised/ combined at a very low level of abstraction.

Eventually you end up with a generic face recogniser and the sub facets will generate a GTP pattern for that type of face.  This generic pattern is altered/ enhanced by the unique properties of the face.  Within a few milliseconds you have a GTP pattern built from both common facial properties and the unique properties that make the face stand out.

It is at this level of abstraction that all concepts are initially recognised; the resulting GTP patterns for all levels of resolution/ recognition can be fired from any part of the hierarchical GTP structure the concept comprises.  Give it two red objects and they will both fire the same GTP patterns for ‘red’ and ‘object’; it will recognise/ link the commonality.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on March 30, 2018, 08:20:01 pm
Wait so, you mean to tell me that you combined LSTMs, GANs, RBMs, HFNs, etc + realistic human based NN schema... into a hybrid NN?


Oh also, if is the case, please list all you combined in short-form ex. GAN (don't write out the full name). Ohh oh and also give a very short brief reason why you combined each ex:

GAN - generative abilities
LSTM - recurrent abilities
etc...
Title: Re: The last invention.
Post by: infurl on March 30, 2018, 10:08:37 pm
Locksuit, you'd better specify a budget for all that work and pay a retainer.

Korrelan should be charging at least $250 USD/hour and it's Easter so penalty rates apply (double whatever he normally charges).

Edit: J.K.
Title: Re: The last invention.
Post by: korrelan on April 01, 2018, 12:30:43 pm
@lock

Quote
Wait so, you mean to tell me that you combined LSTMs, GANs, RBMs, HFNs, etc + realistic human based NN schema... into a hybrid NN?

Yes.  I used the known benefits of the above ANN types, plus others (liquid/ reservoir, etc) that are roughly based on the human model to figure out how the human model works.  My AGI is actually a neuromorphic schema, which means it’s basically a biological simulation. 

Quote
Oh also, if is the case, please list all you combined in short-form ex. GAN (don't write out the full name).

No. Whilst my AGI connectome model can exhibit/ re-create the beneficial traits of the above listed common ANN types they don’t actually exist as clearly defined structures. 

The above NN types are narrow, watered down attempts at recreating the human schema. I used them as a tool/ guide/ reference whilst designing my AGI; they gave me a rough idea of what each network type can achieve, and the types of structures required for that effect.

@Infurl

Yeah… sounds about right.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on April 02, 2018, 12:56:59 am
Gotta pay for all the Easter rum! (and chicken)

So your NN soup has attributes like found in GANs but can't generate ganny images? You say it has GAN stuff yet can't do GAN stuff wtf bro? What exactly does it have that GANs have? This is a contradiction...

Also another request. Is there a way I can utilize your non-watered-down ANN for my NLP project? Your schema might run in 3D in a very specific program but, if so, then tell me the attributes that you used from each type of ANN, e.g. what you pulled from LSTMs, what you pulled from GANs, and so on. That way I can be up to date on a superior ANN schema instead of just an LSTM or DNC KB/DB for my project.
Title: Re: The last invention.
Post by: korrelan on April 02, 2018, 03:04:21 pm
@lock

Quote
You say it has GAN stuff yet can't do GAN stuff wtf bro?

No I didn't say that, please read the above again.

Generative Adversarial Networks (GANs) are a class of networks that have many uses; generating ‘Ganny’ images is only one recent/ common use for their properties.

My AGI is capable of generating network topologies that are akin to GANs, networks that have the same application/ methods/ properties of a GAN… but technically/ schematically they are not GANs.

Quote
Is there a way I can utilize your non-watered down ANN for my NLP project?

Again No.  The system is designed to learn like a human, using its own senses.

Quote
then tell me the attributes that you used from each type of ANN

There are loads of resources available that give this type of information. 

https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks

https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

Have fun…

 :)

Title: Re: The last invention.
Post by: LOCKSUIT on April 02, 2018, 10:52:04 pm
I know you didn't combine them like puzzle pieces; what I meant was an engineered ANN that exhibits all their capabilities. Yes?

"Is there a way I can utilize your non-watered down ANN for my NLP project?"
"Again No.  The system is designed to learn like a human, using its own senses."
..........But mine is too LOL. Just it will only use Auditory. MAYBE vision in the long run but hopefully not.

Title: Re: The last invention.
Post by: korrelan on April 02, 2018, 11:23:24 pm
If it's just audio phonemes you want to pattern match: live sample @ 48000 Hz, use an FFT to split the stream into 4096 BINS and apply a simple spatio-temporal pattern match to the first 500 BINS (0-6 kHz) power peaks, avoiding the harmonics.

That will provide more than enough resolution to recognise spoken language.
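A minimal sketch of that front end in Python with NumPy (my reading: "4096 bins" as a 4096-point FFT, giving ~11.7 Hz bin spacing so the first 500 bins cover roughly 0-5.9 kHz; the window choice and peak count are my own assumptions, and harmonic rejection is left out):

```python
import numpy as np

RATE = 48_000   # live sample rate (Hz)
N_FFT = 4_096   # 4096-point FFT -> bin spacing = 48000/4096, about 11.7 Hz
N_KEEP = 500    # first 500 bins cover roughly 0-5.9 kHz, plenty for speech

def spectral_frame(samples: np.ndarray) -> np.ndarray:
    """Power spectrum of one 4096-sample frame, keeping only the low bins."""
    windowed = samples * np.hanning(N_FFT)       # reduce spectral leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2
    return power[:N_KEEP]

def peak_bins(frame_power: np.ndarray, n_peaks: int = 8) -> np.ndarray:
    """Indices of the strongest power peaks (candidate phoneme features)."""
    return np.sort(np.argsort(frame_power)[-n_peaks:])

# Sanity check: a 440 Hz tone should peak near bin 440 / 11.7, i.e. ~37.
t = np.arange(N_FFT) / RATE
tone = np.sin(2 * np.pi * 440 * t)
print(peak_bins(spectral_frame(tone)))
```

Matching spoken phonemes would then be a spatio-temporal comparison over successive `peak_bins` frames.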

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on April 02, 2018, 11:44:35 pm
text is easier to recognize
Title: Re: The last invention.
Post by: korrelan on April 12, 2018, 10:35:40 am
https://www.youtube.com/watch?v=gWqiedYv7s0

I've been busy updating my software suite, profiling, optimizing, etc, as well as working out the algorithms required for neurogenesis in my AGI. For this vid there are approx 19K neuron groups/ voxels and 142K axon groups/ connections; the simulation is running on a single thread just to slow it down so we can see what's happening.

Up to this point I've been generating the initial neuron column/ voxel positions based on the Fibonacci sequence.  This gives a nice even distribution over the model's volume, and provides the initial skeletal framework/ layout required for generating the long range tracts/ axons required for passing the GTP facets between cortex areas.
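That Fibonacci-based layout is presumably something like the well-known golden-angle (Fibonacci spiral) distribution; this sketch is my guess at the idea, not korrelan's actual code, extended to fill a spherical volume:

```python
import math

# Distribute n seed points evenly through a spherical volume using the
# golden angle. Directions follow the classic Fibonacci-sphere spiral;
# radii are cube-root spaced so density stays uniform in 3D.
def fibonacci_ball(n: int, radius: float = 1.0):
    golden_angle = math.pi * (3 - math.sqrt(5))   # about 2.39996 rad
    points = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n                 # even spread along the axis
        r_xy = math.sqrt(max(0.0, 1 - z * z))
        theta = golden_angle * i                  # spiral around the axis
        r = radius * ((i + 0.5) / n) ** (1 / 3)   # cube-root radial spacing
        points.append((r * r_xy * math.cos(theta),
                       r * r_xy * math.sin(theta),
                       r * z))
    return points

columns = fibonacci_ball(10_500)   # e.g. one seed point per column/ voxel
print(len(columns))
```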

The algorithms for neurogenesis are designed to move/ generate new neurons into areas that require a higher resolution due to pattern complexity.  A new neuron will migrate to its desired position, wait and listen to its neighbours, attach its dendrites/ synapses to the correct locations and then join in with/ enhance the local activity.  This has greatly enhanced the learning abilities of the foetal model.

The vid shows how the foetal model, once trained on a specific type/ frequency of stimulus, can now learn/ forget completely new/ novel variations with just one exposure.

At the end of the vid you can clearly see the higher density of neurons on the stimulated right hemisphere.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on April 13, 2018, 06:39:22 am
Making your AGI into the shape of the human brain is probably the most autistic thing you could do :P I like it. It's a good idea too yeah. Especially for a project that wants to create the human brain.
Title: Re: The last invention.
Post by: LOCKSUIT on April 18, 2018, 12:57:45 pm
"the AI will just walk away from the troop of monkeys much like man would with monkeys himself"
.........Humans also don't care if they step on bugs though :P.....uh oh....I'm sorry fruit fly!! I wish I could save yous. Just my hands are dog hooves.



Hey korrelan your first posts talk about chatbot use plus your own idea for them by the sounds of it. SIDE PROJECT?? XD:

~March 2014 lol
Quote
Hi Squarebear

It is a side project… the bot was my own design.

I usually work with massive spiking neural nets that store their learned information using synapse/ axons in semantic spatiotemporal maps etc. 

I spent an hour and wrote a simple sentence parsing routine that applies the same technique to a bot.  This should enable the bot to imagine scenes/ scenarios as well as experience/ view them.   I didn't have time for the bot to learn the laws of physics so I wrote simple object collision rules etc.  The bot has a set arm length and so can reach for objects, unless the path to the object is blocked by another object.  I had gravity but forgot friction… the cup would now stay on the saucer lol.



User: Imagine a table
User: Imagine a saucer
User: Imagine a pepper pot
User: Imagine a salt pot
User: Imagine a cup
User: Can you touch the saucer
Can't touch the saucer because the pepper pot is in the way
User: Can you touch the cup
Yes I can touch the cup
User: Where is the cup
The cup (is) on top of the saucer
User: Where is the pepper pot
The pepper pot is in front of me, infront of the saucer (is) on top of the Table
User: Where is the saucer
The saucer is behind the pepper pot infront of the salt pot (is) on top of the Table
User: Where is the salt pot
The salt pot is behind the saucer (is) on top of the Table
User: Move the saucer forward
Can't move the saucer forwards because the pepper pot is in front of it
User: Move the pepper pot forward
OK! I moved the pepper pot forward
User: Move the saucer forward
OK! I moved the saucer forward
User: Move the pepper pot backwards
Can't move the pepper pot backwards because the saucer is behind it
User: Where is the cup
The cup (is) on top of the Table



No I'm afraid I've never done any scripting for chatbots, tho many years ago I did write a fabrication/ welding control system for GEC/ Alsthom that included a natural language interface (NLI) so the engineers could question the system. I'll have to dig the NLI engine out and have a play when I get five :)

What chatbot software do you use?
Title: Re: The last invention.
Post by: korrelan on April 18, 2018, 01:45:38 pm
It wasn’t a conversation bot, it was an expert system, originally designed as a front end for engineers to query my welding control systems.  The welding software suite is used to keep track of the fabrication of nuclear and combined cycle power stations…it did use a simple NLI though. 

The example you posted was based on the above system, designed to illustrate a method of giving a chatbot a sense of space/ imagination; it had a ‘mind's eye’ that kept track of the object locations, detecting collisions and obscured sight/ reach lines etc.

This is now built into the conversation KorrBot.

Simple sentence manipulation is easy… writing a full conversation bot is a little more complex… as I’m finding out lol.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on April 19, 2018, 12:58:56 pm
What do you think about this article? It says learning happens close to the neuron's dendrites and not at the synapses
http://www.kurzweilai.net/the-brain-learns-completely-differently-than-weve-assumed-new-learning-theory-says

Are you going to modify your neuron schema now korrelan? If not, then why not?
Title: Re: The last invention.
Post by: LOCKSUIT on June 07, 2018, 10:30:51 am
Question is still above lol....thread is quiet lol


Hey korr, recognizing doors is a 'discovery', but that is just a sub-goal for ex. putting together a creation on a table (say you're building a bicycle) which is your real goal, NOT the sub-goal "recognizing doors/hinges/etc". Also building a bicycle or house is a sort of sub-goal itself, because the real goal is that the bike gets you to beautiful Susan or gives you that lovely exercise feeling, or the house keeps wolf bites away and warmth in, plus food-eating comfort/consistent intake. But don't get wound up thinking there are 3 levels here, it's really 2 levels, because building tools feels really rewarding if you get it done cus it leads to real rewards; recognizing doors/etc is very low reward because it's so common and not really "part" of build bike > get to see/hear Susan. Last but not least, senses need rank integers, because recognizing door/bike/Susan are not all equally rewarding!
Title: Re: The last invention.
Post by: korrelan on June 08, 2018, 09:14:24 am
Quote
Question is still above lol

Oh Yeah lol.  Been very busy lately… as usual lol

Interesting article… both the Hebb theory and the new theory are wrong.

Both theories assume that a neuron with its associated synapses and dendrites is an isolated computational unit… it's not. 

No two neurons are alike, not just in a single brain… but between all the brains on the planet.

Every part of the brain's substance plays a role in creating our consciousness.  So it's not just the pyramidal neurons; the glial/ myelin/ etc cells, and even the actual cerebrospinal fluid that flows through the brain, are important.  Synapses from neurons don't just terminate on dendrites, they also terminate on blood vessels, and some just release their neurotransmitters into the fluid; this creates a kind of localised bias.

Some synapses terminate on the cell body or axon hillock, some even terminate on the axon itself between the myelin cells… just this alone negates the new theory… though it doesn't exclude Hebb's theory.

To be honest the article seems a little naive. I understand they have ‘dumbed’ it down so the layman can understand it, but still… they are vastly oversimplifying a neuron's structure and purpose. 

They have ‘tested’ a pyramidal neuron by taking it out of its natural environment and growing it on a wafer… any modern computer would act as a simple current conductor in a circuit if enough voltage/ current was passed through it… totally negating its true use/ purpose.

It’s just a weak attempt to get something published; they are probably under pressure to produce results in order to keep their funding.

So no, I won't be changing my design… if my interpretation of the human nervous system and its functioning were incorrect… my AGI wouldn't work… and it does, so…

Thanks for bringing it to my attention though.

 :)

Ed: In reply to your second point about goals.

The problem with goals is the same as with words: it's setting an accurate, encompassing definition. The boundaries are impossible to resolve between complex goals; the resolution and abstract properties of a goal make it impossible to accurately define.

If you were to break a simple task down into its constituent goals… say… making a cup of coffee… is picking up the spoon part of placing the coffee in the cup, or of getting the coffee from the jar? Is it a separate goal? You can keep breaking a task down until the concept of ‘goal’ has no useful meaning/ purpose.

Like words/ concepts… How long is ‘long’? Is it shorter than ‘longer’?  The only semi-accurate method of defining a length is by implying multiples of the smallest unit… this suffices for most tasks at the resolutions required, i.e. millimetres.  But we then subdivide and get smaller and smaller units… goals have the same properties.

The boundaries of a goal are relative to the task at hand… and that’s a problem.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 08, 2018, 02:10:59 pm
Whoa don't write too much korr!

A pattern similar to the right pattern creates a consciousness in the robot. But not fluid bro....dude.. dude.. dude.. your post on the consciousness thread was great.

Um....yeah...breaking down a task into many mini tasks is crazy/messy/unclear....that's why it's structured in a tree/DB format hierarchy, top-down, with a clear goal: the spoon and the pouring are for making coffee, and the making coffee and serving is the top goal (coffee order)...
Title: Re: The last invention.
Post by: korrelan on June 10, 2018, 10:26:35 am
Quote
But not fluid bro

Cerebral fluid plays an important role in emotion/ alertness/ etc.

Vast swathes of neurons can have their parameters affected by neurotransmitters being present in the actual fluid surrounding them.  It's the brain's version of a global variable/ parameter.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 10, 2018, 10:32:33 am
Oh yeah, you guys mean "rewards" when you say emotion, heh heh.

Ok, reward/alertness. Just don't use real (or simulated) fluid.
Title: Re: The last invention.
Post by: Art on June 10, 2018, 03:59:34 pm
And the Brain and Computer chips have a lot in common...

When stuff leaks out of them, they can become less than functional.
(It was widely told that the factories where computer chips are made insert a measured amount of smoke into each chip they make. If that chip is ever damaged by faulty wiring, an electrical surge, poor judgment in experiments etc., the smoke will leak out and basically ruin the computer chip.)  ;)
Title: Re: The last invention.
Post by: LOCKSUIT on June 18, 2018, 09:13:29 pm
Korr, how will your AGI cure cancer or develop a better spaceship?

What I'm getting at is this:

Will your AGI discover the solution for cancer by mental kung-foo? Or will it by experimenting in the real world with cancer cells in a lab with an arm? Or by both?
Title: Re: The last invention.
Post by: korrelan on June 19, 2018, 11:16:21 am
Quote
Will your AGI discover the solution for cancer by mental kung-foo?

Mental Kung-foo… lol… good name for a chatbot.

We humans like to think of ourselves as intelligent, though how intelligent we are, and indeed the limits of the intelligence possible, remains to be seen.  At our level of intellect the best method we have devised so far is the ‘scientific method’… and with good reason.

New knowledge has to be based on testable/ reproducible facts; there is no substitute… no assumptions or guesses… cold hard testable facts.  No matter how intelligent a system becomes, these methods must be adhered to if advancements are to be realised.  Obviously if the system is intelligent it will realise/ adopt these same methods… they are a logical conclusion/ outcome… a law of nature/ information/ reality.

How intelligent my AGI could become, I have no idea.  There are several factors that limit human intelligence: axon propagation speeds, the size of our craniums, etc. Although my AGI won't be limited by these finite properties, there are properties of information/ knowledge itself that could be limiting factors. 

Everything we know and understand about reality has been derived from a human point of view, the limitations imposed on our senses and mental faculties have shaped our understanding of reality and the tools we design to enhance our abilities… we only understand what is observable/ detectable by us.  When we ‘think’ we compare/ calculate at a resolution/ level of abstraction relative to the problem we are trying to solve, again constrained by our experience of reality.  For an AGI to be useful it must experience/ experiment/ design for our reality.

You have probably heard the old saying ‘A jack of all trades is a master of none, but often-times better than a master of one’. 

We have many savants amongst us, people who have extraordinary mental abilities… but it's always at the expense of other mental abilities.  I pride myself on being a ‘jack of all trades’; if I was a mathematical savant… there is no way I could have got my research this far. So at our level of intellect too much intelligence in one sphere can become a limiting factor… that will probably hold true for higher levels of intelligence… again it's the ‘general’ bit of AGI that counts.

My main goals include self-awareness/ consciousness and human level intellect; the true power won’t come from one single extremely intelligent machine but from a collection of human level machines working in perfect harmony, experimenting and exchanging information seamlessly... tirelessly.

We humans could achieve a similar scenario… but there are too many exposed backs and knives available.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 19, 2018, 04:11:44 pm
A long-winded answer that never answered my question lol.

How will your AGI.....solve cancer?.............................it's a clear-cut question/answer korr, there's only 2 ways it can solve cancer. Either it recognizes Korrelations between 2 known protein facts (or between a known germ and a generated idea), or, it does the same thing but in the real world using test tubes, cancer specimens and tests.
Discovery is a new generated 'node'/connection that matches known data/facts OR does that but then is tested in the real world and it watches what comes back in its sensors.

New knowledge/experiments (to see what enters your sensors from physics) is actually based not really on testability but on old knowledge, for generating likely good plans/facts and their verification/grounding in the knowledge/actions/tools/brain/arm generating process. Assumption is where it ignorantly excels without bounds, it's like beginner's luck.

It will receive all of our most important tools/knowledge and continue from there.

A jack of all trades is a master of none, but often-times better than a master of one. True. It will be diverse in knowledge/tools. But the key process for reward is the same thing (match/etc). And there are few main goals: food/immortality/AGI. It's like Atari games getting better at a goal.
Title: Re: The last invention.
Post by: korrelan on June 19, 2018, 06:13:13 pm
Quote
A long-winded answer that never answered my question lol.

Yes I did...

https://en.m.wikipedia.org/wiki/Scientific_method

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on June 19, 2018, 06:34:33 pm
Ok good, your AGI will be working externally on the outside in a lab.

The next thing you need to do is first teach it important facts (education), though. Other AGI teams are doing this.
Title: Re: The last invention.
Post by: LOCKSUIT on June 27, 2018, 01:25:24 am
Korr, if human neurons either fire or don't, then is it truly good that modern neurons use sigmoid, e.g. outputs 0.671? Note I learned non-sigmoid is too jumpy, going from 0 to 1. But is sigmoid what humans do?
Title: Re: The last invention.
Post by: korrelan on June 27, 2018, 09:40:13 am
Quote
But is sigmoid what humans do?

There has obviously been a lot of research into the human neuron over the years. 

Researchers look at the structure and electrochemical properties and try to derive function; the weighted/ sigmoid neuron is just one interpretation of the human neurons schema.

Some people will argue that the human neuron must use something akin to the sigmoid function because the whole weighted/ sigmoid schema ‘kinda’ works.

I personally don't think the human neuron uses anything like a sigmoid function.  It's just a tool/ algorithm that was employed to make the classical neuron mathematical model work.

I stopped using non-linear activation functions/ back-propagation many, many years ago lol.

https://en.wikipedia.org/wiki/Neural_coding#Temporal_coding

The human neuron is not just about ‘what’… but ‘when’.
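A toy Python sketch of that 'when' idea (my own illustration of temporal coding, not korrelan's neuron model): two input spike trains carry the same number of spikes, but a downstream coincidence detector only responds when the spike timing lines up:

```python
# Toy temporal-coding demo: identical spike COUNTS ('what'),
# different spike TIMES ('when') -> different downstream response.

def coincidence_detector(train_a, train_b, window=1):
    """Fire at each time in train_a where train_b also spiked within `window` ticks."""
    fires = []
    for t in train_a:
        if any(abs(t - u) <= window for u in train_b):
            fires.append(t)
    return fires

# Both pairs have three input spikes each...
aligned    = coincidence_detector([2, 5, 9], [2, 6, 9])    # timings coincide
misaligned = coincidence_detector([2, 5, 9], [20, 25, 29])  # same count, shifted

print(aligned)     # [2, 5, 9]  -> the detector fires
print(misaligned)  # []         -> silence, despite equal spike counts
```

A rate-coded (sigmoid-style) unit would see both cases as identical, which is the point of the temporal-coding argument.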

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 11:09:58 am
Korrelan, can you explain how I am able to link together 2 memories like "buildings" and "snakes bit" almost instantly when the neurons can't connect that fast? I thought a neuron's little feeler ends take eons to connect, let alone to the right one....what is dis even idek.

#explaindatstuff
Title: Re: The last invention.
Post by: korrelan on July 12, 2018, 01:02:26 pm
Quote
Korrelan, can you explain how I am able to link together 2 memories like "buildings" and "snakes bit" almost instantly when the neurons can't connect that fast?

You are still thinking of information/ memories in a classic format… a human language/ information exchange/ scientific notation/ serial format… as words, nodes and links. Your brain does not store/ use information in this format.

I will try to explain… high dimensional memory/ space.

Let's say you have a serial database list of all the people on the planet and you want to search for/ select just one person.  Using just a few search parameters like sex, age, nationality, etc would yield too many results, but the more search parameters you can apply, the narrower the criteria and the smaller the result list/ table. 

So you could imagine creating a set of search criteria that would always find just the person you are looking for.  What this method is leveraging is high-dimensional space: the higher the number of dimensions/ fields, the more focused the search results.

The problem is that the above example still uses a serial schema; items are checked one by one in the list until a record is found that matches all the criteria, and although the search can accurately find the person every time, the time it takes is linearly proportional to the amount of information being searched… this is bad lol.

Your brain also uses extremely high-dimensional space to store knowledge; in fact the number of dimensions is so high that any object/ concept can be encoded, and the proportional-time problem is avoided by accessing the information in parallel, so all the search parameters are applied at the same time, not in sequence/ serially.

A good analogy would be the difference between a picture and a paragraph describing the picture.  When you read the paragraph you are absorbing information serially; the words slowly build up an image in your ‘mind's eye’.  When you look at the picture ‘BOOM’ you are absorbing the information in parallel; all of the information is arriving at the same time and you instantly understand the picture… hence the idiom… a picture is worth a thousand words.

Ok so… the high-dimensional space means that the memory recall is very accurate, and the parallelism means it's extremely fast… so technically your brain never has to search for information… not in the classic sense of search anyway.  All your knowledge relevant to your current state of mind/ task is instantly available because the knowledge/ experience IS your current state of mind.
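As a rough software analogy (mine, and only an analogy: the brain is not a hash table, and the people records are made up), the difference between the serial scan and a content-addressed lookup looks like this in Python:

```python
# Serial search vs content-addressable lookup.
# Each record is a point in attribute space; with enough dimensions
# the criteria pick out exactly one record.

people = [
    {"sex": "F", "age": 34, "nation": "UK", "job": "engineer"},
    {"sex": "F", "age": 34, "nation": "UK", "job": "teacher"},
    {"sex": "M", "age": 50, "nation": "UK", "job": "engineer"},
]

def serial_search(records, **criteria):
    """Check records one by one: time grows linearly with the list size."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Content-addressable version: the full attribute combination IS the
# address, so retrieval is one hash lookup, independent of list size.
index = {tuple(sorted(r.items())): r for r in people}

def content_lookup(record):
    return index[tuple(sorted(record.items()))]

few = serial_search(people, sex="F", age=34)                 # 2 hits: too few dimensions
one = serial_search(people, sex="F", age=34, job="teacher")  # unique with more dimensions
```

The hash lookup stands in for "all parameters applied at the same time": adding more records doesn't slow it down, whereas the serial scan gets proportionally slower.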

Also consider that we are not like classical computers; information is not stored inside your brain's memory… your brain's connectome structure IS the memory/ information.  Your house wiring doesn't need a memory/ logic to know what to do when a light switch is flipped; its operation is encoded in its structure. If you want an extra function from your house, if you want your house to ‘remember’ what a new switch does… you have to wire it accordingly.

Even the hinges on a door are providing a kind of function and memory, they allow the door to swing freely in two dimensions and not fall off, without the hinges the door could not function.
 
Remember my short video of the binary logic example: there is no program or traditional memory required for the neuromorphic neurons/ synapses/ etc to simulate a binary counter; the program and the logic are encoded in the structure.

https://www.youtube.com/watch?v=nqPKOrhZlAw
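In the same spirit, here is a tiny Python sketch (my illustration, not the neuromorphic model in the video) of a 2-bit counter whose 'program' is nothing but the wiring between two toggle units:

```python
# "The structure IS the program": a 2-bit ripple counter built purely
# from wiring between toggle units - no stored program, no lookup table.

class Toggle:
    """Flips its state on every input pulse; passes a carry pulse on 1 -> 0."""
    def __init__(self):
        self.state = 0
        self.next_stage = None   # the 'wiring' - behaviour lives here

    def pulse(self):
        self.state ^= 1
        if self.state == 0 and self.next_stage:  # carry ripples onward
            self.next_stage.pulse()

low, high = Toggle(), Toggle()
low.next_stage = high            # connect the stages: this IS the memory/logic

counts = []
for _ in range(4):
    low.pulse()
    counts.append((high.state, low.state))

print(counts)   # [(0, 1), (1, 0), (1, 1), (0, 0)] - counting emerges from the wiring
```

Rewire the stages and you get a different 'function', exactly like the light-switch example: no data is stored anywhere except in the connections.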

Quote
I thought a neuron's little feeler ends take eons to connect, let alone to the right one....what is dis even idek.

Synapses are only created when encoding new information, not when retrieving/ using current information.

I hope that made some sense lol.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 01:19:10 pm
A table with chairs, people, and lots of food.

Buffet.

A picture in 1 word lol.

Ok korr...that no answer question.......I know how we have a re-use hierarchy, but when I select a mouth and a nose and link em ex. "UK and teddy bears", the 2 things are linked now for life if I repeat them for a few minutes. How are they linked so fast? Does a connection poop out of a neuron soma and look for the other feature?
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 01:22:57 pm
maybe all the layer nodes are already all connected to every single other node?
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 01:32:59 pm
furthermore, there must be a new node made in a deeper layer that links to the two, else no new larger feature is made by the 2 smaller sentences!! And that's important to my work!
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 01:47:41 pm
btw we scan images too, like

girl...holding turtle......eating..........wait for ittttt,,,,starts raining.........paper drawing gets ruined

It's a video sentence
Title: Re: The last invention.
Post by: korrelan on July 12, 2018, 02:18:46 pm
Quote
Ok korr...that no answer question

Yes I did... you asked...

Quote
Korrelan, can you explain how I am able to link together 2 memories like "buildings" and "snakes bit" almost instantly when the neurons can't connect that fast?

I explained high-dimensional memory, i.e. the fact that neurons don't have to connect that fast... they're already connected through high-dimensional space.

Quote
Does a connection poop out of a neuron soma and look for the other feature?

Yes lock... why not... it's all done with pooping neurons.

For you to pick those two concepts ‘UK and teddy bears’ there must already be something subconsciously connecting them.  You are not able to think randomly; I know it feels like you can… but you can't.   Something, somewhere in that noggin of yours linked the two together before they even popped into your consciousness.  It feels random because you can't access your subconscious and see the connections.

Because information is stored in high-dimensional space, and because all ‘memories’ use exactly the same facets to provide the ‘details’, combined with the various combinations through episodic memories etc, the connection could be very convoluted.

So in this instance you are just reinforcing an existing connection by mentally repeating it, which is further reinforced by the memory of the fact that you made a conscious effort to repeat and remember it.

And don’t say that you personally can think randomly… no you can’t.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 02:24:05 pm
ok

korrelan and murderer

:))))

there MUST be something linking them!!! :)))))
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 02:37:47 pm
I could be the next psychic.



Wait, before we reply any more, let me just reinforce that above in my brain.

:)))))))

korrelan isA murderer

korrelan isA murderer

korrelan isA murderer



Also, new update: The pink unicorns can indeed fly through the moon, where we live. I seen them with my own eyes. My leg actually jumped a little bit, I got to meet aliens, finally! I made some new friends, just last night, named sonica blue 4728 and vivian majasker 899.
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 02:49:12 pm
Quote
For you to pick those two concepts ‘UK and teddy bears’ there must already be something subconsciously connecting them.

back to the drawing board ye go !
Title: Re: The last invention.
Post by: korrelan on July 12, 2018, 03:02:14 pm
Quote
korrelan and murderer

You have experienced the concept of both.
They are probably both human.
A human can be a murderer.
They both exist in your reality.
They are both sentient.
They both probably would have a body, arms, legs, etc.
Etc.. Etc…

You probably thought of korrelan, a human (honest), and that fired all your underlying knowledge of humans and primed your brain for the next phase. What could I pick that would prove my point… got to make it seem random… ah I know… murderer.  There are so many deep subconscious connections between those two concepts lol.  There could be any number of reasons/ explanations why you picked those two concepts.

You can’t include or use any knowledge within a thought that you have not experienced, and if you have experienced/ learned it then it has been integrated into your connectome/ high dimensional memory… and ‘linked’ to every other thought or memory that uses the same shared facets.  Linked/ recalled by facet… as well as complete/ partial memory engram.
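As a crude Python toy of the same point (my own sketch, with made-up facet lists, not korrelan's schema): concepts stored as shared facet sets almost always overlap somewhere, so a 'random' pairing is never truly unlinked:

```python
# Toy facet-overlap demo: each concept is a set of shared facets,
# and any two concepts are 'linked' through the facets they have in common.
# The facet lists here are invented purely for illustration.

facets = {
    "korrelan":   {"human", "sentient", "has body", "exists", "adult"},
    "murderer":   {"human", "sentient", "has body", "exists", "criminal"},
    "teddy bear": {"soft", "toy", "has body", "exists"},
}

def shared(a, b):
    """The subconscious 'path' between two concepts: their common facets."""
    return facets[a] & facets[b]

print(shared("korrelan", "murderer"))
# {'human', 'sentient', 'has body', 'exists'} - a supposedly random pair,
# already connected through four shared facets before it was ever 'chosen'
```

Even the most distant pair here ("korrelan" and "teddy bear") still shares 'has body' and 'exists', which is the argument: you can't pick two concepts with no subconscious connection.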

BTW… I took the time/ effort to answer your questions, I would appreciate it if you could at least extend to me the same levels of courtesy and stop being a di*k, grow up.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 03:22:42 pm
I read all of your replies. I was simply asking questions/pointing out flaws, and with a touch of funny, I really lmao btw...

Well, true, any selected feature/memory is only selected if it self-ignites, or is matched, or is linked to one of those events. But maybe we can also do a random pick too?

I could pick korrelan, then read the morning news story and choose a word at random and link the 2.
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 03:27:33 pm
(we often do linking-throughs like animal>food>lunch>school>work>pay>job>cook>fun>etc)
Title: Re: The last invention.
Post by: korrelan on July 12, 2018, 03:31:17 pm
There is a big difference between thinking of a word, and choosing one from a newspaper.

One requires an internal 'thought' process, the latter uses an external source of data.

 :)

Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 03:41:12 pm
not a big difference when we link the 2 together though...

that's the point / question.....
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 04:17:14 pm
see pic.....how do we do this?

I want to link 2 features together to create new, re-useable feature made of 2 small ones.

Our brains must already have the connections from birth; it simply takes the 2 activated and strengthens 2 connections to the same node in a deeper layer.
Title: Re: The last invention.
Post by: LOCKSUIT on July 12, 2018, 04:35:40 pm
well more like this strictly speaking
Title: Re: The last invention.
Post by: ranch vermin on July 12, 2018, 05:28:28 pm
What you're suggesting (which I thought of AGES ago):           "reusability inside a neuron network"

If you want to trust me (anyone who reads this), I think it comes down to logical simplification.  Yes you can simplify logic gates, but you only get a realistic amount of compression out of them, so it's 30% the size tops.   Other than that, you have to store everything individually, no bits for free.   And I think that's only if the patterns belong in an OR/ in the same class/ arise at the same cell; if they are separate classes then you don't even get to do it.

I think the solution to this super robot is possibly simple, but you're looking in the wrong place.   U NEED TO IMPLEMENT locksuit!,  OR U WONT GET ANYWHERE!!!

Stop hijacking Korrelan's thread; he doesn't need your work here, it's about HIS.
Title: Re: The last invention.
Post by: korrelan on July 13, 2018, 02:40:30 pm
Quote
see pic.....how do we do this?

I want to link 2 features together to create new, re-useable feature made of 2 small ones.

The simple answer is… we don’t.

Your brain is not using single words/ nodes and links, you are thinking about the problem space at too high an abstraction level.

It’s like the lines you draw and words/ letters you type don’t exist inside your computer; they exist purely as interpreted binary code… It’s similar with your brain… There are no simple nodes and links in the way you perceive them…

You have 80-billion-ish neurons and each neuron can have thousands of synapses; that's approximately a quadrillion synapses… that's a lot of synapses.  Every thought ‘frame’ you have heightens neural activity in approximately 20% of your brain at any one time, though most of the brain is always active.  Different thoughts will use different areas, so even working at the conservative estimate of 20% that's about 16 billion neurons and hundreds of trillions of synapses just to think about a cup, or look out the window, or scratch your nose… not a single connection… but trillions.

You exist in your own personal simulation of reality and it’s different to everyone else’s, everything you experience comes from your senses, even colour doesn’t exist in actual reality… it’s just a construct of your brain like everything else.

When you think about a cup you are mentally modelling every aspect of the cup: what it looks like, what it's made of, what its uses are, etc.  You are mentally modelling a concept with properties and sub-properties, not a single simple node/ word.

You asked how your brain links two memories/ concepts; you have to consider the complexity and depth of the schema that is doing the ‘linking’.  It's not as simple as drawing a line between two nodes/ words… I wish it was.

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on July 13, 2018, 03:41:39 pm
Is it true that a hidden node in the middle of a CNN or GAN can reverse down to the leaves and reconstruct an eyeball or face or nose? What about those patches of black on this page at the very top (put your mouse on one of them)? https://distill.pub/2018/building-blocks/

You mean my brain is using weights/representations right? Not any sort of actual word/object...1 word is really a fuzzy cluster of dots like a dandelion. And it's lighting up all the connected features that link to cup like container, item, cheap, round, hard, coffee, drink. Maybe hard cheap drinkable etc link together to a representative node meaning cup. Wouldn't it be great though if it had a title "cup" on the node?
I think so. They don't make up the word cup, (or wait maybe they do, while all these minus hard make up hard, each making up each other (needing a few more word properties for others since not all words have same properties)), rather each is a separate word with properties linked and each can be used to create bigger features.

Well if 1 neuron/column did hold 1 word or audio snippet, it would work like a word just what is stored is like binary on a disc it is weird 'being' saved.

Ok so you're saying I'm not actually linking cup and socks but instead the concepts, ex. I'm linking the cluster "drink, coffee, container, hard" to "soft, feet, cuddly, warm, item"? Isn't that an explosive contradiction of the problem in the first place!? Really we just link the 2 dandelions together by linking cup to socks...

Is it true that the brain is making representation nodes that stand for both cups/bottles/etc? Not a 'word' like container, but it could be thought of as being like a word, like "container", which stands for cup/bottle/bucket/etc that hold liquid/etc.

Ok great a representation is made but wouldn't it be awesome for it to have its proper name, container ! ?
Title: Re: The last invention.
Post by: korrelan on July 13, 2018, 11:15:06 pm
Quote
Ok great a representation is made but wouldn't it be awesome for it to have its proper name, container ! ?

You asked me my opinion/ theory on how the human brain stores and links memories.

That doesn’t mean this is the only method or even required, there may be many ways to achieve an AGI.

 :)

Title: Re: The last invention.
Post by: LOCKSUIT on July 24, 2018, 01:02:35 pm
Is the frontal cortex the facts describing facts area here we're talking about? I see now!
Title: Re: The last invention.
Post by: korrelan on July 24, 2018, 06:50:23 pm
Quote
Is the frontal cortex the facts describing facts area here we're talking about?

Yeah… sort of… the prefrontal cortex is the largest swathe of cerebral cortex that has no sensory inputs/ outputs… so its functionality is purely derived from the rest of the cortex.

It's the largest bit dedicated to learning what the rest of the cortex is doing, though there are many smaller areas evenly distributed across the cortex that do similar jobs relevant to their adjacent cortex areas.

 :)
Title: Re: The last invention.
Post by: korrelan on August 15, 2018, 09:51:26 pm
You have probably heard about the persistence of vision but what about the persistence of motion? 

During my AGI research into the persistence of vision I realised that my system also exhibits a kind of persistence of motion.  I needed some way to test whether humans actually experience this phenomenon.

I eventually came up with an experiment that anyone can try.

Simply stare at the centre of this video for 10 seconds (starting from 1:03 in the video), and then look away at a picture on the wall, or your keyboard… any fixed stationary object.

WARNING… DO NOT DO THIS if you suffer from epilepsy or any other sensory/ frequency-triggered mental problems.

https://www.youtube.com/watch?v=zXTpASSd9xE

Did you see the persistence of motion?

 :)
Title: Re: The last invention.
Post by: LOCKSUIT on August 15, 2018, 10:17:43 pm
All I see after looking away at my wall is a triangle at the right side. No worky...

Nonetheless I don't think it amounts to anything important in the AGI...explain if it does...
Title: Re: The last invention.
Post by: ivan.moony on August 15, 2018, 10:50:06 pm
It works, the static room wall seems to be moving away...

I noticed this kind of effect when looking through the back window of a moving bus; when the bus stops, the effect starts. I also saw experiments on Facebook or similar pages that were created specifically to induce such effects. Some pretty mean things are possible, like watching your room swirl in and out in multiple centres of rotation after watching a specific 5-minute video. The effect lasted for 30 seconds, but it was very cool.
Title: Re: The last invention.
Post by: korrelan on August 15, 2018, 11:06:53 pm
Ah I see they call it the motion after-effect, not the persistence of motion… I’d never heard of this before… sweet.

 :)

@Lock

It shows that the visual mechanism sensing motion has a kind of inertia, and motion is not sensed/ calculated on a frame-to-frame basis.  So the visual cortex neurons representing sensed motion at specific regions of the visual field fire for longer durations dependent on the length of stimulation… probably not relevant to your AGI.

 :)