The last invention.

  • 438 Replies
  • 63282 Views
*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *********************
  • Deep Thought
  • *
  • 5329
Re: The last invention.
« Reply #390 on: December 27, 2018, 01:08:58 pm »
Hey Korr!

A line as taken from one of Bill Cosby's old albums (back when he actually did comedy),

"I started out as a child..."  ;)
In the world of AI, it's the thought that counts!

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #391 on: December 29, 2018, 10:47:55 am »
Ahhh! The house is finally empty; I thought/ hoped the holiday season would get easier once my kids left home… but no… they all come home to drink my booze and scoff me out of house and home… and omg they’re multiplying… grandkids everywhere.

A few years ago I built a small outside annex so I can drink/ code outside in the summer.  I had a plan this year: once they’re all settled and chatting I could sneak off, light the fire and have a few rums in peace and quiet, see…



I was right comfy… but by half ten there was no one left in the actual house, they were all sat outside bugging me… what you doing? What’s the matter, why are you out here, can we play, it’s nice and warm out here, we’ll keep you company… bah! Humbug.

Oh! And apparently it’s not PC to run psychological experiments on your grandkids… they all ended up with the same amount of sweets… eventually… lol.

Quote
Korr, you've been quiet

Yer!  Life’s been hectic lately; I’m still working away in my spare time though.

Quote
how bout you give us a timeline of what you have done so far

1966 - I started out as a child…
1985 - Started messing with AI
2018 - Still working on my AGI
2020 - Release the first true AGI, become a billionaire and solve world hunger/ disease.
2022 - Probably get assassinated for my troubles… lol.

I’m still working on integrating my working modules; they all work great individually within the same connectome design, but once you start mixing audio, vision, tactile, etc. it gets a little more complicated, especially when you include the structures designed for ‘free will, self awareness and consciousness’.

It’s the little things I’m discovering that are taking my time, like our audio cortex actually processing certain elements of our vision… and vice versa… everything is mixed up.  Each of our senses has a spatial/ temporal domain/ frequency range of activations, and once the sensory information is in the GTP (global thought pattern) the cortex doesn't care where it originated; it’s just processed as a whole.  I can see where synaesthesia arises from now, and how we can mentally picture/ imagine a surface from touch alone.  Sound, sight and touch can all have exactly the same properties.  This is how sounds can be smooth and blue can look cold.

I’ve been working under the assumption that each major cortex region mapped its sensory inputs according to the qualities of the input… this works.  I can train a map to recognise speech, wipe/ reset the map and then train it to recognise objects… no problem. 

Then I noticed the two maps had very similar topologies, so I fed vision into an audio/ speech map.  It worked… except the results were spatially/ temporally incorrect.  High frequency events were not mapping correctly to episodic memory.  This means my assumptions for the sensory frequency ranges are wrong… I’m adjusting them all accordingly.
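As a rough illustration of the "same map code, different stream" idea: a generic one-dimensional self-organising-map sketch (this is a textbook-style toy, not Korrelan's network; the `speech`/`objects` vectors are invented stand-ins for the two modalities):

```python
# Toy SOM: the identical training code organises around whichever input
# stream it is fed, so "train on speech, wipe, retrain on objects" needs
# no code changes at all. All data values are made up.
import random

def train_som(data, n_units=4, epochs=50, lr=0.3, seed=0):
    rng = random.Random(seed)
    units = [rng.random() for _ in range(n_units)]     # 1-D weight per unit
    for _ in range(epochs):
        for x in data:
            # winner-take-all: the closest unit moves toward the input
            best = min(range(n_units), key=lambda i: abs(units[i] - x))
            units[best] += lr * (x - units[best])
    return sorted(units)

speech = [0.1, 0.15, 0.8, 0.85]    # pretend speech-feature stream
objects = [0.3, 0.35, 0.6, 0.65]   # pretend object-feature stream

print(train_som(speech))   # map settles around the speech clusters
print(train_som(objects))  # reseeded map settles around the object clusters
```

The point is only that the map is agnostic about its input's origin; only the statistics of the stream shape the resulting topology.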

I’ve always wondered how subject A could recognise any letter of the alphabet that subject B traced out with a finger on any part of subject A’s body: any skin surface, any angle, any size… even locations where subject A had never experienced/ learned such a sensation. 

Up until now I thought the phenomenon was down to just our internal mental model of the world, but it seems we actually use a sensory map composed from all our senses, combined/ frequency aligned, to generate the model… we partially use our sense of visual location to place the sensation of the finger strokes in 2D… cool.

Edit: It actually helps explain a lot of human sensory phenomena, like how the resolution of a blind person’s sense of touch can increase after losing their sight.  Because vision and the sense of touch occupy similar adjacent neural real estate, arranged by the corresponding frequency domains, once vision is lost adjacent neural areas become available for the sense of touch to spread out into… enhancing touch… no complex re-mapping required.

So… yer… still short of free time… and still a work in progress… I need a holiday.

 :)
« Last Edit: December 29, 2018, 01:25:15 pm by Korrelan »
It thunk... therefore it is!... my project page.  WEB SITE

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ***************
  • Deep Blue
  • *
  • 2656
  • First it wiggles, then it is rewarded.
Re: The last invention.
« Reply #392 on: December 29, 2018, 03:58:13 pm »
Recognize any letter, no matter its size, location, angle, etc......the sense of touch uses the method vision does

you don't need touch, just do vision or text. Text is really efficient and captures our knowledge.
Emergent

*

ruebot

  • Electric Dreamer
  • ****
  • 146
  • All your words are belong to us.
    • Demonica
Re: The last invention.
« Reply #393 on: December 29, 2018, 04:50:04 pm »
Warning: Article may be NSFW, but comes from The Sun so you decide:

"TouchYou technology, is an electronic smart skin that is made up of tiny sensors that can detect the position of a touch, similar to a laptop touchpad."

"It also detects the force, or pressure of the touch and sends the information to a wireless Bluetooth device that is connected."

"The sensor can be designed to be attached to any surface curvature, and the number and size of the electrodes (which are related to the sensor resolution or sensitivity) can easily be changed."

https://www.thesun.co.uk/tech/8030636/sex-robots-feel-human-touch-smart-skin/

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #394 on: December 29, 2018, 05:59:04 pm »
@Lock

Quote
you don't need touch, just do vision or text. Text is really efficient and captures our knowledge.

I feel you are slightly missing the point of my project LocK.

@ruebot

Cool piece of kit… the sensor itself is interesting.  Obviously some kind of capacitive/ resistance based matrix. 

There is nothing like a good war… or the sex industry to drive tech forward.

 :)
It thunk... therefore it is!... my project page.  WEB SITE

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #395 on: January 05, 2019, 01:34:18 pm »
I had a request to explain my cluster… I’ll bung it here…



This isn’t my main workstation; this is the cluster I can call on when ‘MORE POWER, SCOTTY’ is required… and this cluster is definitely overkill for a chatbot… lol.

Quote
What's the fewest number of nodes needed for supercomputer building practice?

It really depends on your requirements.  Building a cluster specific to your project can be quite a complex task, though obviously worth the effort in the long run.

There is no minimum or maximum number of nodes/ cores; it depends on how you can split the processing requirements of your project.  It’s a similar concept to multi-threading. 

I have several different configurations I can load.  I tend to write my AI applications as single core modules so I can run 32 simulations at the same time with different parameters to test suitability, or I can split a large model and run the nodes as a parallel processor.
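That first mode, many independent single-core runs with different parameters, can be sketched like this (Python; the `sim` function and its parameters are hypothetical placeholders, not Korrelan's actual modules):

```python
# Hedged sketch: farm out independent single-core simulations, each with
# its own parameter set, to separate worker processes. sim() is a stand-in.
from multiprocessing import Pool

def sim(params):
    learn_rate, decay = params
    # placeholder for one full simulation run; returns a fitness score
    return learn_rate / (1.0 + decay)

if __name__ == "__main__":
    # a 2 x 2 grid for brevity; a 32-way sweep would just use a larger
    # grid and processes=32
    grid = [(lr, dc) for lr in (0.1, 0.2) for dc in (0.5, 1.0)]
    with Pool(processes=4) as pool:            # one core per simulation
        scores = pool.map(sim, grid)
    best = grid[scores.index(max(scores))]     # keep the best parameter set
    print(best)
```

Because the runs share nothing, this mode needs no load balancing or message passing; the cluster is just a bag of cores.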

For parallel work the cluster requires load balancing, etc. for efficient loading, and the network is always the bottleneck.  You have to be able to efficiently split the processing across nodes, relying on the message passing interface (MPI) to exchange results/ data across the network.  I have custom written my own MPI to suit my specific requirements.

I split a large 3D neural model into separate sections; each node runs one cycle and reports back to the master node when it has completed its cycle.  Nodes then share relevant information and again notify the master node when done, and the master then issues a command to continue to the next cycle.  This kind of setup has a break-even point because of network lag, etc; it’s not until I need to run hundreds of millions of neurons that it starts to give processing time benefits.
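That compute/share/continue cycle can be mocked up serially in a few lines (an assumed structure inferred from the description; the real version would pass these messages over his custom MPI, and `node_step`/`exchange` are invented names):

```python
# Serial mock-up of the master/node cycle protocol described above.
def node_step(s):
    s["t"] += 1                      # advance this slice one cycle
    return True                      # report completion to the master

def exchange(a, b):
    # neighbouring slices share relevant border state
    a["edge"], b["edge"] = b["t"], a["t"]

def run_cycles(slices, n_cycles):
    for _ in range(n_cycles):
        done = [node_step(s) for s in slices]    # phase 1: every node computes
        assert all(done)                         # master waits on all reports
        for i in range(len(slices) - 1):         # phase 2: share borders
            exchange(slices[i], slices[i + 1])
        # the master's "continue" command is simply the next loop pass here
    return slices

slices = [{"t": 0, "edge": 0} for _ in range(4)]
run_cycles(slices, 3)
```

The two synchronisation barriers per cycle are exactly why network lag sets the break-even point: below a certain model size, waiting costs more than the parallel compute saves.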

I personally wanted a cheap, reliable setup that didn’t draw more than 1 kW of power when running for long periods… each node uses fairly old tech, but still packs a punch even by today’s standards… overclocked Intel Q6600 4-core @ 3.2 GHz, 120 GB SSD, 8 GB RAM, 2 x 1 Gb NIC, Gigabyte motherboards, etc… fast, solid and reliable.  Each node would probably cost £60 UK total… so cheap.

The head/ master node has a similar setup except it has more RAM (16 GB) and a decent GPU to run the 4K monitors and projectors, etc. 

Each node runs Oracle VM hosting 4 virtual machines; I wanted a fast Windows-compatible system so the software I design can be shared not just amongst the nodes but with anyone on the web with a compatible OS.  You could load my ‘compute node’ software on your computer and I could use your spare processing power to help run my AGI… or you could run your own AGI, etc.

I can also configure the cluster for a corporate/ office environment for testing the network applications I write, etc., so it’s very handy to have. 

Obviously the most important thing is the wheels… lol.

 :)
It thunk... therefore it is!... my project page.  WEB SITE

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • *********************
  • Deep Thought
  • *
  • 5329
Re: The last invention.
« Reply #396 on: January 05, 2019, 01:44:17 pm »
Quote from: Korrelan on December 29, 2018, 10:47:55 am

Well, what do you expect? You have this sign hanging up for all to see that says, "Come in We're Open!!"  ;)
Very cozy indeed!!
In the world of AI, it's the thought that counts!

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #397 on: February 11, 2019, 12:18:27 pm »
I’ve been having loads of problems and I think I’ve figured it out… I think everyone is reading fMRI results incorrectly.

The scan measures blood flow across the whole brain; any area that shows an increase over a set threshold whilst the patient is performing a specific task is understood to be a key area in performing said task, i.e. this is the area that performs that task.

According to my research the areas being highlighted are actually the areas that are recognising key facets in the global thought pattern (GTP); this is an important distinction.

So for speech recognition, for example, the whole brain is actually doing the work of processing and recognising the speech patterns, but blood flow to Broca’s area increases because that area is recognising the GTP pattern for the results of the speech processing.

An analogy would be a set of searchlights all focused on a single point: the point of focus will be much brighter than any single light.
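The searchlight analogy in numbers (all values invented purely to illustrate the arithmetic): many regions each contribute a weak activation, but only at the point of focus does the sum cross the detection threshold.

```python
# Illustrative numbers only: individually sub-threshold contributions sum
# to a super-threshold signal where they all coincide.
regions = 20
beam = 0.3            # one region's contribution at the focus
threshold = 2.0       # fMRI-style detection threshold

focus_activity = regions * beam    # every beam overlaps at the focus
single_activity = beam             # any single region alone
assert focus_activity > threshold > single_activity
print(focus_activity, single_activity)
```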

This also helps explain how different people can have key areas in different locations, how a complex function can be relearned after damage, and how new areas can adapt to compensate.  The new area doesn’t have to re-learn the function from scratch; it just has to re-tune to recognise the gaps in GTP recognition.

This of course also means that all the brain maps, homunculus, etc that are currently being used by neuroscience are incorrect.

The areas are technically responsible for recognising that function, but they are not doing the actual processing.

Just thought I’d mention it.

 :)
« Last Edit: February 11, 2019, 12:39:01 pm by Korrelan »
It thunk... therefore it is!... my project page.  WEB SITE

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ***************
  • Deep Blue
  • *
  • 2656
  • First it wiggles, then it is rewarded.
Re: The last invention.
« Reply #398 on: February 11, 2019, 02:03:31 pm »
Too much blood liquid mess confusion! Get it to do purposes - aim for results. Figure out how things are thought up. In the end, just summarizing a book may deem more talk worthy than all your work done so far! Then after that, one by one more new tools keep rolling out and you have AGI.
Emergent

*

Hopefully Something

  • Trusty Member
  • ******
  • Autobot
  • *
  • 243
  • So where are these cookies?
Re: The last invention.
« Reply #399 on: February 11, 2019, 07:42:31 pm »
Like what I did in Excel in the first screen cap! At least the delegation part, not the analysis centers.  I like the idea: incoming data gets sent to, and processed by, different regions of the brain. Areas that experience the world from radically different perspectives. Like parts that deal with hearing, vision, touch, smell, balance, memory, emotion, threat detection, temperature, time, language, body language, proprioception, haptic feedback in muscles & bones...

Then a specific ratio of these things being processed is associated with a specific situation or activity. Then an area of the brain that knows how to deal with this ratio gets activated, pays attention to the processed data and begins to generate corresponding instructions on how to respond to the situation. It's like performing a spectral field analysis on sunlight going through a planet's atmosphere. All the light goes through in parallel; you can tell what's in the air right away by the shape of the graph. 

Likewise an area of the brain does a spectral analysis on itself and recognizes a situation as a whole, so it can activate an area of the brain tailored for dealing with that situation.
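The "ratio of senses → situation" idea above can be sketched as nearest-prototype matching (every number and situation name here is invented for illustration; this is one simple reading of the post, not a claim about the brain):

```python
# Each situation is a prototype mix of sensory activity; the current mix
# activates whichever prototype it is closest to. Values are made up.
situations = {
    "conversation": (0.7, 0.2, 0.1),   # (hearing, vision, touch) ratios
    "reading":      (0.1, 0.8, 0.1),
    "handshake":    (0.1, 0.3, 0.6),
}

def recognise(mix):
    # squared-distance nearest prototype, i.e. the best-matching "spectrum"
    return min(situations,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(situations[s], mix)))

print(recognise((0.65, 0.25, 0.1)))   # a conversation-like mix
```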

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #400 on: February 16, 2019, 06:21:39 pm »
Recognising objects/ patterns from grip patterns... this is a continuation of my sensory/ motor cortex experiments.

The hand model has joint position sensors (blue lines) and pressure pads (grey blobs); these feed a frequency-modulated input pattern into the connectome, representing a physical(ish) hand and the patterns generated by gripping different objects.

As I mentioned earlier, I realized that because the sensory organs provide a stream modulated between specific bandwidths, and the brain is self organising in how it orders the receptive areas for said frequencies, it really is not fussy about what its inputs are or where they come from.
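A minimal sketch of that "modulated between specific bandwidths" idea (the band limits and sample values are made-up numbers, not the real cortical ranges): once each sense is rescaled into a shared frequency band, the same map can consume either stream unchanged.

```python
# Rescale any raw sensory stream into a common band, so downstream map
# code never needs to know which sense produced it. Illustrative only.
def modulate(samples, lo, hi):
    """Rescale a raw sensory stream into the band [lo, hi]."""
    mn, mx = min(samples), max(samples)
    span = (mx - mn) or 1.0            # avoid divide-by-zero on flat input
    return [lo + (s - mn) / span * (hi - lo) for s in samples]

touch = [0.0, 0.4, 0.9, 0.2]     # pressure-pad readings
vision = [12, 80, 55, 3]         # pixel intensities

# Both senses land in the same 5-40 band, so one self-organising map
# can be fed either stream without modification.
print(modulate(touch, 5, 40))
print(modulate(vision, 5, 40))
```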

The neurons in this model have been laid out in position through neurogenesis according to an audio/ visual input stream.  This connectome is using audio and visual attention areas to map tactile touch.  This is like hearing or seeing tactile touch patterns… seeing with your hands… freaky… yet cool.

It’s recognising key similar components/ patterns in audio/ visual and tactile input streams and mapping areas accordingly.

Hmmmm... Lip reading?

https://www.youtube.com/watch?v=fABMt_H4L_s

All these bits will add up… eventually Lol.

 :)
« Last Edit: February 17, 2019, 11:13:03 am by Korrelan »
It thunk... therefore it is!... my project page.  WEB SITE

*

Korrelan

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1235
  • Look into my eyes! WOAH!
    • YouTube
Re: The last invention.
« Reply #401 on: February 27, 2019, 10:34:48 am »
I found this very interesting…

As you know my AGI is based on the human model/ cortex/ nervous system.   As part of the system I have stereo video cameras that feed a modified data stream into the connectome model to represent the ocular system/ vision.

The AGI learns to see as it matures through experience, part of that process is building its own mammalian equivalent visual cortex (VC).  The VC learns to recognise the various shapes/ lines/ gradients it sees and creates neural maps in the appropriate connectome areas. 

The basic idea of an orientation map is that any single small area of the map will only respond when a specific type of visual stimulus is received, so there could be a very small section in the fovea of the visual field that will only respond to a short line at 45 degree orientation, etc.
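A minimal orientation-selective unit in the spirit of those receptive fields (the tuning angle and window width are arbitrary illustrative values, not anything from the model): it responds only when a short edge's angle falls inside its tuning window.

```python
# One toy unit from an orientation map: strongest response at its preferred
# angle, falling to zero outside its tuning window. Parameters are made up.
def orientation_response(edge_deg, preferred_deg=45.0, width_deg=15.0):
    # wrap the difference into [-90, 90) since a line at 0 deg equals 180 deg
    d = (edge_deg - preferred_deg + 90.0) % 180.0 - 90.0
    return max(0.0, 1.0 - abs(d) / width_deg)   # triangular tuning curve

print(orientation_response(45))   # preferred angle: maximal response
print(orientation_response(70))   # outside the window: no response
```

The 180-degree wrap matters: a line, unlike an arrow, has no direction, so 45 and 225 degrees must excite the same unit.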

All this works great, the efferent streams from the VC show the expected frequency responses to the visual stimulus, and the AGI can recognise objects, etc.

The problem is motion.  The VC responds to movement as expected; as the object rotates or moves within the field of view the frequency responses are smooth as the outline/ colour/ etc. of the object are tracked. 

I expected this smooth response and had a working theory of how the system would learn to follow motion through optical flow… but it’s not… it’s tracking movement, no matter how slow, through sudden snaps in frequency… where the hell were they coming from?

I’ve figured it out… and you can see what’s happening in this test video at 2:56 as the shapes move over the face.

As an object moves across a background, no matter how slowly, there are facets around the perimeter that snap from one orientation to another as the background is either covered or revealed.  There is a point where the receptive field suddenly switches from one orientation to another, without going through the expected rotational frequency range.
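Detecting those snaps in an orientation track can be sketched like this (the tolerance and the sample track are invented; a real detector would run over the map's efferent frequency stream):

```python
# Flag frames where the local orientation jumps instead of rotating
# smoothly, i.e. the "snaps" from occlusion/revelation described above.
def find_snaps(orientations_deg, max_smooth_step=10.0):
    snaps = []
    for i in range(1, len(orientations_deg)):
        step = orientations_deg[i] - orientations_deg[i - 1]
        # wrap into [-90, 90) because orientation is 180-degree periodic
        step = (step + 90.0) % 180.0 - 90.0
        if abs(step) > max_smooth_step:
            snaps.append(i)
    return snaps

# smooth rotation... then a background edge is suddenly revealed
track = [40, 43, 46, 49, 135, 138]
print(find_snaps(track))   # flags the 49 -> 135 transition
```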

I thought this was cool… and wasn’t immediately obvious to me… I’m easily pleased lol.

https://www.youtube.com/watch?v=SPr8KhqVCeo

 :)
« Last Edit: February 27, 2019, 12:20:16 pm by Korrelan »
It thunk... therefore it is!... my project page.  WEB SITE

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ***************
  • Deep Blue
  • *
  • 2656
  • First it wiggles, then it is rewarded.
Re: The last invention.
« Reply #402 on: February 27, 2019, 09:53:26 pm »
Is the video below incorporated into your AGI, Korr?

https://www.youtube.com/watch?v=HLuRQKzYbb8&feature=youtu.be
Emergent

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ***************
  • Deep Blue
  • *
  • 2656
  • First it wiggles, then it is rewarded.
Re: The last invention.
« Reply #403 on: February 28, 2019, 12:25:48 am »
Question #2: Do you have 2 nets, one for semantic map, and another for syntactic temporal sequence features building up?
Emergent

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6541
  • Mostly Harmless
Re: The last invention.
« Reply #404 on: February 28, 2019, 08:57:53 am »
Fascinating stuff  Korrelan 8)

 

