Creature AGI (pseudo)coding thread

*

Neogirl101

Re: Creature AGI (pseudo)coding thread
« Reply #15 on: May 09, 2015, 11:57:48 am »
I've finally read through the NN tutorial that Ivan posted, and it actually answered a lot of my questions, but not all of them. I found another site introducing neural networks, though, and I'm reading that one too.

In short, thank you, Ivan, for giving me the webpage. It really did help a lot.  :)

*

Neogirl101

Re: Creature AGI (pseudo)coding thread
« Reply #16 on: May 10, 2015, 05:21:02 pm »
Hey everyone,

I just redid my AGI... I think it's much simpler and more elegant, yet it has just the right degree of complexity.

I tried to design it to have at least some form of consciousness (hence the global neuronal workspace), and hopefully some things not shown in the image, like introspection and self-awareness, will emerge as properties of the system.

This time, I did the AGI map as an image instead of a long ramble. What do you guys think?

Thank you for any and all input you give.
« Last Edit: May 11, 2015, 04:06:08 am by Neogirl101 »

*

ranch vermin

Re: Creature AGI (pseudo)coding thread
« Reply #17 on: May 11, 2015, 05:49:24 am »
I look at it this way (and it's quite computational): virtualize sensors -> build a virtual statistical store of the environment -> run a motor search engine -> actualize the motors.
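For what it's worth, that four-stage pipeline could be sketched as a toy loop like this. Everything here — the function names, the "near/far" percepts, the curiosity-style scoring — is my own hypothetical stand-in, not ranch's actual system:

```python
from collections import Counter

class StatStore:
    """Tiny statistical store: counts how often each percept was seen."""
    def __init__(self):
        self.counts = Counter()
    def update(self, percept):
        self.counts[percept] += 1

def virtualize(raw):
    """Virtual sensor: quantize a raw reading into a coarse percept."""
    return "near" if raw < 0.5 else "far"

def search_motors(store, options):
    """Motor search: pick the action whose predicted percept has been
    seen least often (a crude curiosity drive)."""
    return min(options, key=lambda a: store.counts[options[a]])

store = StatStore()
for raw in [0.1, 0.2, 0.9]:          # virtualize and store three readings
    store.update(virtualize(raw))

# Two candidate actions and the percept each is predicted to produce.
options = {"advance": "near", "retreat": "far"}
action = search_motors(store, options)
print(action)  # picks the action leading to the less-seen percept: retreat
```

The "actualize motor" stage would just send `action` to the real hardware; it is omitted here.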

I've got big issues now that I'm coming down from the dream, and I'm really worrying about whether this thing can actually run at even just 30 Hz.

I like your design, looks cool. But when you said you don't want to "reinvent the wheel", I'd rather call it "co-discovering the wheel", or maybe even "co-discovering calculus". If you're missing something from some patchy lectures, you have to fill in the missing pieces yourself, and those are the all-important co-discoveries.

One thing I have to tell you is that we don't have enough computing power these days to run an AI in real time with any decent state capacity (unless you want to prove me wrong, which may happen). But you can go with us (you look like you mean business) and co-discover the theory that Markov worked out back in the 1800s with statistical chains. It works in theory, as far as I can see now; getting the thing into a working state is the tricky part.

So I hope you like offline processing.

I can tell you, Markov chains are good theory that will hold true in your end result; back propagation and Hopfield nets are also both good, and could be used in a finished working implementation. But getting it to work on threaded CPUs or GPUs is still quite difficult; even getting a basic system running is hard, even once your implementation is quite unblurred.
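For readers new to the idea: the statistical chains ranch mentions can be demonstrated in a few lines. This is just a generic first-order Markov chain over symbols, not anything specific to his system:

```python
import random
from collections import defaultdict

def train(sequence):
    """Count first-order transitions: state -> list of observed successors
    (duplicates kept, so sampling is frequency-weighted)."""
    chain = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, sampling each next state from observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no transition was ever observed from here
        out.append(rng.choice(successors))
    return out

chain = train(["look", "move", "look", "grab", "look", "move"])
print(generate(chain, "look", 5))
```

Every step of the generated sequence is a transition that actually occurred in the training data, which is the sense in which the chain "stays true" to what was observed.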

My nets are huge and useless. When I finish this thing, I've got more cells than a human has in his head. It was a total bitch to run it in real time, and it's nowhere near as good as even just an animal, but it has its robo-perks, like it can talk; it's a slightly different kettle of fish.

My big thing now is: "make 2 things the same thing, and you generate an option."

So if I swap all my hellos for hi's, and hi's for hellos, I get to include the later material in 2 different places in its hypothetical playback.
A bit from here, a bit from there, and bring it together and see the whole picture.

*

ranch vermin

Re: Creature AGI (pseudo)coding thread
« Reply #18 on: May 11, 2015, 06:36:58 am »
All my crazy thinking just headed me off in one direction: 'make two things the same, generate an option'.

Say you wanted to make a pick-a-path book out of some old funny video: you automatically generate the cross points, and the insensitivity to the frame differences between your swap points gives you more forks.

And that's actually the birth of the whole thing. You can use a really good memory to do it, but it can also be done with a not-as-good one. That's the basic idea, and it's Markov chain video at its simplest starting point!

Actually making the robot a better Markov chain video system comes down to behaviour segmentation of the "forky playback", and that's it... and something I haven't even got up to yet is more integrated playback, taking things in even smaller pieces.

Then your motor is a search engine, using these generated forks, itself being a part of the action. Say we used the most basic system to do it, the biggest problem being that the robot only has a CRC check or something of the photos to decide what it wants to do. So give him a yellow filter and a blue filter. From all these forking points you develop out of random behaviour that doesn't kill it, build a web of all the video that went into it, and then it will be able to remember where the yellow and the blue are.

One challenge is being able to see the full pathway, and I haven't solved that yet, because I'm still just writing this super-instancable spatial memory, and I wonder if I'm doing it the wrong way! But I'm working on a really integratable sim of the environment. (Oh, you think that's easy as piss?)

Then if you keep all the photos of the robot, you can play him like a pick-a-path book at the end.
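The "cross point" idea above — finding pairs of near-identical, non-adjacent frames so playback can jump between them — might be sketched like this. The frame representation, distance measure, and threshold are all my own stand-ins for illustration:

```python
def frame_distance(a, b):
    """Toy frame distance: mean absolute per-pixel difference."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_forks(frames, threshold=0.1):
    """Cross points: pairs (i, j) of non-adjacent frames similar enough
    that playback can swap from position i to position j unnoticed.
    Each such pair is a 'fork' in the pick-a-path playback."""
    forks = []
    for i in range(len(frames)):
        for j in range(i + 2, len(frames)):   # skip adjacent frames
            if frame_distance(frames[i], frames[j]) <= threshold:
                forks.append((i, j))
    return forks

# Four tiny 3-"pixel" frames; frames 0 and 3 nearly match, giving one fork.
frames = [[0.0, 0.0, 0.0],
          [0.5, 0.5, 0.5],
          [1.0, 1.0, 1.0],
          [0.0, 0.05, 0.0]]
print(find_forks(frames))  # [(0, 3)]
```

Raising the threshold makes the match more "insensitive to frame differences", which, as the post says, yields more forks at the cost of more visible jumps.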

*

Neogirl101

Re: Creature AGI (pseudo)coding thread
« Reply #19 on: May 11, 2015, 08:24:00 am »
Hi ranch,

Thank you for your feedback. I'm afraid I'm a major novice in AI (and especially robotics), and I'd appreciate it if you could explain the basics of virtual sensors and the virtual statistical store to me. Thanks again, and I will definitely look into Markov chains!  :)

P.S. Where do you suggest I use Markov chains?

*

ranch vermin

Re: Creature AGI (pseudo)coding thread
« Reply #20 on: May 11, 2015, 09:31:26 am »
I'd like to tell you, but you can work it out yourself, because you're an intelligent girl. Your diagram was excellent.

*

Neogirl101

Re: Creature AGI (pseudo)coding thread
« Reply #21 on: May 11, 2015, 10:44:42 am »
Thank you so much for your kind comment, ranch! :-) I also understand a little more of what you've said now.

But I suppose that if there's ever a concept I don't understand (at least not fully), I can always jury-rig it to be simple, elegant, and effective to the best of my knowledge, though I know I'll eventually learn more. Co-designing!

*

infurl

Re: Creature AGI (pseudo)coding thread
« Reply #22 on: July 23, 2015, 02:09:49 am »
If you are comparatively new to robotics and artificial intelligence you might find it worthwhile to think about what is called a subsumption architecture. It is a practical and proven method of operating a robot or artificial intelligence in real time. As you become familiar with its strengths and limitations you may well find ways to improve on it, but it's undoubtedly a good place to start.

https://en.wikipedia.org/wiki/Subsumption_architecture
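For the curious, the core of a subsumption architecture is a priority stack of behaviour layers, where a higher layer can subsume (override) the ones below it. A minimal sketch — the specific behaviours and sensor names here are made up for illustration:

```python
def avoid(sensors):
    """Higher layer: steer away when something is too close."""
    if sensors["obstacle_distance"] < 0.3:
        return "turn_left"
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Lowest layer: default behaviour, always has an opinion."""
    return "forward"

# Layers ordered highest priority first; the first non-None command wins,
# i.e. higher layers subsume lower ones.
LAYERS = [avoid, wander]

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"obstacle_distance": 0.1}))  # turn_left
print(act({"obstacle_distance": 2.0}))  # forward
```

In a real robot each layer runs continuously as its own loop and higher layers inhibit the outputs of lower ones, but the priority-override idea is the same.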

*

Neogirl101

Re: Creature AGI (pseudo)coding thread
« Reply #23 on: July 23, 2015, 04:57:46 am »
Thank you, infurl, for the link! I'm looking into it right now.  :)

*

keghn

Re: Creature AGI (pseudo)coding thread
« Reply #24 on: July 23, 2015, 02:52:10 pm »
AGI Brain: a cascading RNN.

Take a look at this video of a car racing around a race track:

It could be a pattern loop of life in an AGI brain. It has a beginning and an end, and then it comes back to the start.

The car completed the loop around the track within 10,000 image frames. Each one of these images is recorded, or trained, into a Neural Network Chip (NNC). These chips are lined up like dominoes in a loop, just like the race track, and tied together by wires, which are addressing and data buses. It records by having a program pointer point at the first chip and clock in data; then the program pointer is clocked on to the next NNC, and then the next, and so on.

At a later time this can be replayed, forward or backward, at any speed. An NNC can be a hardware or a software chip, of any size. The NNCs are trained as autoencoders and classifiers. Later on, they can be merged into denser trained NNCs.

This race track, or pattern loop, sits in the middle of the AGI brain, surrounded by millions and millions of other NNCs. All of these free NNCs are waiting to get into the loop, replace one of its chips, or be added into the race pattern loop.

The NNCs can output onto an output bus.

The way it learns is by letting the weight states in all of the unused NN chips jump around randomly, by the action of a program, by outside electromagnetic noise, by a little bit of ionizing radiation, or by outside electrostatic discharge:

http://www.eurekalert.org/pub_releases/2015-07/ru-ndb071615.php

When an image shows up on the bus from a video camera, the NN chip that is in the best state at that moment is selected, and its weight matrix is locked into place. If this capture is better than the one already in the loop, it is swapped in. Also, copies of learned NN chips are copied into unused NN chips, and their weight matrices are vibrated very slightly, randomly.

NN logic will form between NNCs to predict where sub-classified features, objects and other stuff will show up.

Pattern loops, or engrams, can get very complex, with parallel loops, sub-loops and so on.
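The program-pointer recording and replay over a ring of chips can be mimicked in software as a circular buffer with a pointer that steps forward or backward. This sketch covers only the replay mechanics, not the training-by-noise or chip-swapping parts, and all the names are my own:

```python
class PatternLoop:
    """Ring of 'chips', each holding one recorded frame, with a
    program pointer that records in sequence and replays either way."""
    def __init__(self, size):
        self.chips = [None] * size
        self.pointer = 0

    def record(self, frame):
        """Clock a frame into the current chip and advance the pointer,
        wrapping around the loop like the race track."""
        self.chips[self.pointer] = frame
        self.pointer = (self.pointer + 1) % len(self.chips)

    def replay(self, start, steps, direction=1):
        """Step the pointer around the loop (direction=+1 forward,
        -1 backward), yielding the stored frames."""
        i = start
        for _ in range(steps):
            yield self.chips[i]
            i = (i + direction) % len(self.chips)

loop = PatternLoop(4)
for frame in ["a", "b", "c", "d"]:
    loop.record(frame)

print(list(loop.replay(0, 6)))                # wraps around: a b c d a b
print(list(loop.replay(3, 4, direction=-1)))  # backwards: d c b a
```

Replaying "at any speed" would just mean changing how fast the pointer is clocked; the data path is the same.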


 

