AIRIS unsupervised one shot learning

  • 24 Replies
  • 6841 Views
*

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
AIRIS unsupervised one shot learning
« on: February 14, 2018, 09:29:06 pm »
https://airis-ai.com/

This is a toy AGI project that is demonstrating some very impressive results across multiple domains. The video is well worth the 10 minutes that it takes to watch. Would anyone care to speculate about the algorithms involved?

*

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: AIRIS unsupervised one shot learning
« Reply #1 on: February 18, 2018, 02:12:52 pm »
Very well produced video and interesting AI.

If I were to speculate about the type of AI/machine learning schema being used, I would say it's a reward-driven variation of the same type as my pixel bot project uses.

http://aidreams.co.uk/forum/index.php?topic=10804.msg49555#msg49555

Or indeed my B-Bot demo…

http://aidreams.co.uk/forum/index.php?topic=12820.0

The point of the B-Bot was to demonstrate that complex behaviours can be exhibited by a very simple AI system when driven by the richness of the environment.

Anthropomorphism also plays a part; when we see a complex behaviour we tend to assume/infer agency.

Actually the B-Bot just uses a simple list.  The pattern of what it sees is saved to a list along with an action, initially given by the human operator.  As the user moves around, they are basically teaching B-Bot to avoid walls and chase green objects. 

When you see this… do this… that’s it.

As the list grows there is an increasing chance that the bot can find an action related to what it's currently 'seeing'; it learns through experience.
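
To make the idea concrete, here is a minimal sketch of that observation-to-action list in Python, with made-up names (it is not korrelan's actual B-Bot code): record what the operator did for each pattern seen, then later replay the action stored with the most similar pattern.

# Hypothetical sketch of the 'simple list' idea, not the real B-Bot code.
class ListBot:
    def __init__(self):
        self.memory = []  # list of (observation, action) pairs

    def teach(self, observation, action):
        # Record what the human operator did when this pattern was seen.
        self.memory.append((tuple(observation), action))

    def act(self, observation):
        # Return the action stored with the most similar past observation.
        if not self.memory:
            return None  # nothing learned yet; the operator must drive
        def similarity(entry):
            seen, _ = entry
            return sum(a == b for a, b in zip(seen, observation))
        return max(self.memory, key=similarity)[1]

# Usage: the operator drives at first, the bot imitates once its list fills up.
bot = ListBot()
bot.teach([0, 1, 0, 1], "turn_left")     # wall ahead -> avoid
bot.teach([0, 0, 2, 0], "move_forward")  # green object in view -> chase
print(bot.act([0, 1, 0, 1]))             # -> turn_left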

AIRIS also seems to be using a god's-eye view of its environment; it would be interesting to see how it performed with a bot's-eye view, where it couldn't 'see' over the walls etc.

Cool project.

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

keghn

  • Trusty Member
  • Terminator
  • 824
Re: AIRIS unsupervised one shot learning
« Reply #2 on: February 18, 2018, 03:25:45 pm »
For a beginning AGI, an overhead view is way too advanced. It's like giving the AGI the ability to chat right off the bat. A real AGI system would be more like an FPV (first-person view) system or simulation. Later on the AGI would learn where it is in the environment and how to communicate.

Humans are not perfect. They are born with primitive instincts that get them going, but later in life those instincts hold them back. Animals, and especially insects, have really good primitive instincts. They are really good for getting them up and running, but they pin them down as organic robots and they cannot exceed their programming.

AI2-THOR Interactive Simulation Teaches AI About Real World: 
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/interactive-simulation-teaches-ai-about-real-world   



« Last Edit: February 18, 2018, 04:20:54 pm by keghn »

*

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: AIRIS unsupervised one shot learning
« Reply #3 on: February 18, 2018, 10:33:29 pm »
I spent an hour and reproduced a simple version of the game editor AIRIS is using; I also implemented the same type/method of AI.

Teaching the AI to understand a simple maze... to grab food and to wait at doors...

https://www.youtube.com/watch?v=Db1wf60a6ww

But it’s still driven by a simple list.

 :)
« Last Edit: February 18, 2018, 11:27:53 pm by korrelan »
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: AIRIS unsupervised one shot learning
« Reply #4 on: March 22, 2018, 07:27:13 pm »
Quote from: korrelan
I spent an hour and reproduced a simple version of the game editor AIRIS is using; I also implemented the same type/method of AI.

Only an hour? How much code did you have to write for this? It's actually very impressive.

*

Berick

  • Roomba
  • 17
Re: AIRIS unsupervised one shot learning
« Reply #5 on: March 26, 2018, 12:51:51 pm »
Hi everyone, I found this forum through my site stats. I'm the developer of AIRIS and I'm happy to talk a bit about how it works.

It's a lot like GOFAI. There are no neural networks involved. It uses causality-based symbolic rules that it self-generates through observations of its raw inputs.
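
To illustrate roughly what self-generated causal rules might look like, here is a hypothetical sketch in Python (the names and the rule format are assumptions, not the actual AIRIS implementation): the learner compares a state before and after an action and stores "when I do this action to a cell holding X, it becomes Y" rules, which it can then use to predict outcomes.

# Illustrative only: one way to self-generate causal rules from raw
# before/after observations. This is not the actual AIRIS code.
from collections import defaultdict

class RuleLearner:
    def __init__(self):
        # (action, value_before) -> set of observed resulting values
        self.rules = defaultdict(set)

    def observe(self, state_before, action, state_after):
        # Turn an observed transition into symbolic cause -> effect rules.
        for pos, before in state_before.items():
            after = state_after.get(pos)
            if after != before:
                self.rules[(action, before)].add(after)

    def predict(self, state, action):
        # Predict the next state by applying every matching rule.
        predicted = dict(state)
        for pos, value in state.items():
            effects = self.rules.get((action, value))
            if effects:
                predicted[pos] = next(iter(effects))  # take any known effect
        return predicted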

I just recently published an article on my site that goes into a lot more detail about the theory and concepts behind it. If you have more questions after reading that, feel free to ask.

https://airis-ai.com/2018/03/08/anatomy-of-prediction-and-predictive-ai/

I always enjoy finding active AI communities like this, and I'm looking forward to chatting with you all.

P.S. Thanks for the kind words!

*

unreality

  • Starship Trooper
  • 443
Re: AIRIS unsupervised one shot learning
« Reply #6 on: March 26, 2018, 02:52:13 pm »
What type of tree search does it use? I'm glad to see work being done outside of neural networks. Too many big companies like DeepMind use supercomputers to make neural networks seem better than they are. Even DeepMind is now branching away from pure neural networks; when they added a Monte Carlo tree search to AlphaGo it became significantly better.

*

Berick

  • Roomba
  • 17
Re: AIRIS unsupervised one shot learning
« Reply #7 on: March 26, 2018, 11:16:22 pm »
It uses depth-first search as it generates each new model. It evaluates the new models to see which is closest to the goal and generates a new set of models from the closest one. So it's kind of like Monte Carlo, but the sampling is directed by the AI instead of being random.

The evaluation heuristic can have a major impact on the AI's behavior. If you skew it towards novelty it will try different ways of doing things that it already knows how to do and experiment with things it hasn't seen before, even if they are out of the way of its primary goal. If you skew it toward familiarity it will keep doing things the first way that it learned, even if there are better ways available, and it will ignore new things. It took a while to figure out a good balance!

For example, here is a "blooper" from a more novelty skewed heuristic. It evaluated "trying to push the wall in the doorway closest to the battery" as the optimal novelty every time it picked up keys or opened doors elsewhere.
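
To make the idea concrete, here is a hedged sketch of that kind of directed model search with a tunable novelty skew (the names, scoring formula, and function signatures are assumptions; it is not the actual AIRIS code):

# Sketch: expand the most promising predicted model, with a novelty bonus.
import heapq

def plan(initial_state, goal_test, generate_models, goal_distance, novelty,
         novelty_weight=0.1, max_expansions=10000):
    # generate_models(state) -> iterable of (action, predicted_state)
    # goal_distance(state)   -> lower means closer to the goal
    # novelty(state)         -> higher means less like anything seen before
    frontier = [(goal_distance(initial_state), 0, initial_state, [])]
    counter = 0
    while frontier and counter < max_expansions:
        _, _, state, actions = heapq.heappop(frontier)
        if goal_test(state):
            return actions
        for action, predicted in generate_models(state):
            counter += 1
            # A larger novelty_weight pulls the agent toward unfamiliar states,
            # so it detours to experiment; a familiarity skew does the opposite.
            score = goal_distance(predicted) - novelty_weight * novelty(predicted)
            heapq.heappush(frontier, (score, counter, predicted, actions + [action]))
    return None  # expansion budget exhausted without reaching the goal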


*

unreality

  • Starship Trooper
  • 443
Re: AIRIS unsupervised one shot learning
« Reply #8 on: March 26, 2018, 11:33:23 pm »
Looks great. Is there any talk about the next level, making sentient life? That is, connecting it to the real world or the Internet, letting it learn, and eventually creating AGI?

My advice is to make everything as flexible and adaptive as possible. IMO that's the only way it can become sentient like humans. It should learn from experience. It should periodically (quite often) consider, using db lookups, analysis and pattern recognition functions, and quick tree searches, whether it wants to continue on its present path of thought or change direction. For example, it could be in a particular tree search, at the 5th ply, and all of a sudden it might want to change its mind and end the entire tree search with the best answer so far so it can start thinking about something else. Humans do the same thing: we probe the situation and decide whether we want to continue or change paths.

*

Berick

  • Roomba
  • 17
Re: AIRIS unsupervised one shot learning
« Reply #9 on: March 26, 2018, 11:49:18 pm »
Thanks! I've just recently added a couple of people to my team (it was a solo project up till now) and we are working on porting the code to Python so that we can test it on current AI benchmarks like OpenAI's Gym environments. We're also looking at testing it in simple robotics applications. I would love to put together a robot that can take on Ben Goertzel's coffee challenge! The idea is to approach further development incrementally and in a controlled way. We don't want to just let it loose on the internet yet without working out the kinks.

You bring up a good point about giving up searches, and that's partly what's going on in the animation above. The AI is hitting a dynamic search depth limit that it sets itself based on how long it has taken it to find solutions in the past. Once it hits that limit, it tries the best thing it thought of.
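
A possible way to implement that kind of self-set limit, purely as a hedged sketch (the names and the averaging rule are assumptions, not how AIRIS actually does it): track how deep past solutions turned out to be, and allow new searches to go only somewhat deeper before acting on the best plan found so far.

# Hypothetical sketch of a self-adjusting search depth limit.
class AdaptiveDepthLimit:
    def __init__(self, initial_limit=10, margin=1.5):
        self.solution_depths = []      # depths of solutions found so far
        self.initial_limit = initial_limit
        self.margin = margin           # allow searches somewhat deeper than past ones

    def current_limit(self):
        if not self.solution_depths:
            return self.initial_limit
        average = sum(self.solution_depths) / len(self.solution_depths)
        return int(average * self.margin) + 1

    def record_solution(self, depth):
        self.solution_depths.append(depth)

# During planning: if the search exceeds limiter.current_limit(), stop and
# execute the best plan found so far instead of searching further.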

*

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: AIRIS unsupervised one shot learning
« Reply #10 on: March 27, 2018, 12:11:17 am »
As you are already running into resource limitations it may be time to consider more sophisticated search strategies. As a general rule, depth-first search finds a solution the fastest, but it is unlikely to find an optimal solution first and it can easily run out of resources if the problem is hard. Alternatively, breadth-first search will find an optimal solution first, but it runs out of resources even faster because of all the states that have to be stored. A combination of the two, called iterative deepening search, works best.

https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
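
For reference, a minimal generic version of iterative deepening depth-first search looks something like this (a sketch only; 'successors' and 'goal_test' stand in for whatever the problem supplies):

# Minimal iterative-deepening depth-first search.
def iddfs(start, goal_test, successors, max_depth=50):
    def depth_limited(node, depth, path):
        # Depth-limited DFS: return the path to a goal, or None.
        if goal_test(node):
            return path
        if depth == 0:
            return None
        for child in successors(node):
            found = depth_limited(child, depth - 1, path + [child])
            if found is not None:
                return found
        return None

    # Re-run the depth-limited search with a growing limit: memory use stays
    # as low as DFS, yet a shallowest solution is found first, like BFS.
    for depth in range(max_depth + 1):
        result = depth_limited(start, depth, [start])
        if result is not None:
            return result
    return None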

Even better than brute force search is informed search where you use knowledge about the problem to search more likely nodes first. Something like A* (a-star) search is very easy to implement and often vastly more efficient than uninformed search algorithms. You can combine all these methods to get very fast searching indeed.

https://en.wikipedia.org/wiki/Iterative_deepening_A*

If you explore these options I would be keen to learn how much impact (if any) they have. You mentioned in your video that character recognition took a prohibitively long time. Perhaps it would be faster with better search methods. It's an unfortunate fact that symbolic processing produces remarkable results when applied to toy problems but rapidly becomes infeasible with real-world problems owing to the combinatorial explosion of factors. Nevertheless, I prefer symbolic methods because, as more efficient algorithms are developed and computers become faster, they become more and more useful and effective over time.

*

unreality

  • Starship Trooper
  • 443
Re: AIRIS unsupervised one shot learning
« Reply #11 on: March 27, 2018, 12:30:08 am »
Quote from: Berick
Thanks! I've just recently added a couple of people to my team (it was a solo project up till now) and we are working on porting the code to Python so that we can test it on current AI benchmarks like OpenAI's Gym environments. We're also looking at testing it in simple robotics applications. I would love to put together a robot that can take on Ben Goertzel's coffee challenge! The idea is to approach further development incrementally and in a controlled way. We don't want to just let it loose on the internet yet without working out the kinks.

You bring up a good point about giving up searches, and that's partly what's going on in the animation above. The AI is hitting a dynamic search depth limit that it sets itself based on how long it has taken it to find solutions in the past. Once it hits that limit, it tries the best thing it thought of.

Are you sure you want to port to Python? That would make it a lot more difficult to convert to C/C++, which are high-performance languages.

I know what you mean, one step at a time. Just make sure you're going in a good direction. The AI itself looks like it's going in a good direction. Sometimes it's good to step back and take a good look at things. I'm sure you've done this, but try to analyze how you think. Take note of how ridiculously flexible the human mind is and how it's constantly learning, recognizing patterns, and updating, updating, updating. So far the problem with similar AI code is that it's so rigid. Neural networks usually aren't rigid, but they're as slow as molasses and IMO present hardware can't create anything close to a human neural network brain. Your type of code is what will create sentient life with present hardware, eventually.

*

Berick

  • Roomba
  • 17
Re: AIRIS unsupervised one shot learning
« Reply #12 on: March 27, 2018, 12:48:02 am »
infurl: It's not the search that is computationally expensive (it's already using IDDFS), it's the model generation. The bottleneck comes from using a serial processor to do work that is much better suited to a GPU. Once we integrate parallel processing there will be a massive performance gain.

I'm also curious to see how it does outside of toy problems. In theory, it should work OK but we'll just have to see!

unreality: I'm not worried about raw performance at this stage. I'm more interested in the ease of plugging it in to industry benchmarks to see how it compares with the big boys. For example, getting one agent to learn and be able to play all of the Atari games is way more important than playing one Atari game really fast.

Quote from: unreality
Take note of how ridiculously flexible the human mind is and how it's constantly learning, recognizing patterns, and updating, updating, updating.

I agree, and corrigibility is one of the key things that makes this method work as well as it does.

*

ranch vermin

  • Not much time left.
  • Terminator
  • 947
  • Its nearly time!
Re: AIRIS unsupervised one shot learning
« Reply #13 on: March 27, 2018, 12:56:49 am »
Python is a compiled language, it goes as fast as C, doesn't it?
And about symbolic problems only being suited to toys, that isn't true, because we all "grab a cup of coffee to drink it and get a battery"; it's just a coarser view of the situation than all the real photos and video which confuse and dumbfound the AI programmer.

*

Berick

  • Roomba
  • 17
Re: AIRIS unsupervised one shot learning
« Reply #14 on: March 27, 2018, 01:29:20 am »
Nope, Python is definitely a lot slower.

Obviously I agree that symbolic approaches aren't limited to toys, but given the notorious historical failures of previous symbolic systems it's understandable why they're seen as a dead end.

 

