Releasing full AGI/evolution research

  • 290 Replies
  • 195106 Views
*

MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #255 on: April 24, 2021, 03:37:35 am »
I doubt I'd be very impressed with it. But if you're happy I won't get in the way! =)
It'll definitely work to a degree, but spotting the cracks is easy to do if you stop being so positively biased about your results.

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #256 on: April 24, 2021, 03:28:13 pm »
An evaluation that tests AGI on just one narrow task (who can build the highest tower, run the farthest, or make the most balloons) does not test for AGI, because AGI is general purpose and needs to solve many problems. Using a diverse dataset like enwik8 and seeing which algorithm compresses it better checks whether, across a diverse set of real-world problems, you can predict the correct solution accurately. So instead of actually building the highest tower, you predict the solution in text or images, and better prediction gives better compression. If you find patterns in data well, you predict better and hence compress better. Patterns in the universe are what let us do, control, or take advantage of anything. Hammers are re-used and sold, and so are homes; these are patterns/tools that work really well.
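The prediction-equals-compression link can be sketched in a few lines: a toy adaptive character model assigns a probability to each next symbol, and the sum of -log2(p) over the text approximates the size an arithmetic coder driven by that model would achieve. This is a hypothetical illustration of the general idea, not the algorithm discussed in this thread:

```python
import math
from collections import defaultdict

def estimated_compressed_bits(text, order=2):
    """Estimate the compressed size of `text` under an adaptive
    order-N character model with add-one smoothing. An arithmetic
    coder using the same model would land close to this many bits."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> next-char counts
    alphabet = sorted(set(text))
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        total = sum(counts[ctx].values()) + len(alphabet)  # add-one smoothing
        p = (counts[ctx][ch] + 1) / total
        bits += -math.log2(p)         # cost of coding this char
        counts[ctx][ch] += 1          # update the model after coding (adaptive)
    return bits

text = "the cat sat on the mat. " * 40
print(estimated_compressed_bits(text) / 8, "bytes estimated vs", len(text), "raw")
```

The better the model predicts, the fewer bits each character costs, which is exactly the sense in which better prediction means better compression.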
Emergent          https://openai.com/blog/

*

infurl

  • Administrator
  • Eve
  • 1372
  • Humans will disappoint you.
    • Home Page
Re: Releasing full AGI/evolution research
« Reply #257 on: April 24, 2021, 05:33:08 pm »
Honestly, this is like watching a child playing with blocks. :2funny:

*

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #258 on: April 24, 2021, 10:51:09 pm »
You continue to say that, yet have nothing in return to explain or show. I explain, and I show a simple but deep algorithm, and I am coding it. While Transformers are incredible, the fact that their programmers cannot explain all the things I explain, like how the future AI will learn new desired goals (AGI, not just food), or how current Transformers learn translation and recency priming, easily shows it is very cloudy. You can go read all you want on it and get nowhere; it's literally just a big backpropagation algorithm.

*

MagnusWootton

Re: Releasing full AGI/evolution research
« Reply #259 on: April 25, 2021, 08:51:01 am »
Quote from: LOCKSUIT on April 24, 2021, 03:28:13 pm
An evaluation that tests AGI on just one narrow task (who can build the highest tower, run the farthest, or make the most balloons) does not test for AGI, because AGI is general purpose and needs to solve many problems.

Even if you are just asking your robot to move from one xyz position to another, it depends on how many obstacles are in the way, or whether it even had to communicate with someone to get it to happen.

Even simple motivations can involve a complex hypothesis, if there's enough "environment" stopping it from getting even something simple done.

"you must brush your teeth"

What if it has to go all the way to the shop to get a toothbrush, then there was no toothpaste left in stock, and it has to look in a street directory for another supermarket... etc., etc. It gets quite tediously complicated at times, even for simple (and easily sensor-detectable!! :)) tasks.


If you let the computer decide what it wants to do itself, it's more sentient, but it's also a lot more dangerous. If you decide its motivation, it's safer because it's in less control of itself, and that's what you want, IMO.
« Last Edit: April 25, 2021, 10:03:06 am by MagnusWootton »

*

ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Releasing full AGI/evolution research
« Reply #260 on: April 25, 2021, 03:34:21 pm »
Quote from: MagnusWootton on April 25, 2021, 08:51:01 am
If you let the computer decide what it wants to do itself, it's more sentient, but it's also a lot more dangerous. If you decide its motivation, it's safer because it's in less control of itself, and that's what you want, IMO.

There was a computer game a while back (the beginnings of AI arcade gameplay, a few years ago) where an AI learned to play a specific game with a highest-level-wins strategy. The AI finally found a bug in the game and exploited that bug to cheat and survive longer.

*

MagnusWootton

Re: Releasing full AGI/evolution research
« Reply #261 on: April 26, 2021, 08:19:02 am »
That's not actually developing motivation. The computer's motivation is fixed: the score.

*

ivan.moony

Re: Releasing full AGI/evolution research
« Reply #262 on: April 26, 2021, 08:26:17 am »
Yes, but look what evolution did with 5.4 billion years of only a survive-and-reproduce strategy. More complex behaviors seem to come along with enough cycles spent on genetic mixing of the strongest ones.

*

MagnusWootton

Re: Releasing full AGI/evolution research
« Reply #263 on: April 26, 2021, 02:53:45 pm »
Quote from: ivan.moony on April 26, 2021, 08:26:17 am
Yes, but look what evolution did with 5.4 billion years of only a survive-and-reproduce strategy. More complex behaviors seem to come along with enough cycles spent on genetic mixing of the strongest ones.

If you want to develop computer logic (like an ASIC) out of random trialing, 20 bits for representing it is about a million different things to test before you get the best one. But 20 bits isn't enough for anything; you need something more like 1000 bits, and there haven't been that many people born since the birth of the Earth. That's why I'm a little superstitious about evolution.

But I guess there could be some kind of optimization happening in nature, of course. It all depends on whether some kind of structure develops that can test things linearly instead of exponentially. Even though that doesn't make sense in the form of one trial per life, it's some other thing happening.
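The "linearly instead of exponentially" point can be illustrated: random search over N bits needs on the order of 2^N trials, but selection plus crossover plus mutation solves a toy 64-bit target in a few thousand evaluations. A hypothetical sketch (the population size and mutation rate are arbitrary choices), not a model of real evolution:

```python
import random

random.seed(0)

TARGET_BITS = 64          # random search would need ~2**64 trials on average
POP = 50
MUT = 1 / TARGET_BITS     # expected one flipped bit per child

def fitness(bits):
    return sum(bits)      # OneMax: count the 1s; maximum is TARGET_BITS

def evolve(max_generations=2000):
    pop = [[random.randint(0, 1) for _ in range(TARGET_BITS)] for _ in range(POP)]
    for gen in range(max_generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_BITS:
            return gen                       # generations needed to hit the target
        survivors = pop[: POP // 2]          # survive-and-reproduce
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(TARGET_BITS)
            child = a[:cut] + b[cut:]        # crossover ("genetic mixing")
            child = [bit ^ (random.random() < MUT) for bit in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max_generations

gens = evolve()
print(f"solved {TARGET_BITS}-bit target in {gens} generations "
      f"(~{gens * POP} evaluations, nowhere near 2**{TARGET_BITS})")
```

Because survivors carry over unchanged, the best fitness never decreases, and the mixing of partial solutions is what turns an exponential search into something closer to linear in the bit count.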

*

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #264 on: April 26, 2021, 05:42:27 pm »
Evolution used billions of years and massively parallel computation. It's the most boring, gruesome intelligence you can use to get a solution. The brain merges parent champ DNAs too, to try e.g. man+wing traits, except it's an idea created in the brain instead of an actual tool. You need to get this correct and understand it.

Anyway, saying a fixed goal is powerful: yes, but it's the slowest way to get a solution. AGI isn't explained like this. It does update its reward goals, but it isn't the moving/changing of goals that makes it faster or non-fixed... it's the pattern finding and pattern creation that are better...

*

MagnusWootton

Re: Releasing full AGI/evolution research
« Reply #265 on: April 27, 2021, 10:43:56 am »
We're all brought into life in an easy, comfortable kind of way, without much warning to heed, feeling safe and secure, but we are all going to our deaths.
That's how I feel about it anyway... if you feel like you were warned enough, I don't think that...

*

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #266 on: May 02, 2021, 04:41:13 pm »
Somehow I've landed on the idea that it will be incredible to use a HASH TABLE in my AI for memory storage. I'm sharing the idea below and asking for help, because maybe it won't work.

So let's say I manage to get a hash table working and it uses very little memory and time. And let's say I manage to not need to store contexts (keys!), only predictions, assuming I only rarely store collision duplicates (if I use the Cuckoo hashing algorithm) in the same item, which do have contexts to tell them apart. Such a design would look as follows:

input prompt:
Walking down a stree>?

Hash Table (brain memory):
[[t=3, b=7], [], [], [y=45], []]

"stree" turns into a code that tells the list to (instantly! :) ) access the first item "[t=3, b=7]" which are the predictions for this context "stree".

I'm wondering, though: how would I do delayed recognition with this data structure? E.g. "thanksZZZgiving" should activate the context (and therefore the predictions of) "thanksgiving". But in this data structure I don't have any contexts. I could remove the ZZZ, turn the rest into a hash code and search for it, but doing that for all possible delays seems slower than partially activating the context and then waiting for more activation at that node.

So how do you do hash table + delayed matching if you have no contexts?

Another question: can I use a hash table for 0-16-letter-length matches (assuming I don't store context keys), or will that not work out?
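The design being described can be sketched in a few lines: only prediction counters are stored at a hashed index, no context keys, so distinct contexts that happen to collide will silently share counters. The names and table size here are made up for illustration:

```python
TABLE_SIZE = 1 << 20                  # number of slots; power of two, arbitrary
table = [None] * TABLE_SIZE           # each used slot: dict of next-char -> count

def slot(context):
    # Hash + modulus picks the slot directly; no key is stored, so two
    # different contexts hashing to the same slot would merge their counts.
    return hash(context) % TABLE_SIZE

def learn(context, next_char):
    i = slot(context)
    if table[i] is None:
        table[i] = {}
    table[i][next_char] = table[i].get(next_char, 0) + 1

def predict(context):
    preds = table[slot(context)]
    return max(preds, key=preds.get) if preds else None

for text in ["walking down a street", "walking down a street"]:
    for n in range(1, len(text)):
        learn(text[max(0, n - 5):n], text[n])   # up-to-5-char contexts, for example

print(predict("stree"))
```

Note that Python's built-in `hash()` for strings is salted per process, so slot numbers differ between runs; a fixed hash function would make the layout stable on disk.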
« Last Edit: May 02, 2021, 07:00:49 pm by LOCKSUIT »

*

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #267 on: May 02, 2021, 08:41:27 pm »
So isn't a hash table that stores only predictions, and only a 1 or 0 (bit-wise prediction) in every cell, the best solution over neural networks etc.? Any big disadvantages? It seems to me all you really do is this; correct me if wrong:

input = thanksgivin
generate what to search for = index 45
the hash table = [[1:562, 0:35], [1:77, 0:980], [1:4, 0:9] ...]
prediction counters found = [1:4637, 0:362]

I.e. it seems fast to know where to look in the list, and all you store is numbers, and only up to 2 or 4 numbers per cell (plus any collisions along with their context key; only collisions need a context key). What could go wrong here? Is this good for holed matches, delayed matches, translation, etc.? Or are there future troubles here?

If so, I'm a little lost on how to get the key hash; does anyone have a good small generator with very few collisions? It seems easy once I get this out of the way.
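On the "good small generator" question: FNV-1a is a common tiny non-cryptographic hash with good spread and few collisions for short string keys. A sketch (the table size is an arbitrary example):

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a: a few lines, decent distribution for short keys."""
    h = 0x811C9DC5                          # FNV-1a 32-bit offset basis
    for byte in data:
        h ^= byte                           # xor the byte in first...
        h = (h * 0x01000193) & 0xFFFFFFFF   # ...then multiply by the FNV prime
    return h

TABLE_SIZE = 1 << 20                        # example table size

def index(context: str) -> int:
    return fnv1a_32(context.encode()) % TABLE_SIZE

print(index("thanksgivin"), index("hell"))
```

Unlike Python's salted built-in `hash()`, this gives the same index for the same context on every run, which matters if the table is ever saved and reloaded.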

*

MagnusWootton

Re: Releasing full AGI/evolution research
« Reply #268 on: May 03, 2021, 07:07:40 am »
I'm not sure what you're meaning.

A prediction has to have a context or you can't predict anything.

Indeed you can do it bitwise if you want, but it's best to work at the bus size of your computer (e.g. 32 bits) so you can match all the bits at once in the ALU; it goes 32 times faster.
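The word-at-a-time point, sketched: a single XOR compares all 32 bits of two words in one ALU operation, versus looping over the bit positions one at a time. A toy illustration:

```python
a = 0b10110010_11001010_11110000_00001111
b = 0b10110010_11001010_11110000_00001101

# One ALU-width operation: XOR marks every differing bit at once.
diff = a ^ b
print(bin(diff))            # non-zero bits are the mismatches

# ...versus checking the 32 bit positions one by one.
mismatches = [i for i in range(32) if (a >> i) & 1 != (b >> i) & 1]
print(mismatches)
```

The hardware does the XOR in a single cycle regardless of the word width, which is why matching at the machine's word size beats bit-by-bit comparison.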

*

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #269 on: May 03, 2021, 05:00:50 pm »
32 bits? 64 is better!

Yes, you are handed 'hell' and see it is in your memory with an 'o' at the end, so you predict: hell>o. With a hash table, 'hell' is converted into an index by a modulus, which then accesses where its predictions are stored. 'hell' accesses the same index every time, so it will drop its predictions there and also see what can come next.
« Last Edit: May 03, 2021, 05:44:46 pm by LOCKSUIT »

 

