Releasing full AGI/evolution research


LOCKSUIT

« Reply #90 on: May 31, 2020, 01:43:16 pm »
Doing zero-shot or one-shot learning means you are getting [information] from somewhere. The same goes for multi-shot learning.

Learning (finding) patterns in big data improves intelligence.

LOCKSUIT

« Reply #91 on: June 02, 2020, 05:36:32 pm »
Did anyone here ever realize that 1) more data makes AI make smarter decisions, and 2) you can get more data from the same-size dataset? Here are two ways how (I know many more):

What do you think?

LOCKSUIT

« Reply #92 on: June 02, 2020, 09:34:54 pm »
I suspected this for years, but here's a bit more proof:

Say you have only seen the prompt "cats" and can't generalize to dogs. With the predictions eat, run, etc., you can actually translate these, and this helps you predict better. While you've only seen 'eat' and 'run' follow 'cats', you know 'eat' is very similar to 'dine', so you can say 'dine' has a higher chance of appearing than 'bike', all by looking at just the predictions!
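Here is a minimal Python sketch of that idea, assuming toy word embeddings (the vectors, vocabulary, and counts below are made up for illustration): observed next-word counts get smoothed onto similar words, so 'dine' outranks 'bike' without ever having followed 'cats'.

import numpy as np

# Toy word vectors; the values and vocabulary are invented for illustration.
emb = {"eat":  np.array([0.9, 0.1]),
       "run":  np.array([0.1, 0.9]),
       "dine": np.array([0.85, 0.15]),
       "bike": np.array([0.2, 0.8])}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

counts = {"eat": 3, "run": 1}   # observed next words after the prompt "cats"

# Spread each observed count onto every vocabulary word, weighted by similarity.
scores = {w: sum(c * cos(emb[w], emb[seen]) for seen, c in counts.items())
          for w in emb}
total = sum(scores.values())
probs = {w: s / total for w, s in scores.items()}
print(probs)   # 'dine' now outranks 'bike', though it never followed "cats"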

HS

« Reply #93 on: June 02, 2020, 10:17:07 pm »
Maybe it could benefit from some larger, higher-level generalizations and heuristics that then predict the lower-level ones. Think big sine waves made of smaller sine waves, and so on: fractal sine waves. You can get good top-down predictions that way, better than trying to get a correct big picture by studying the nature of the details. Like having a general prediction for what a book would consist of, then dividing that idea and elaborating on each part, etc.

The most efficient process for putting rocks into a bucket is biggest to smallest; that way all the pieces fit together. The biggest pieces of data make a rough approximation of the idea, then progressively smaller pieces make increasingly accurate and precise approximations. If you put the small ideas or rocks in first, some parts of your model may be precise, but the whole thing won't end up being accurate to life. A toy sketch of this coarse-to-fine idea follows below.
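A toy Python illustration (the signal and frequencies are invented for the example): fit the biggest component first, then refine the residual with progressively smaller ones, and watch the remaining error shrink.

import numpy as np

t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = (3.0 * np.sin(2*np.pi*1*t)      # the big rock
          + 1.0 * np.sin(2*np.pi*8*t)    # medium rocks
          + 0.3 * np.sin(2*np.pi*32*t))  # pebbles

model = np.zeros_like(signal)
residual = signal.copy()
for freq in (1, 8, 32):                       # biggest component first
    basis = np.sin(2*np.pi*freq*t)
    amp = residual @ basis / (basis @ basis)  # least-squares amplitude
    model += amp * basis
    residual -= amp * basis
    print(f"after freq {freq:2d}: max remaining error {np.abs(residual).max():.3f}")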


LOCKSUIT

« Reply #94 on: June 02, 2020, 10:47:05 pm »
Ah. I see what I've been doing wrong.

I should have started with a blow up doll. I knew it.

Oh wait, already been there.

That's a good discovery, HS. Big rocks go in the bucket first. Then you shake it as well, to use up all the space. Wahahahaha. You literally do nothing to make the bucket heavy. I heard a company (I forget which) is letting AI improve circuit boards like that: big components go on first.

Well, HS, that's exactly what my current code does: [long] context matches in the tree shown in the image above get the most weight during Mixing. You can also do Byte Pair Encoding top-down, but going too high isn't needed and costs far too many resources. Hmm, perhaps reward or temporary energy etc. could take weight first during Mixing; I never thought about that, even though I should have, lol. For example, you may consider your friend's opinion on the future word to predict and ignore attention to other information; you just don't look at it.
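A rough Python sketch of that kind of Mixing, where longer context matches dominate the blend (the exponential weighting and the toy model below are assumptions, one simple choice among many):

from collections import defaultdict

def mix_predictions(history, model, max_order=5):
    # Blend next-token distributions from every matched context length;
    # longer matches get exponentially more weight.
    scores = defaultdict(float)
    for order in range(1, max_order + 1):
        if order > len(history):
            break
        ctx = tuple(history[-order:])
        counts = model.get(ctx)            # next-token counts seen after ctx
        if not counts:
            continue
        weight = 2.0 ** order              # assumption: double the weight per extra token
        total = sum(counts.values())
        for tok, c in counts.items():
            scores[tok] += weight * c / total
    z = sum(scores.values()) or 1.0
    return {tok: s / z for tok, s in scores.items()}

# Toy model: context tuples -> next-token counts.
model = {("the",): {"cat": 2, "dog": 1},
         ("saw", "the"): {"cat": 5}}
print(mix_predictions(["he", "saw", "the"], model))
# {'cat': ~0.89, 'dog': ~0.11} -- the longer match pulls prediction toward 'cat'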

LOCKSUIT

« Reply #95 on: June 04, 2020, 03:27:36 pm »
Big new discoveries. Something big is on the way. First, some preliminary findings of lesser importance:


Crosspost:

I used to lol a lot myself; I later realized my research should not include too much lol. It [can] let others know you mean well, but scientific papers don't say lol much, so I guess that says something.

Oh, that's (scientific papers) akin to cops. Cops are trained not to smile or laugh on duty; they mean business, all emotion gone. They focus on reason/science/ideas/thoughts, not the primitive food/breed agenda. Hmm, but that isn't fully good now, is it? Our root goal is very important to us, and to all future systems in Evolution. Today many people are busy working (well, maybe not today, lol), but we need to remember that we have/need love inside, and that all our hard work is for the food on the table and for long-term survival. We must not lose focus on high-level reasoning or on caring about others.

Art

« Reply #96 on: June 05, 2020, 03:29:10 am »
Everything in moderation.

There's a time and a place for everything. O0
In the world of AI, it's the thought that counts!


LOCKSUIT

« Reply #97 on: June 06, 2020, 01:39:59 am »
Everybody has got to spread the word that we need to keep the economy going, to reach future technology before we die. We have these computers and advanced algorithms; just recently we didn't. We are getting very close. We can't take vacations. Something very powerful awaits us, so big and fun you probably don't realize it. Too many humans are concerned with short-term things but not actual long-term survival! Just sad. They all disappear in the end, after all that effort. We as a species will get much longer lives (just look at the future human-AI species), but we ourselves can only get in if we work hard. There's a huge gift, but it's not free; you're still on Earth!

LOCKSUIT

« Reply #98 on: June 10, 2020, 12:04:23 am »
Isn't "Pooling" basically "Activation Function" !?


So, Pooling puts the largest variable "on show" and puffs it up bigger than it already is, big gets bigger faster aka Pools. The small don't really get included/ Attention.


As for Activation Function, it can be an S curve function, a large sum may be well over a threshold and gets a big boost therefore. It puts the big "on show", as well, like Pooling.

Big get more weight. Small might not even trigger much activation output. Only if big inputs enter does it activate a lot.


So, small variables don't affect the output really as much as they say they will. Big gets extra biggness.


?
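A small numeric sketch of the comparison in Python (the window values, steepness, and threshold are arbitrary assumptions): both operations let the biggest input dominate the output.

import numpy as np

x = np.array([0.1, 0.2, 0.3, 2.5])   # one window, dominated by a single big value

# Max pooling: only the largest value in the window survives.
pooled = x.max()

# S-curve activation on the sum: a big input pushes the sum past the
# threshold and gets a disproportionate boost; small inputs barely register.
def s_curve(s, steepness=4.0, threshold=2.0):
    return 1.0 / (1.0 + np.exp(-steepness * (s - threshold)))

print(pooled)                                         # 2.5 -- smaller values ignored
print(s_curve(x.sum()))                               # ~0.99 -- big value dominates
print(s_curve(np.array([0.1, 0.2, 0.3, 0.4]).sum()))  # ~0.02 -- barely activates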

LOCKSUIT

« Reply #99 on: June 10, 2020, 04:09:41 pm »
Hmmm, convolution is able to recognize features no matter where they are in the image. Essentially it goes bottom-up: local features decide the activation of higher features. If they match well, activation piles up higher in the hierarchy.

Max Pooling is the most common type of Pooling, and it ignores all the smaller values, but that tells me the values could be mixed. Average Pooling does exist, and I believe an exponential mixing would be best, so that small values are mixed in but only barely, acting "like" Max Pooling. The Activation Function typically does that too: if you have a big value, it ends up bloated much larger than it really is.
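A quick Python sketch contrasting the three (the temperature here is an assumption; raising it pushes the exponential mix closer to Max Pooling):

import numpy as np

def max_pool(x):
    return float(x.max())

def avg_pool(x):
    return float(x.mean())

def exp_pool(x, temperature=5.0):
    # Softmax-weighted average: small values contribute, but barely,
    # so the result sits close to the max.
    w = np.exp(temperature * x)
    return float((w * x).sum() / w.sum())

x = np.array([0.1, 0.2, 0.3, 0.9])
print(max_pool(x), avg_pool(x), exp_pool(x))
# 0.9, 0.375, ~0.83 -- the exponential mix lands near the max
# but still feels the smaller values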

LOCKSUIT

« Reply #100 on: June 10, 2020, 04:55:19 pm »
Pre-Release: Building On GPT-2's Successors "Blender" and "PPLM"

I'm writing something bigger, but I really want to release something new I discovered. I hope you can wrap your head around it for now.

Semantics / embeddings / translation let GPT-2 recognize a sequence of text against various memories and "see" what word follows next. This helps so much. It lets it generalize and gather more experience. It's very real and really works.

Blender and PPLM build on GPT-2, or at least it looks exactly like that. They force it to talk about certain topics, like politics or flowers. They essentially gave it desires / a focus, instead of it talking about all sorts of domains which won't help it survive.

My big discovery is that we start off at birth wanting food and mating, and we recognize similar words by shared contexts, like money, farming, shelter, cars, science, etc., which get us our survival needs so we can spread our genes, or AI designs (the new evolution). This infects the "money" node with reward, and now forces the AI to start blabbering about money all day... It makes it specialize and evolve new goals, so that it collects/generates new data from a relevant domain. All output of the AI exists only to control input, so that it doesn't take in data from random sources such as websites, topics, or lab tests. It specializes its source, so its inputs are non-random.

What this reward updating is, is semantics / embeddings / translation. The only difference is that it saves checkpoints on where to exploit, then searches there. So instead of GPT-2 asking "I will cure cancer by ", it focuses on viewing that semantically as mostly (or starting off already at) "I will repair cells by ".

RL for learning to walk does the exact same thing: it specializes its motor actions until it gets the most reward/acceleration. In our case, our reward is Prediction Accuracy.
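A minimal Python sketch of that reward-leak idea, assuming made-up concept embeddings and a made-up leak fraction (none of these numbers come from the post): reward installed on "food" spreads to semantically similar nodes.

import numpy as np

# Hypothetical concept embeddings; all numbers are invented for illustration.
emb = {"food":    np.array([1.0, 0.0]),
       "money":   np.array([0.8, 0.2]),
       "farming": np.array([0.9, 0.1]),
       "rocks":   np.array([0.0, 1.0])}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

reward = {w: 0.0 for w in emb}
reward["food"] = 1.0            # the innate, installed-at-birth reward

LEAK = 0.5                      # assumption: fraction of reward that spreads
for w in emb:
    if reward[w] == 0.0:
        # Leak reward from already-rewarded nodes to semantically similar ones.
        reward[w] = LEAK * max(reward[s] * cos(emb[w], emb[s])
                               for s in emb if reward[s] > 0)

print(reward)   # "money" and "farming" now carry reward; "rocks" gets almost none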

LOCKSUIT

« Reply #101 on: June 13, 2020, 07:05:34 pm »
Another way of saying it, but clearer:

I have realized a very large next step for Blender/PPLM. I want to keep it short here but still fully detailed. So, you know how GPT-2 recognizes the context prompt against many past experiences/memories, right? It generalizes/translates the sentence, and may decide bank = river, not TD Bank. This is one of the things that helps it a lot.

Now, you know how humans are born with low-level rewards for food and mates, right? Well, through semantic relation, those nodes leak/update reward to similar nodes like farming/cash/homes/cars/science. Then it starts talking/driving all day about money, not just food. It specializes/evolves its goal/domain. Why? Because it's collecting/generating new data from specific sources/questions/context prompts, so that it can answer the original root question, of course.

It takes the installed question wanting an outcome, e.g. "I will stop ageing by _", and does what I said above ("recognizes the context prompt against many past experiences/memories"), except it permanently translates into a narrower domain to create a checkpoint(s). So while recognizing a Hard Problem context prompt/question we taught it/installed, like "I will stop ageing by _", it jumps into a new translation/view and creates a new question/goal: "I will create AGI by _". It's semantics; it's gathering related predictions from similar memories, the same thing, just that it is picking specific semantic paths and updating, just like RL. RL for text (prediction is the objective).
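One way to read that as pseudocode, sketched in Python (the translation table, reward scores, and greedy rule are all my assumptions, not something from the post): the installed question jumps to whichever semantic translation earns the best prediction-accuracy reward, saving each jump as a checkpoint.

# Hypothetical toy setup: candidate semantic translations per question, and a
# made-up reward() standing in for prediction accuracy under each goal.
translations = {"I will stop ageing by _": ["I will repair cells by _",
                                            "I will create AGI by _"]}
reward_table = {"I will repair cells by _": 0.4,
                "I will create AGI by _":   0.7}

def reward(question):
    return reward_table.get(question, 0.0)

def evolve_goal(question, steps=3):
    # Greedily jump to the semantic translation with the best reward,
    # saving each jump as a checkpoint -- an RL-style goal update.
    checkpoints = [question]
    for _ in range(steps):
        candidates = translations.get(question, [])
        if not candidates:
            break
        best = max(candidates, key=reward)
        if reward(best) <= reward(question):
            break               # no better, narrower domain found
        question = best
        checkpoints.append(question)
    return checkpoints

print(evolve_goal("I will stop ageing by _"))
# ['I will stop ageing by _', 'I will create AGI by _']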

LOCKSUIT

« Reply #102 on: June 14, 2020, 03:20:54 pm »
What I'm saying is:

Blender can be forced to incorporate certain topics into what it says, so it talks about, e.g., cars all the time, no matter what. Humans do too, but they evolve it: they start off wanting food or mom, then they can discover food = farming semantically, and now talk about farming a lot more.

This agenda/persona updates/specializes into a narrow domain. It evolves its question. The output of AGI controls the input source to collect/generate data from: it decides which lab tests to try, and those determine which lab tests to try next. At first, the input source is random data collection, just like those robots that learn to walk.

LOCKSUIT

« Reply #104 on: June 16, 2020, 05:06:05 pm »
Results are all over the place. The true answer was 'eagles'. I was testing whether humans would find the pattern in each sentence: the 2nd word is always related to the final word.