Releasing full AGI/evolution research

  • 290 Replies
  • 161865 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #105 on: June 21, 2020, 12:01:00 am »
Wrote something sweet today so will share it:

The brain loves to collect lots of data (sight-seeing) because more data allows it to solve future problems better.

Dogs love new toys (new data) because they let them explore new problems. New data is more data.

Speaking of exploring: exploiting is when you work all day in some domain you love (e.g. AI) because other new data is not actually so useful. We evolve this filter/these goals; we start off focused on food, then move attention to cash, then to jobs, if they have similar contexts. The brain makes "checkpoints", i.e. filters for where in the manifold space to collect new data from, and then explores there. Blender and PPLM both do this (though they don't evolve their filters).

As for your questions, browsing Instagram with color turned off feels worse because of the missing data. Or perhaps you want to look at the food directly and don't need the data, but the same problem remains: you are not seeing what you expect/forecast to see, because you lack sufficient data. The brain always wants to see a future.

If you play a game, the visuals may make the knowledge more relatable and easier to understand.

And when treats are paired with another domain, you can make the agent "get into" that domain.
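
A minimal sketch of the explore/exploit idea above, assuming a toy setup where "domains" are just labelled data sources and the filter is a set of reward-weighted preferences that get updated (evolved) as rewards arrive. All names and numbers here are hypothetical illustrations, not anything from Blender or PPLM.

Code:
import random

# Hypothetical toy: the agent chooses which "domain" to collect new data from.
# The filter is a set of preference weights that evolve as rewards arrive.
domains = {"food": 1.0, "cash": 1.0, "jobs": 1.0, "AI": 1.0}

def pick_domain(weights, epsilon=0.2):
    """Explore a random domain with probability epsilon, else exploit the favourite."""
    if random.random() < epsilon:
        return random.choice(list(weights))      # explore: new data
    return max(weights, key=weights.get)         # exploit: favourite data

def update_filter(weights, domain, reward, lr=0.1):
    """Evolve the filter/checkpoint: shift preference toward rewarding domains."""
    weights[domain] += lr * (reward - weights[domain])

for step in range(100):
    d = pick_domain(domains)
    reward = 2.0 if d == "AI" else 0.5           # pretend AI data pays off most here
    update_filter(domains, d, reward)

print(domains)   # preferences drift toward the rewarding domain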
« Last Edit: June 21, 2020, 02:25:09 am by LOCKSUIT »
Emergent          https://openai.com/blog/

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #106 on: June 22, 2020, 06:13:29 am »
DNA and brains both store information and mutate it to create new organisms and new ideas. We pass down our genes and our memories to our kids. Today you can easily see the pace of evolution getting faster, and it is clearly all about the computation of data and AI. That's because evolution *is* computation: phones, communications, AI, etc.

DNA and brains both model a ton of data; they can do a LOT with very little. What DNA and brains model is patterns. We have many hairs, fingers, eyes, and cells, but only one code for them. The universe has patterns because its laws of physics are few. Structures like DNA and brains exploit this and model the world so that they can predict, successfully reproduce, or maintain their own form. So, because of patterns, organisms evolve toward getting better at being "immortal".

Planets that are too large become suns/stars because of too much mass in the core. The bigger your DNA or brain is, the exponentially more it can model: data relationships are combinational. So bigger systems can evolve faster and survive much longer, maybe even beating the odds at some point, assuming our universe doesn't die of heat death.
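
As a rough illustration of "data relationships are combinational": the number of pairwise relations among n stored items grows as n(n-1)/2 and the number of possible subsets as 2^n, so a bigger store can relate far more than proportionally more. A tiny, purely illustrative sketch:

Code:
from math import comb

for n in (10, 100, 1000):
    pairs = comb(n, 2)        # n(n-1)/2 possible pairwise relations
    subsets = 2 ** n          # all possible combinations of stored items
    print(f"n={n}: pairs={pairs}, subsets ~ {float(subsets):.2e}")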

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #107 on: June 23, 2020, 05:03:58 am »
Oh, also: not only does DNA model patterns within the body, e.g. coding many fingers with one template, it also models patterns of things in the environment. For example, it finds that it's common for the organism to traverse the ground, so wings are most useful.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #108 on: June 25, 2020, 11:55:13 pm »
https://community.singularitynet.io/t/pre-release-building-on-gpt-2s-successors-blender-and-pplm/2958/3

Notice that by the end I solidify the concept that 1) more data improves prediction, 2) new data (exploring) does even more, and 3) favorite data (exploiting) does even more! And we evolve/update our filters, unlike Blender/PPLM, which don't evolve/update theirs. The same concept is used in RL for learning to walk, but it's more powerful if done for text/vision!

To make Blender/PPLM more AGI-like, you force it to talk about food/breeding (survival) most of the time; then it leaks in the embedding space to related nodes. It's just generalization to past memories to help prediction, like in GPT-2, BUT it must save/update checkpoints! These desires/forcings, as in Blender/PPLM, drive (as they call it) the model's prediction/attention.
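
A minimal sketch of this kind of steering, in the spirit of PPLM's simpler bag-of-words variant (not Blender's or PPLM's actual code): the next-token distribution from any language model is re-weighted toward a "survival" word list, and the word list itself, the checkpoint/filter, can be updated over time. The model and word list below are hypothetical stand-ins.

Code:
# Hypothetical next-token distribution from some language model (word -> probability).
def fake_lm_next(context):
    return {"pizza": 0.30, "music": 0.25, "hungry": 0.20, "code": 0.15, "baby": 0.10}

survival_words = {"pizza", "hungry", "baby"}      # the "forced" topic / checkpoint

def steer(probs, topic_words, strength=2.0):
    """Boost topic words, then renormalise: a crude bag-of-words steer."""
    boosted = {w: p * (strength if w in topic_words else 1.0) for w, p in probs.items()}
    z = sum(boosted.values())
    return {w: p / z for w, p in boosted.items()}

def update_checkpoint(topic_words, rewarding_word):
    """Evolve the filter: fold newly rewarding words into the topic set."""
    topic_words.add(rewarding_word)

probs = steer(fake_lm_next("I feel"), survival_words)
print(max(probs, key=probs.get))   # topic-related words now dominate the prediction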

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #109 on: June 27, 2020, 05:39:04 am »
If I didn't work on AGI, I'd work:

1) In the computer industry, making computers faster, smaller, less wasteful of energy, and mass-manufacturable using replicators; also general machine tools, sensors, motors, and energy production.
>>>
If you don't want to make the algorithm/AGI more intelligent, you can just train it on more data to improve accuracy. And if you don't have more data, you can just throw more compute at it! Why? Better AI and more data both control Attention during the "search" for answers. You can stumble upon a cure for cancer if you try every single possible drug, pill, or device (e.g. a nanobot), brute-force style. It's slower, but if your computer is fast or runs in parallel, then you can more or less skip the "AI/more data" (a toy brute-force sketch is at the end of this post).

2) Cryonics, drugs, reversing ageing. Cryonics is the most interesting, because it already works on humans at preserving [most] information, and it works very well on frogs and spiders that have evolved natural antifreeze. It's up for debate whether that lowest level of information lost in the brain is critical, but as said, [most] information is preserved! Fascinating domain.
>>>
Evolution is all about Generators. Any system (a rock, a fridge, a human, the Earth) transforms itself into its future self; its current state/context decides its future self. This is Computation. Both DNA and brains model/compress tons of data/patterns using very little storage. Through mutations/brainstorming, new DNA and new ideas/models are created. Today evolution moves fast because so many brains are communicating using better iPhones etc. Anything related to computers rules: TVs, speakers, games, phones, etc. It is the AI on the computer that can store the past and make futures. It's searching.

Then, once we reach "utopia", I can finally make the greatest video games etc. that I have in mind.
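
The brute-force sketch mentioned above: if evaluation is cheap (or parallel), you can enumerate every candidate instead of using a smarter model to direct attention. Everything here (the candidate space, the scoring function) is a made-up stand-in, not a real experiment.

Code:
from itertools import product

# Hypothetical: each candidate "treatment" is a combination of three ingredient doses.
doses = range(0, 10)

def score(candidate):
    """Stand-in evaluator; in reality this would be a simulation or an experiment."""
    a, b, c = candidate
    return -(a - 3) ** 2 - (b - 7) ** 2 - (c - 5) ** 2   # secretly peaks at (3, 7, 5)

# Brute force: no model of the problem, just try every combination.
best = max(product(doses, doses, doses), key=score)
print(best)   # (3, 7, 5), found by raw search rather than by a better predictor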

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #110 on: July 06, 2020, 12:39:59 pm »
Distill - MINE GOLD
https://distill.pub/2020/bayesian-optimization/
Does anyone here know how to apply this to text prediction? Hint: I already told yous.
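
One hedged way to read "apply this to text prediction": use Bayesian optimisation to tune a hyperparameter of a toy character-level predictor, here the interpolation weight between a bigram and a unigram model, by minimising held-out cross-entropy. The sketch below uses scikit-learn's Gaussian process plus an expected-improvement acquisition; the corpus, model, and settings are made-up stand-ins, not anything taken from the Distill article.

Code:
import numpy as np
from collections import Counter
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

train = "the cat sat on the mat and the cat ate the rat "
valid = "the rat sat on the mat "

uni = Counter(train)                      # character counts
bi = Counter(zip(train, train[1:]))       # character-bigram counts

def cross_entropy(lam):
    """Held-out cross-entropy of a bigram/unigram mixture with weight lam."""
    total = 0.0
    for a, b in zip(valid, valid[1:]):
        p_uni = uni[b] / len(train)
        p_bi = bi[(a, b)] / max(uni[a], 1)
        p = lam * p_bi + (1 - lam) * p_uni
        total -= np.log(p + 1e-9)
    return total / (len(valid) - 1)

# Bayesian optimisation: GP surrogate + expected improvement over lam in [0, 1].
X = [[0.1], [0.5], [0.9]]
y = [cross_entropy(x[0]) for x in X]
grid = np.linspace(0, 1, 101).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    imp = min(y) - mu                                  # we are minimising
    z = imp / (sigma + 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)       # expected improvement
    nxt = float(grid[int(np.argmax(ei))][0])
    X.append([nxt])
    y.append(cross_entropy(nxt))

best = int(np.argmin(y))
print("best lam:", X[best][0], "held-out cross-entropy:", y[best])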

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #112 on: July 12, 2020, 05:37:14 am »
You can see more of these strange tasks at the bottom (images): https://arxiv.org/pdf/1911.01547.pdf

My AGI blueprint/net predicts/recognizes text and uses frequency, similarity, recency, reward... but these strange tasks are modularities/dynamics of the net. For example, counting and checking 5>4 are just stored hierarchy-node sequences; there's no calculator or counter, just replay...

If I show you 4 blue objects and put a yellow outline around them, then show a new image with new blue objects of different shapes and sizes, how does it outline them? It must recognize a Byte Pair Encoding-like segment, "blue object", and translate a black square to yellow as long as it is touching blue. If I want every other outline square yellow, it's the same rule, but only as long as the yellow square is not touching another yellow. If the task is to fill in all objects, the rule is: translate a black square as long as it touches a colored square, and give it the same color as that colored square, as long as it can't reach the image wall... mind-boggling.
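
A minimal sketch of the "outline the blue objects in yellow" rule described above, on an ARC-style integer grid: colour every background cell that touches a blue cell yellow. The colour codes and example grid are arbitrary choices for illustration, not the actual ARC encoding.

Code:
import numpy as np

BLACK, BLUE, YELLOW = 0, 1, 4     # arbitrary colour codes for this sketch

grid = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
])

def outline(g):
    """Turn every black cell that touches a blue cell (8-neighbourhood) yellow."""
    out = g.copy()
    h, w = g.shape
    for r in range(h):
        for c in range(w):
            if g[r, c] != BLACK:
                continue
            neighbours = g[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbours == BLUE).any():
                out[r, c] = YELLOW
    return out

print(outline(grid))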

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #113 on: July 12, 2020, 01:44:17 pm »
https://arxiv.org/pdf/1911.01547.pdf
See those images? I know it's prediction, like GPT-2 or iGPT. But how do these tests guide us to AGI? I know I can do them, so AGI must too. But how can they help solve cancer etc.? GPT-2 gives answers; iGPT for video would too. These tests, however, seem more like labor or repair than answering big questions. Wtf? Count objects, move and flip, stack objects, group objects, outline them, draw the maze path, denoise, fill in, bag all objects, change color, change shape, link, upscale, laser mirrors, gravity, etc. How do these help solve cancer or other inventions??? I get that if you're in a starship and 1 of 10 engines breaks, you can repair it by looking at the others, but...

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #114 on: July 12, 2020, 07:17:04 pm »
I'm still trying to figure out how this can be incorporated into a GPT-2/iGPT. It's like an Internet of Things: it's prediction, and it uses patterns, but it doesn't seem like it can answer questions. Yet my brain can solve the tests.

Ok, I gave it thought. Some of the tests are not physics-based and hence useless, for example the psychedelic pattern fill-in; that isn't a sequence prediction or even object repair, just art repair... There is probably a maze test, and that and the laser test seem to be more of a tree search/video prediction than a static prediction: good for predicting a string threaded through, or a video of a man escaping a cave system. You can ask it to denoise, rotate, summarize, or translate objects (cat2dog). There's probably a stack/group test.

I'm not sure how these help answer big questions like GPT-2 "can". It seems like the dynamics of the net are controllable, and hence many of the tasks can be useless, some rarely used, and some often used. Is it confusing to anyone else? How often do you rotate objects or solve mazes in GPT-2? Rarely, right? And what's the stacking for? I know an invention may stack memory cells in rows; I guess the word "row" can, in vision, actually modify the old object. E.g. GPT-2 may write that the apple turned brown, was cut in half and stacked, then melted in an oven. Generating video would require morphing/rearranging the object, but that's based on the data/objects' relative locations fed to it.

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #115 on: July 13, 2020, 07:26:52 am »
You can see here I wrote a dozen tasks: pattern, untilHits, denoise, move&flip, change color, rotate, duplicate, scale, keepPositionsABit&countButChangeColor&shape, inflate screen as the object that has the most counts (ignore position, size, etc.), copy pattern, laser, advanced laser, fill-in, outline, every other outline, connect objects, stack objects, group objects.

I think asking it to do one thing is one thing, but having it "talk" about a plan/story is another thing (or many, should I say). I mean, I could generate a video saying "to use the lawn mower, push left, flip yourself around, then keep moving, stop at the wall of course, find the shortest route around your home, fill in the holes with soil, and line up some plants of the same type too"

Which incorporates many of the tasks above.
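
One hypothetical way to make that task list concrete: treat each task name as a grid-to-grid function and keep them in a registry, so a "plan" like the lawn-mower story becomes a sequence of task words applied in order. This is only a sketch of the idea, not an implementation of the paper's tasks; all names below are made up.

Code:
import numpy as np

def move_left(g):
    """Shift the whole grid one column left, padding with background (0)."""
    return np.pad(g[:, 1:], ((0, 0), (0, 1)))

def flip(g):
    """Mirror the grid left-right."""
    return np.fliplr(g)

def change_color(g, old=1, new=2):
    """Repaint every cell of one colour with another."""
    out = g.copy()
    out[out == old] = new
    return out

TASKS = {"move&flip": lambda g: flip(move_left(g)), "change color": change_color}

plan = ["move&flip", "change color"]          # a tiny "story" made of task words
grid = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]])
for step in plan:
    grid = TASKS[step](grid)
print(grid)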

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #116 on: July 13, 2020, 08:11:43 am »
Yeah, you could think of each of those tasks as being a word, like color, shape, move, scale, duplicate, and if you see it enough, you write about it more. So if you have seen "move left" many times and now see a new cube, your new cube will predict "move left" next: either the cube decodes/transforms itself, or the next item predicted is "move left". Naturally a video playing can do all the tasks shown above. The difference in the paper is that the change is *sudden*: it doesn't show you a cube falling down or being rotated, just the final frame! And it's just activating your videos in your brain. So upon seeing the new cube, you see it rotate, and you only draw the final frame, i.e. once it has transformed fully or the Byte Pair Encoding ends (write the next word or phrase; you only draw the final word of that).

This massively clarifies it all to me now, if I'm correct.

infurl

  • Administrator
  • Eve
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Releasing full AGI/evolution research
« Reply #117 on: July 13, 2020, 08:34:22 am »
Quote from: LOCKSUIT
This massively clarifies it all to me now, if I'm correct.

How do you propose to find out if you're correct?

Define what you mean by "I wrote a dozen tasks". Did you implement working software which you could demonstrate, or is it all in your head?

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #118 on: July 13, 2020, 08:44:27 am »
I haven't implemented much of it, but what I have implemented so far, even though it's little (and having seen others' code, e.g. OpenAI's results, inner workings, and explanations of those inner workings), grounded 80% of my work. My AGI work is mainly a collection of my discoveries plus knowledge from others; I move forward very precisely once my prediction confidence is high enough, and the more data I store, the farther I can extract new data out of what I have so far. It works; I need not code anything yet to be sure. So having coded just a bit etc. was more than enough evidence; I already didn't need that data, though it did fill in some big gaps. I was actually enlightened prior to coding it, not by coding it, lol; I was studying the Hutter Prize PPM algorithm, and that was when I learnt how the algorithm worked. Coding it didn't change much after that.

"I wrote a dozen tasks"
I mean the Google paper above; I wrote down the tasks they show in images, plus new ones, as text, is all...

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #119 on: July 13, 2020, 06:58:04 pm »
How Evolution Works for kids:
https://www.youtube.com/watch?v=ck4RGeoHFko

 

