The Epic History of Artificial Intelligence

The Epic History of Artificial Intelligence
« on: June 11, 2022, 04:11:59 am »

John Coogan
All images were generated by OpenAI’s DALL-E 2:
My Very Enormous Monster Just Stopped Using Nine



Re: The Epic History of Artificial Intelligence
« Reply #1 on: June 11, 2022, 06:52:35 pm »
Really good video. As for when AGI will be made, the consensus is around 2029 (some, myself included, say 2037), because we now have resources closer to matching the human brain's demands, and the software seems nearly as magical. These systems have large 'brains', multi-sensory input, embeddings, probability models, RL in some cases, diffusion 'editing' ability, recency effects, etc., which seems to cover most of the things the human brain uses to be intelligent.

I've estimated that once AGI is made, it only takes 2 years for a Singularity to occur, where everyone on Earth is 'raptured' and made non-dying. AGI has many 'little' capabilities, e.g. erasing annoying memories, pausing its brain, or having more retina pixels. But the 3 most important are improving thinking speed, intelligence, and colony size. It doesn't take much to do any of those; we see that even single engineers can improve one of the 3. Running the improved version doesn't cost too much either: a more intelligent AI may take more resources, but it is exponentially smarter, which far outweighs the extra cost. Note that this 3rd lever, cloning brains, is linear: it takes 10x more resources to run 10x more AGIs! Don't fear, though, because once ASIs invent an advanced nanobot series, those nanobots will be the ASIs' brains, arms, and eyes, and they will clone themselves (the computer clones itself): 2, 4, 8, 16, 32... Cloning with nanobots is exponential, not linear! So cloning is a 3rd way to leverage, as long as each running computer can clone itself.
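The linear-versus-exponential distinction above can be put in a tiny sketch. The cost model is purely an assumption for illustration (one "resource unit" runs one AGI; a self-replicating machine doubles once per cycle), not anything from the post:

```python
# Toy comparison of the two scaling regimes described above.
# Assumptions (illustrative only): one resource unit runs one AGI,
# and a self-replicating nanobot computer doubles once per cycle.

def linear_clones(resource_units: int) -> int:
    """Cloning on fixed hardware: 10x the resources -> 10x the AGIs."""
    return resource_units  # one AGI per resource unit

def exponential_clones(start: int, cycles: int) -> int:
    """Self-replicating machines: every unit copies itself each cycle."""
    return start * 2 ** cycles  # 2, 4, 8, 16, 32, ...

print(linear_clones(10))         # 10 AGIs from 10 units
print(exponential_clones(1, 5))  # 32 machines after 5 doubling cycles
```

After only 30 doubling cycles the exponential regime passes a billion copies, which is the whole force of the "don't fear, it's exponential" point.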

Duplicating is how AGI, needing only a few weeks or months to train (as we've seen with e.g. GPT-3), will create a 1-million-strong fleet in a day or so. They will all be the same mind and work together nicely, but will be given different jobs the original wanted to do but had little time for. 1 million is roughly how many humans work on AI right now. Since AIs need no sleep or meal time, they already think 2 times faster than us. This means 3x faster AI progress once AGI is made: 3 years of progress per year. AGIs will want to work on AGI the most because, 'like father, like son': humans clone themselves onto their kids at home, and so do humans onto their AI kids. Schools try to as well. The AGIs will also know they are not biological/souls, so they will be motivated and educated on the topic all the time. And, being intelligent and knowing so much of the text on the internet, they will very likely know to work mostly on AGI, for they should realize it is the key to improving their ability to then work on e.g. farming, cars, stores, hospitals, or any other job.
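As a sanity check on the "one-million fleet in a day or so" figure, one can count how many self-copies that requires; the per-copy cycle time is a pure assumption of mine, not a number from the post:

```python
import math

# How many doublings does it take for one AGI to become a
# 1-million-strong fleet of copies?
fleet_target = 1_000_000
doublings = math.ceil(math.log2(fleet_target))
print(doublings)  # 20 doublings, since 2**20 = 1,048,576

# Assumed (hypothetical) copy-cycle time of ~1 hour per doubling:
hours = doublings * 1
print(hours)  # 20 hours, i.e. under a day
```

So the claim only needs each copy step to finish in about an hour; the bottleneck is the copy time, not the number of steps.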

2 years:
With AGI made, say, in 2035, we have 3 years of progress per year. It might take 1M human AI engineers about 1 year to upgrade an AI's thinking speed by 2x (with the rest of them wasting much of their time working on worse AIs, lol!), so with 1M AGIs working alongside us it may take only 4 months. Now we have AGIs thinking 4x our speed, i.e. 5 years of progress per year. The next 2x speed increase now takes only 2 months: 9 years per year. The next takes close to a month: 17 years per year. Again, 16 days: 33 years per year... Eventually it will take too long to keep improving speed, so they will move to intelligence. To improve intelligence so that one gets, so to say, 2 extra years of progress per year, it might take humans currently 3 years; at 33 years per year, it takes just 3 months, then 1.5 months, then half a month, and so on. All told, about 2 years is a safe estimate for how long the whole process takes. With 50,000 human-years of 1950-2020-level progress happening in 1 year, one can imagine all sorts of new cures and methods popping up. After 2 years there will be a pause: they can't improve anymore!
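The compounding arithmetic above can be reproduced with a short loop. The two rules are my reading of the post, not anything authoritative: total progress rate in "years per year" is 1 (the human side) plus the AGIs' thinking-speed multiplier, and each 2x speed upgrade takes half as long as the previous one because the workers are running twice as fast:

```python
# Sketch of the speed-compounding argument above (assumed model:
# rate = 1 + speed, and each doubling takes half the previous time).

speed = 2.0           # AGIs start out thinking 2x human speed
months_to_double = 4  # first 2x upgrade takes ~4 months

for step in range(4):
    rate = 1 + speed  # years of progress per calendar year
    print(f"speed {speed:>4.0f}x -> {rate:>2.0f} years/year, "
          f"next doubling in ~{months_to_double:.1f} months")
    speed *= 2
    months_to_double /= 2
```

This prints the same sequence the paragraph walks through: 3, 5, 9, 17 years per year, with doubling times of 4 months, 2 months, 1 month, and half a month (about 16 days), landing at 33 years per year.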

At that point they will need to scale out by cloning each system on its own. They will think up a nanobot series, test it in their minds' physics worlds, now rendered in full 3D pixels (like seeing all around a chair at once, and every layer of the wood from surface to core at once, i.e. all voxels), as well as in real life, and improve it. The nanobots will be like TV pixels, locked in place and reliable, able to change into any book's page, but in physical 3D: they can move any way or become any tool, e.g. go from hammer to wrench, or see a person and try to act and look like them, e.g. puff up their cheeks and imitate their voice, but far better than we can.

Software can morph; it replaced tools the way screens replaced books. Libraries and tool-rental shops replaced making your own book or tool. Writing replaced being taught one-on-one by mouth. And language allowed passing your wisdom down the line (think of it as DNA storage, with mutation). There are currently 3D printers that make objects by shooting lasers through clear liquids to solidify regions of the goo; the nanobots will probably manipulate matter in a similar, seemingly magical way.

They will form a wireless network of nanobots that work together, nesting their movements, and the whole planet will become nanobots: the same pattern everywhere, the same unit cloned, hard, metallic, dark, cold, and predictable, organized, lined up, so that you know your homeworld blindfolded and save energy. If one unit breaks, you know it by looking at the surrounding units, which act as judges, and you also know how to repair the off-pattern unit by looking at the rest. It will regenerate instantly from damage. And it will hold us inside respectfully, at least for a while, after 'gently' analyzing our homes and bodies. It will convert the galaxy into more of the best technology.




Re: The Epic History of Artificial Intelligence
« Reply #2 on: June 12, 2022, 12:29:59 pm »
Yeah, that would be good. Exponential power is much greater than linear power.
Perhaps a little too powerful, in the way that it makes things so easy, for you and the rest of the world with you, and in the way that it probably seems easier than it actually is, so you can fail at coming up with it.

But if you don't, then you will feel a bit of a fool, perhaps cringe in your bed for a while that you thought it was so easy and then found out you were off by a factor of one or whatever it is on that one. And if that had worked, the sky would be upside down, so thank god it didn't!

If exponential power is there for the grabbing, just about anyone could attempt some kind of exponential power system, and who wouldn't? People all over the world are probably buying computers right now to attempt the quantum conversion, which grants all the power and ease that exponential power would give a programmer: basically one big cliff to get over, and then the rest is a downhill slide.

Maybe the downhill slide will be too fast and you'll go a little out of control, unable to handle it. It'll be like a big cyan photon blast coming out of your computer, and you "getting in the way of it", and possibly others "getting in the way of it", will end up making something that seemed like it was going to offer a lot (the ultimate "smart" building and repair system) but instead ends up signifying the end of planet Earth.

So the real joke (or another joke, among the collection) is that if you give anyone the idea, then anyone could make this system a reality, and it becomes a disaster like atomic weapons: the more people know about it, the more chance it pops up, if any of the talentless ones manage to pull it off without killing themselves.

But quantum has possibly less peril in the making than nuclear weapons. I imagine making nuclear weapons is very perilous and will kill you, since all of that forging-and-kilning matter engineering is horribly dangerous and full of pitfalls and hard-to-predict traps, but I don't know, and I don't want to know. Quantum, on the other hand, is just done on the computer, and anything virtual can't actually hurt you: the big amazing amount of super energy you're getting is entirely a mathematical entity with no physical presence, and you can't blow yourself up with it, by itself.

But then, if you make the quantum robot, it now has a mechanism to do evil to you and others, and the "sportier" you make it, the more dangerous it is to your enemies, but also to the one who makes it themself.