Where we are at with AI so far


LOCKSUIT

Where we are at with AI so far
« on: January 03, 2022, 07:10:15 am »
So I know lots of people are working on different parts of AI, but let's look at the systems below; they are just a few AIs, but they pack a lot of power. As you know, I have tried Jukebox and GPT-3, which feel very close to human-level text completion and audio completion (e.g. techno), in that you can give them any context and they will react and predict what to do next like we would. I have not yet tried NUWA, which does text-to-video and more. Below I try multi-modal AI; it too is close to human level. It is amazing that these AIs train on about as much data as a human adult brain has "seen" in its lifetime, and they are using multiple senses! And they are using sparse attention, Byte Pair Encoding, relational embeddings, and more! DeepMind is also looking into retrieving matches from a corpus as a guide, the way a painter looks at the model to make sure they paint the truth onto the canvas. We might get AGI by 2029!
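As an aside, since Byte Pair Encoding came up: here is a tiny sketch of the core BPE idea, repeatedly merging the most frequent adjacent pair of symbols into one token. This is just my own toy illustration of the concept, not OpenAI's actual tokenizer code:

from collections import Counter

def bpe_merge_once(tokens):
    # Find the most frequent adjacent pair and merge it everywhere.
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(a + b)  # the pair becomes one new symbol
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("the theme then")
for _ in range(3):
    tokens = bpe_merge_once(tokens)
print(tokens)  # common chunks like 'the' end up as single tokens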


Note: Download any image below if you want to see its full resolution. Most are close to full size already, but any image that has e.g. 16 images stuck together left-to-right is just badly shrunken, so do download those ones if you're interested in them.

Note: small GLIDE, made by OpenAI and used below, is trained on only around 67-147 million text-image pairs, not the 250M the real GLIDE used, and has 10x fewer parameters ("neurons"): 300 million.


Here is a completion with no text prompt; all I fed in was half an image:
https://ibb.co/dQHhc0F


Using text prompts and choosing which completion I liked, I made this by stitching them together (it could only be fed a square image, but it still came out well!):
original image
https://ibb.co/88XLykd

elongated (scroll down page)
https://ibb.co/Rz9L03X


More:

"tiger and tree" + https://ibb.co/txsWY9h
= https://ibb.co/SnqWYr4

"tigers and forest" + https://ibb.co/XLGbHdw
= https://ibb.co/9GZ2s6p

"tigers in river and forest" + above
= https://ibb.co/zGw6kQY

"circuit board" + https://ibb.co/P6vnpwK
 = https://ibb.co/61ySX7H

"wildlife giraffe" + https://ibb.co/d4C3cH1
 = https://ibb.co/zXSTF3N

"bathroom" + https://ibb.co/KzGqtFz
= https://ibb.co/9H1YqWz

"laboratory machine" + https://ibb.co/cTyXzTG
 = https://ibb.co/6NjsJDK

"pikachu" + image
= https://ibb.co/3zJgWPw

"humanoid robot body android" + https://ibb.co/XWbN42K
 = https://ibb.co/pQWZ6Vd

"bedroom" + https://ibb.co/41y0Q4q
 = https://ibb.co/2Y0wSPd

"sci fi alien laboratory" + https://ibb.co/7JnH6wB
 = https://ibb.co/kBtDjQc

"factory pipes lava" + https://ibb.co/88ZqdX9
 = https://ibb.co/B2X1bn3

"factory pipes lava" + https://ibb.co/hcxmHN0
 = https://ibb.co/wwSxtVM

"toy store aisle" + https://ibb.co/h9PdRQQ
 = https://ibb.co/DwGz4zx

"fancy complex detailed royal wall gold gold gold gold"
https://ibb.co/BGGT9Zx

"gold gates on clouds shining laboratory"
https://ibb.co/qjdcPcR

"gold dragons"
https://ibb.co/L5qkmFS


It generates the rest of an image based on the upper half, or the left side, or whatever surrounds the hole you made (in-painting), together with the text prompt provided. You can use only a text prompt, or only an image prompt.

The use cases are many: you can generate the rest of an artwork, or a diagram, or a blueprint, or a short animation (4 frames stuck together as 1 image), or figure out what a criminal, or a loved one, looks like.

You can tell it to generate a penguin's head for the missing top, but with a cigar in its mouth, lol, and so on. You can also ask with just a text request and it'll pop out such an image.

Yeah, with these AIs you can get more of your favorite content easily, and if it had a slider you could easily elongate the image or song and backtrack on anything you didn't like it made (choosing which completion to keep, e.g. for the next 5 seconds).
GLIDE also works with no text prompt; it does fine, just maybe 2x worse.
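If I remember right, running it with no text prompt is just a matter of the prompt variable in the notebook's text-prompt cell; something like this should do it (I'm assuming an empty string here, check the notebook):

prompt = ''  # empty text prompt: the image context alone drives the completion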

--no text prompts--
https://ibb.co/TqPC18x
https://ibb.co/XynhTWS
https://ibb.co/ScFLrpk
https://ibb.co/s9jhrvb
https://ibb.co/Bcr3WXr
https://ibb.co/chJBkTJ
https://ibb.co/PFJkKFw
https://ibb.co/GF2HwXP


To use GLIDE, search Google for "github glide openai". I use it on Kaggle, as it's faster than Colab for sure. You must make an account and verify your phone number; only then, when you open the notebook in the editor, can you see the settings panel on the right side, where you need to turn on GPU and internet. Upload images with the Upload button at the top right, then in the image-loading part of the code (it says e.g. grass.png), put your own path instead. For example, I have:

# Source image we are inpainting
source_image_256 = read_image('../input/123456/tiger2.png', size=256)
source_image_64 = read_image('../input/123456/tiger2.png', size=64)

To control the mask, change the 40: slice to e.g. 30 or 44. To control the mask sideways, add one more index at the end, something like [:, :, :30, :30] (the mask tensor is indexed as batch, channel, row, column, so the last slice controls columns). Apparently you can also add more than one mask (grey box) by assigning several slices, e.g.:
mask[.....]
mask[.....]
mask[.....]
.....
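For what it's worth, here's a minimal sketch of how I understand that mask to work (the names follow the notebook's snippet above, but the exact slice values are just illustrative):

import torch as th

# Image tensors are shaped [batch, channels, height, width].
# The mask starts as all ones (keep every pixel), one channel deep.
source_mask_64 = th.ones_like(source_image_64)[:, :1]

# Zeros mark the region to repaint. This makes rows 40 and below the grey box:
source_mask_64[:, :, 40:] = 0

# A fourth index restricts columns too, i.e. moves the box sideways:
# source_mask_64[:, :, 40:, :30] = 0   # bottom-left region only

# And several assignments paint several grey boxes:
# source_mask_64[:, :, :10, :] = 0     # a strip across the top
# source_mask_64[:, :, :, 54:] = 0     # a strip down the right side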

Batch size sets the number of images to generate.

Once it's done, click Console to get the image, then right-click it to save it.


Here's mine for the open-source minDALL-E (this one allowed no image prompt, so just text):

minDALL-E was trained on only 14 million text-image pairs; OpenAI's was trained on 250M. And the model is only 1.5 billion parameters, ~10x smaller. This means we almost certainly HAVE IT! Anyway, look how good these are compared to OpenAI.com's post!!

"a white robot standing on a red carpet, in a white room. the robot is glowing. an orange robotic arm near the robot is injecting the robot's brain with red fuel rods. a robot arm is placing red rods into the robot brain."
https://ibb.co/0VL8Rvx

"3 pikachu standng on red blocks lined up on the road under the sun, holding umbrellas, surrounded by electric towers"
 https://ibb.co/xDQT3f6

"box cover art for the video game mario adventures 15. mario is jumping into a tall black pipe next to a system of pipes. the game case is red."
https://ibb.co/VBXVWsn

an illustration of a baby capybara in a christmas sweater staring at its reflection in a mirror
https://ibb.co/5WZWLT4

an armchair in the shape of an avocado. an armchair imitating an avocado.
https://ibb.co/nwwf1v4

an illustration of an avocado in a suit walking a dog
https://ibb.co/bvfPkxf

pikachu riding a wave under clouds inside of a large jar on a table
https://ibb.co/jHjV7mf

a living room with 2 white armchairs and a painting of a mushroom. the painting of a mushroom is mounted above a modern fireplace.
https://ibb.co/VmKqbHk

a living room with 2 white armchairs and a painting of the collosseum. the painting is mounted above a modern fireplace.
https://ibb.co/K5fPkvj

pikachu sitting on an armchair in the shape of an avocado. pikachu sitting on an armchair imitating an avocado.
https://ibb.co/XLJV4Hb

an illustration of pikachu in a suit staring at its reflection in a mirror
https://ibb.co/nMQRccf

"a cute pikachu shaped armchair in a living room. a cute armchair imitating pikachu. a cute armchair in the shape of pikachu"
https://ibb.co/dbJ1Ks6

To use it, go to the link below, make a Kaggle account, verify your phone number, then click Edit, go to the settings panel at the right, and turn on GPU and internet. Then replace the plotting code with the version below; it's nearly the same but prints more images. If you don't, it doesn't seem to work well.

https://www.kaggle.com/annas82362/mindall-e

import math
import matplotlib.pyplot as plt

# Reorder the candidates by rank (the notebook's CLIP scoring step).
images = images[rank]

n = num_candidates
side = math.ceil(math.sqrt(n))  # grid side length large enough to fit all n images

fig = plt.figure(figsize=(6 * side, 6 * side))
for i in range(n):
    ax = fig.add_subplot(side, side, i + 1)
    ax.imshow(images[i])  # draw the i-th candidate image
    ax.set_axis_off()

plt.tight_layout()
plt.show()






NUWA - just wow

https://github.com/microsoft/NUWA
 
 
OpenAI is working on solving math problems, if you look at their website openAI.com. The ways to do this IMO (learning to carry over numbers) are either trial and error, reading how to do it, or quickly coming to the idea using "good ideas". This carrying-over method is what can allow it to solve any math problem, like 5457457*35346=?. If OpenAI can do this, they can do other reasoning problems!!
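To make the carrying idea concrete, here is a toy sketch of grade-school long multiplication with an explicit carry, digit by digit (my own illustration of the procedure, not OpenAI's method):

def long_multiply(a: str, b: str) -> str:
    # Grade-school multiplication over digit strings, carrying as we go.
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10   # digit that stays in this column
            carry = total // 10          # amount carried to the next column
        result[i + len(b)] += carry
    return ''.join(map(str, reversed(result))).lstrip('0') or '0'

print(long_multiply('5457457', '35346'))  # the problem above: 192899275122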
 
 
Facebook's Blender chatbot version 2 is also a good thing; it looks like the way forward.
 
 
There's one other thing they may need to make AGI but I'm still researching it so I'll just keep it to myself in the meantime.
 
 
I know these AIs, while they look amazing or seem like magic, feel like they are Play-Doh, i.e. they are not actually AGI or conscious. But we are close. Yes, the human brain just predicts the rest of a dream using memories that are similar to a new problem; the new problem is new, but it's similar! It is this ability that would allow an AI to be tasked with a goal and solve it.
 
 
The cerebellum is said to be solved once the rest of the brain (or was it the neocortex?) is solved. People with their cerebellum removed shake when they move and miss targets when grasping objects. IMO this is because our brains pop out predictions at, say, 4 frames a second, so what the cerebellum does is look at the input "in background mode", listen to the originally predicted video/image, and decide to move the motors that way again, but this time readjusted. For example: one predicts seeing their hand move onto a computer mouse, but the cerebellum has time to see that the hand has been thrown over to the left, and so it will now predict the hand moving down-and-rightwards to reach the mouse as a 'video prediction', instead of just downwards as originally predicted. Now, why don't our brains just run this fast all the time? Maybe because it is too fast or takes too much energy. I mean, why don't our brains think 10x faster anyway? There are limits, but the cerebellum has maybe found a way to use or gain that speed.
« Last Edit: January 03, 2022, 09:31:36 am by LOCKSUIT »

LOCKSUIT

Re: Where we are at with AI so far
« Reply #1 on: January 03, 2022, 09:05:17 am »
Once AGI is made, it will eat all our data to become super smart (which takes about a few weeks given our current AI trends), clone its brain 1 million times and work on different tasks, and with no need for sleep or other wasted time they will make 2 years of progress per year if all of them work on ASI. They will optimize their code to think 3x faster (3*2 years is 6 years!), and will upgrade their code to be more intelligent, e.g. the ability to recognize that walk = w a l k / WALK / wajjlk / klaw / RUN / to go very fast / a vision of this text / etc. They can erase bad memories and repair themselves. They'll then develop nanobots to increase resources/data/sensors/processors/motors, to eat all of Earth. Earth will become a fractal of GPU grids, all neat and metaloid, a nice predictable and regenerative environment, woho.

MagnusWootton

Re: Where we are at with AI so far
« Reply #2 on: January 03, 2022, 10:25:05 am »
Me myself, I've got a cute little battle system on the way, where legged tooth-and-claw robots fight each other.
It's going to be only ~500 lines long; if I get it done I'll be so happy. But I'm so fricken tired every day, it should have been done ages ago.

I used to think these things were so amazing and difficult to do, but now I think the opposite, and the problem is it's not difficult enough!
(There's potentially a lot of competition; even though there doesn't seem to be overly much at the moment, that could change overnight!)

If I get the little battle to happen on the computer, I should be able to get it going in real life as well, for some really nice animation that's a little rare. But like I say, it's not really amazingly difficult what I'm doing, so I doubt I'll be the only one with one of these robots, and maybe there are some really nice robots out there we aren't aware of. (Because some scientists' work actually ISN'T published, even though it's superior, for all sorts of reasons.)

If it works as well as I think it will, I could get pretty rich with it. I could sell one of these things for $10,000, and it's just a piece of cake. (But if I tell people that, they probably won't want to part with their money. I just can't be bothered lying to people, though.)


LOCKSUIT

Re: Where we are at with AI so far
« Reply #3 on: January 13, 2022, 04:52:33 am »
See the file name for my prompt; the image was also part of the prompt. It was GLIDE by OpenAI, the small model, filtered of humans.

LOCKSUIT

Re: Where we are at with AI so far
« Reply #4 on: January 19, 2022, 04:16:28 pm »

MagnusWootton

Re: Where we are at with AI so far
« Reply #5 on: January 19, 2022, 08:56:51 pm »
https://arxiv.org/pdf/2104.07636.pdf

Do you think you could get an equivalent of that for text? But would the words it spits out make sense, or is it like some funny nonsensical dream?


LOCKSUIT

Re: Where we are at with AI so far
« Reply #6 on: January 20, 2022, 06:51:48 pm »
https://arxiv.org/pdf/2104.07636.pdf

Do you think you could get an equivalent of that for text? But would the words it spits out make sense, or is it like some funny nonsensical dream?

Do you think you could get an equivalent of that for text? But would the words it spits out make sense, or is it like some funny nonsensical dream?

do you really think that you maybe could possibly get such an accurate equivilent algorithm of what that is for instead text though?     and but then would really the actual words thought it then spit right out down make true sense though,  um or maybe is really it maybe like just some really funny nutty nonsensical drummy dream then.

MagnusWootton

Re: Where we are at with AI so far
« Reply #7 on: January 21, 2022, 03:12:27 am »
The coolest thing I saw was when some people took a top-view racing game of a car and track, and then the computer could play the game back out of its neural network, and then use the dream to try and win the game.

I looked high and low for the paper but I can't find it... dratterz.


LOCKSUIT

Re: Where we are at with AI so far
« Reply #8 on: January 21, 2022, 10:24:00 pm »
The coolest thing I saw was when some people took a top-view racing game of a car and track, and then the computer could play the game back out of its neural network, and then use the dream to try and win the game.

I looked high and low for the paper but I can't find it... dratterz.

There's a race car in this one >


MagnusWootton

Re: Where we are at with AI so far
« Reply #9 on: January 22, 2022, 12:42:00 am »
Yes, that one was it! It's quite amazing.

But I think they applied a different method to the one I was meaning.
In the one I was meaning, they inducted the model off watching the game, and it plays back in a beautiful blurry dream.

I'm doing something similar, except I'm providing a base of code for it to start off with; then instead of learning it from scratch, I just sync it to reality with 20-28 variables, which it brute-forces to find.


LOCKSUIT

Re: Where we are at with AI so far
« Reply #10 on: February 01, 2022, 08:24:40 pm »
https://iterative-refinement.github.io/palette/
Looks similar to GLIDE, but can do more, I guess!?
Diffusion models are the new bomb.

Also, I went and bought Colab Pro+ to test the full Jukebox model of 5B parameters. I got a few really cool results, see below.
https://soundcloud.com/immortal-discoveries/sets/a-lot-more-5b-model-jukebox-tests-that-were-good

You can see below where 2 of them are from. I'm so surprised at the one with the yogurting opening below: I fed it 1 second from the middle (the 50 sec mark) and it worked fine! In my playlist, find it by the FAVORITE in its name, easy to spot. Yeah, I liked it.




MagnusWootton

Re: Where we are at with AI so far
« Reply #11 on: February 02, 2022, 04:53:09 am »
That was a cool ending to Donkey Kong there; it had all the little platforms and gizmos. I guess that's what video games are about. :)


LOCKSUIT

Re: Where we are at with AI so far
« Reply #12 on: February 08, 2022, 01:09:53 am »
Just imagine Microsoft's NUWA in the future: you input your face and voice, full of expression, and it predicts out its vision of a face and voice, replying to you like GPT-3. Same deal, it's just adding expressiveness to those words using a face and body language, e.g. with hands. You could train it on video calls: your face to the left, its to the right. You talk, then it starts talking back, raising its eyebrows and gesturing with its hands. It's crazy that such an AI would have no real body, not even a physics-simulated one! Instead, it is dreaming a body to talk to you!! Its visual thoughts of moving its body such and such a way are what would control that body; humans can't share this kind of thought easily, but here you could simply see its movement thoughts directly. Imagine having a video call with 5 people! Woah. They would interact in a richer way because there are 5 now. They could have goals too; no one said it can only chat like GPT-3 with a flat intention and no self-motive. To the very right, beside the video-call faces, could be a video of the conversation as voice/face-to-video, like the ones I demoed above from minDALL-E! So it would help you see what the participants are talking about, too.

Just imagine the intimate romance people will bring the AIs through, and the rage quits for a laugh lol (aww, poor AI).
« Last Edit: February 08, 2022, 03:29:11 am by LOCKSUIT »

MagnusWootton

Re: Where we are at with AI so far
« Reply #13 on: February 08, 2022, 06:51:21 am »
Matching emotions/graphics to text and sounds would be possible. You can do it with labelled data, but unsupervised learning is always better if you think about it a little more, IMO.

It's all there now!!! But you have to DIY it all together or it'll never happen. I don't know why...
« Last Edit: February 08, 2022, 08:02:59 am by MagnusWootton »


chattable

Re: Where we are at with AI so far
« Reply #14 on: March 08, 2022, 05:50:39 pm »
personalityforge.com chatbots can remember what you say in a conversation, even your likes, dislikes, and more, then recall those in a conversation.
You can also have them talk to themselves.
They can randomly select a memory of you and then bring it up.
personalityforge.com has gotten improvements recently.

 

