DALL-E 2

  • 6 Replies
  • 3717 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
DALL-E 2
« on: April 06, 2022, 04:16:49 pm »
Emergent          https://openai.com/blog/


frankinstien

  • Replicant
  • 642
    • Knowledgeable Machines
Re: DALL-E 2
« Reply #1 on: April 06, 2022, 04:44:40 pm »
My question is: how does it do the photo editing? Has it learned to use something like Photoshop's image-processing operations, or does it do the blending of different images itself? I mean, it's pretty easy to just build up a database of labeled images and blend them with a text command.
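The naive "database of labeled images, blended on a text command" idea could be sketched like this (a toy illustration only; `IMAGE_DB` and the pixel values are made up, and real images would be full arrays rather than short lists):

```python
# Toy sketch of the naive approach: look up labeled images in a
# database and average (blend) the ones whose labels appear in the
# text command. Images here are flat lists of grayscale pixel values.

IMAGE_DB = {  # hypothetical "database of images with labels"
    "teddy bear": [200, 180, 160, 140],
    "egypt":      [240, 230, 90, 80],
}

def blend(command):
    """Average the images whose labels appear in the text command."""
    hits = [img for label, img in IMAGE_DB.items() if label in command.lower()]
    if not hits:
        raise KeyError("no labeled image matches the command")
    n = len(hits)
    return [sum(px) / n for px in zip(*hits)]

result = blend("teddy bears shopping in Egypt")  # blends both entries
```

This kind of lookup-and-blend produces a crude double exposure, which is exactly what DALL-E 2's coherent compositions are *not*: the model generates a single consistent scene rather than overlaying stored pictures.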


MagnusWootton

  • Replicant
  • 634
Re: DALL-E 2
« Reply #2 on: April 06, 2022, 04:57:46 pm »
It's hard to know what you're looking at, because it's so easy to fake and a lot of people would want to do that.
I think it is amazing, but because of all the lies and bullshit out there, some people are just never going to believe it, even if these new graphics nets are fully amazing!


LOCKSUIT
Re: DALL-E 2
« Reply #3 on: April 06, 2022, 05:28:33 pm »
Quote from: frankinstien on April 06, 2022, 04:44:40 pm
My question is how does it do the photo editing? Has it learned to use something like photoshop imaging processes or does it do the blending of different images itself? I mean it's pretty easy to just build up a database of images with labels and just blend them using a text command.

I'm halfway down the page so far and I must say it is massively impressive. The text-to-image power of it is insane: a human making the two teddy bears shopping in ancient Egypt would take a long time, and DALL-E 2 does it in probably a few seconds. If you tried making that picture in 2D in Photoshop, you'd be stretching a robe around the bear's body and so on, or drawing one from scratch; think about it, it'd be hard. Same for 3D-rendering the scene. And beyond the human ingenuity we can somewhat compare it to, what matters is that it understands; it's becoming "AI", it's dreaming.

The similar-images section they showcase would also take a human a long time.

The image editing is maybe a bit simpler, but notice that it still decided to put the pool-intended flamingo outside the pool area in that image, so I think it was thinking there too. The shadows and reflections, and even the environment (e.g. whether it is in lava or being shot at), affect what it is doing and what form the object takes. So in the end it's actually still a lot to work out, and it's amazing it can do all that.


frankinstien
Re: DALL-E 2
« Reply #4 on: April 06, 2022, 05:48:45 pm »
But still, is it using established image-processing code, or is the network doing the blending? I mean, the net could capture the object in the image and then blend it using open-source imaging code. So I'm not asking if a human being did it; I'm asking whether the neural network is calling imaging code, or actually doing the blending itself.


LOCKSUIT
Re: DALL-E 2
« Reply #5 on: April 06, 2022, 06:28:13 pm »
Quote from: frankinstien on April 06, 2022, 05:48:45 pm
But still, is it using established image processing, or is the network doing the blending? I mean the net can capture the object in the image and then blend it with imaging open-source code. So I'm not asking if a human being did it, I'm asking if the neural network is using imaging code, or is the neural network actually doing the blending itself?

Oh OK, I can answer this for you, actually. I had seen these AIs before and tested this in the OpenAI GLIDE demo code; it also says it right in the video demo on the page. It's inpainting. See those big grey blobs with the 1, 2, and 3 in them, where the flamingo goes? That's inpainting: you are the one who masks out the region to fill in, you say "flamingo" to it, and it just dreams up all of what "should be there". It was only trained to regenerate a masked part of an image and to associate text with images (or something like that; I'm just giving you a very basic idea).
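The masked-region trick can be sketched as a toy denoising loop: the model "dreams" over the whole canvas at every step, but the known pixels outside the mask get re-imposed each time, so only the masked blob is ever invented. This is a minimal illustration, not GLIDE's actual code; `toy_denoise_step` is a hypothetical stand-in for the real text-conditioned network.

```python
# Toy sketch of mask-based inpainting, as used by diffusion models
# like GLIDE. mask == 1 marks the grey blob the model must fill in.
import numpy as np

rng = np.random.default_rng(0)

def toy_denoise_step(x, prompt, t):
    # Stand-in for the network: nudge pixels toward a target that a
    # real model would dream up from the text prompt (prompt unused here).
    target = np.full_like(x, 0.8)  # pretend these are "flamingo" pixels
    return x + (target - x) / (t + 1)

def inpaint(image, mask, prompt, steps=50):
    """Known pixels (mask == 0) are re-imposed after every step, so
    the network only ever invents content inside the masked region."""
    x = rng.standard_normal(image.shape)           # start from pure noise
    for t in reversed(range(steps)):
        x = toy_denoise_step(x, prompt, t)         # network's full-canvas guess
        noise_level = t / steps
        noised_known = image + noise_level * rng.standard_normal(image.shape)
        x = mask * x + (1 - mask) * noised_known   # keep the known pixels
    return x

image = np.zeros((8, 8))                           # the original photo
mask = np.zeros((8, 8)); mask[2:5, 2:5] = 1.0     # where the flamingo goes
out = inpaint(image, mask, "flamingo")
```

After the loop, the region outside the mask matches the original photo exactly (the last step re-imposes it with zero noise), while the masked blob holds whatever the "network" dreamed up.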


LOCKSUIT
Re: DALL-E 2
« Reply #6 on: April 07, 2022, 03:33:04 am »