GPT-3

*

infurl

Re: GPT-3
« Reply #15 on: July 31, 2020, 04:12:22 am »
https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

Here's another cogent article about GPT-3.

All those impressive demos that you're seeing? They're cherry-picked out of the numerous attempts that failed miserably. GPT-3 is no closer to anything that's actually useful than anything else that has been tried. However, it did waste a lot more electricity than anything that came before it.
« Last Edit: July 31, 2020, 05:07:13 am by infurl »

*

LOCKSUIT

Re: GPT-3
« Reply #16 on: July 31, 2020, 04:38:30 am »
I didn't realize until last week that GPT actually looks at text through different attention views. It's really closer to AGI than I thought. I thought it was just highlighting which words were similar =) "[Jen Cages] has a mom named [Beth Cages]". That can certainly cause it to repeat itself, but below I show what else those attention windows do.

For example, instead of predicting the next word by finding a similar match to, say, the last 10 words:
"and that [ape was sitting all alone in a cage and he] _"
it can predict the next word successfully by looking at just a few particular words, if the scenario only needs those:
"[Jen Cages] is a girl who [has a mom] named [Beth] *Cages*"
Notice the brackets [] are only needed on a few words... it works on many more sentences because of this.
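A rough way to picture those attention views is scaled dot-product attention, where a query can put nearly all of its weight on a few relevant tokens while the filler words get almost none. This is my own toy sketch, not OpenAI's code; the 2-d embeddings are hand-picked for illustration, whereas a real model learns them during training:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(query, keys):
    # scaled dot-product scores: one score per key (one key per token)
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Toy embeddings for the tokens of the example sentence, hand-picked so
# that the name tokens align with the query direction.
tokens = ["Jen", "Cages", "has", "a", "mom", "named", "Beth"]
keys = [[3.0, 0.1], [2.8, 0.2], [0.1, 1.0], [0.0, 1.2],
        [0.2, 0.9], [0.1, 1.1], [2.5, 0.3]]
query = [1.0, 0.0]  # roughly: "which tokens carry the surname?"

weights = attention_weights(query, keys)
for tok, w in sorted(zip(tokens, weights), key=lambda t: -t[1]):
    print(f"{tok:>6}: {w:.2f}")
```

In this toy setup most of the weight lands on "Jen", "Cages" and "Beth", while "has a mom named" gets very little, which is the behaviour the bracketed example above is pointing at.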

It also seems the new GPT-3 is a tad more accurate, being trained on more data of course, and it can be asked to perform a specific task more directly ("on-demand fine-tuning").
Emergent          https://openai.com/blog/

*

Don Patrick

Re: GPT-3
« Reply #17 on: July 31, 2020, 09:47:59 am »
I wonder if "predicting the next word" shouldn't be called "recalling the next word" from its training data, or where we can draw a line between merely swapping subjects and generalising. I think there is a difference, but they are similar in utility.

The other day someone posted a video of GPT-3 producing two SQL queries, with no further context. I thought it was suspect that they only showed two, and after locating the original Tweet, found the user saying that it made a lot of mistakes, but they didn't show those. Cherry-picking indeed. That is not to say it might not be useful to some degree as an autofill, but it does mean users would have to double-check its code suggestions or risk glossing over mistakes they wouldn't have made themselves.
https://twitter.com/FaraazNishtar/status/1285934622891667457

At the end of the day, I don't look at the results but at the underlying mechanism to tell me its inherent limits. I don't think it's possible for purely associative processes to reliably reproduce rule-based systems like math or time or physics. It seems every GPT version is just memorising more data to obscure its inherent incapabilities, raising accuracy without addressing the root problems. The problem I have with that is that this will always leave some edge cases, no matter how rare, that go completely off the rails. It's the difference between an AI that recommends a cure that works in 95% of all cases but kills the rest in spectacular fashion, and traditional doctors who recommend a cure that only works in 70% of all cases but only causes mild inconvenience for the rest.
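The associative-versus-rule-based distinction can be made concrete with a deliberately crude sketch (my own illustration with made-up data, not a claim about GPT-3's internals): a predictor that only memorises question-answer pairs looks fine on its training data and fails badly the moment a question falls outside it, while a rule generalises to every case:

```python
# Purely associative: recall answers seen in "training", nothing else.
training = {"2+2": "4", "3+5": "8", "10+7": "17"}

def associative_answer(question):
    # Returns the memorised answer, or falls back to a frequent answer,
    # with no sense that the fallback might be nonsense.
    return training.get(question, "8")

def rule_based_answer(question):
    # Applies the actual rule of addition, so unseen inputs work too.
    a, b = question.split("+")
    return str(int(a) + int(b))

print(associative_answer("3+5"))   # "8"  - looks smart on seen data
print(associative_answer("41+1"))  # "8"  - unseen edge case goes wrong
print(rule_based_answer("41+1"))   # "42" - the rule generalises
```

More training pairs shrink the failure region but never eliminate it, which is the edge-case argument above in miniature.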
CO2 retains heat. More CO2 in the air = hotter climate.

*

LOCKSUIT

Re: GPT-3
« Reply #18 on: July 31, 2020, 06:56:48 pm »
It is by far not just recalling; the way these things work, they don't simply recall, they mix, and use other methods. GPT is very advanced. I won't say it is complex though; it can be explained easily if you know your stuff.

GPT is only a step forward, calm down everyone; the next AIs will add what it lacks.

*

Don Patrick

Re: GPT-3
« Reply #19 on: August 01, 2020, 05:18:17 pm »
I am calm. You're the energetic one on this forum.

Here's an interesting test where someone tried to teach GPT-3 to distinguish nonsense. I think I see what metric it's using, but that metric would produce too many false positives on rare questions to be of use to me.
https://arr.am/2020/07/25/gpt-3-uncertainty-prompts
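The pattern in that article is few-shot prompting: seed the prompt with examples where nonsense questions get flagged, and hope the model imitates the pattern for the new question. A minimal sketch of the idea (the example questions and wording below are my own, not the article's):

```python
# Few-shot "uncertainty prompt": normal questions get answers,
# nonsense questions get a refusal, then the real question is appended.
examples = [
    ("How many legs does a horse have?", "Four."),
    ("How many rainbows does it take to jump from Hawaii to seventeen?",
     "That question is nonsense."),
    ("Who was president of the US in 1955?", "Dwight D. Eisenhower."),
]

def build_prompt(question):
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {question}\nA:")  # model completes after "A:"
    return "\n".join(lines)

print(build_prompt("What is north of the north pole?"))
```

Whether the model actually refuses depends on how well its completion follows the demonstrated pattern, which is exactly where the false positives Don Patrick mentions come in.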

 

