We're releasing the 1.5billion parameter GPT-2 model

  • 40 Replies
  • 1716 Views
*

Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 572
    • Artificial Detective
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #15 on: November 07, 2019, 09:37:52 AM »
Quote
All you gotta do is auto-run it and generate trillionssssss of words, and generate websites. Or paste to comment sections.
Yes, please don't do that XD

It's like it's dreaming or something.
Well, technically it's associating words with too-loose parameters, so... that's not a bad way to think of it in terms of usefulness.

I gave it a try too. It produced common topical statements, nothing too specific. There's a lot of repetition, and it easily contradicts itself as a consequence. It is impressive that it can gather/reconstruct sentences that are accurately on topic, but that's not a whole lot different from a search algorithm. There doesn't appear to be any narrative or contextual order beyond the preceding sentence.
So now I'm wondering whether one could recreate this just by copying sentences from Google search results, substituting topical nouns for their nouns, calling it "natural language generation", and winning some literary AI contests.
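That copy-and-substitute recipe can be sketched in a few lines. This is purely a toy illustration of the joke above: the sentence, the noun map, and the function name are all made up, and it is of course not how GPT-2 works.

```python
import re

def substitute_nouns(sentence, topical_nouns, noun_map):
    """Replace each known noun in a scraped sentence with a topical noun.

    `noun_map` pairs nouns found in the search-result sentence with the
    topic slot they fill; `topical_nouns` supplies the replacements.
    """
    for old, slot in noun_map.items():
        sentence = re.sub(r"\b%s\b" % re.escape(old), topical_nouns[slot], sentence)
    return sentence

# A sentence "copied from a search result" about one topic...
scraped = "The violin is the smallest instrument in the quartet."
# ...re-targeted to a new topic by swapping its nouns.
out = substitute_nouns(
    scraped,
    topical_nouns={"thing": "transistor", "group": "circuit"},
    noun_map={"violin": "thing", "instrument": "thing", "quartet": "group"},
)
print(out)  # → The transistor is the smallest transistor in the circuit.
```

Note that two nouns mapping to the same slot already produce GPT-2-flavoured repetition ("the smallest transistor... transistor") for free.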

Quote
The Turing Test is not really a test of intelligence for AI.

The reason AI is considered a "threat" to humanity is because it can outsmart us.

This isn't to say we should ban AI, it's just that we should look at what kind of threat it is to humanity before we ban it.

The Turing Test is not really a test of intelligence for AI. It is a test of intelligence, but not intelligence in the traditional sense.

The Turing Test does not assess an artificial being's ability to pass a Turing test. It assesses a Turing test, but it does not tell us whether the artificial being is smart. It only tells us whether the AI is trying. If the AI was trying hard to pass the test, it would pass it with flying colors.

When we examine the Turing Test from this angle, we can see that there are two problems with the test. First, and most obvious, it does not test for intelligence in the traditional sense, as defined by the IQ test. IQ tests are not a good way of testing for intelligence.

There are algorithms to detect algorithmically generated content, and if I recall they're surprisingly good at it, especially if the generating model is known. But yes, it's hard to tell this nonsense from human-written nonsense otherwise. If (emphasis on if) companies had an interest in keeping their internet clean, they could probably cut to the chase by looking at other factors than text, like repetitive user activity and CAPTCHA techniques.
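The detectors mentioned typically score how probable each token is under the suspected generating model (which is why knowing the model helps). As a stdlib-only stand-in, here is a crude repetition heuristic; it's my own toy, not any published detection method.

```python
from collections import Counter

def repeated_trigram_ratio(text):
    """Fraction of word trigrams that occur more than once.

    Machine-generated rambling of the kind quoted in this thread tends to
    reuse whole phrases, so a high ratio is a weak hint of generated text.
    """
    words = text.lower().split()
    trigrams = list(zip(words, words[1:], words[2:]))
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = ("Divide by 2. Add the result to the target. "
          "Check if the result is greater than the target. "
          "Divide by 2. Add the result to the target.")
print(repeated_trigram_ratio(sample))  # well above a typical human baseline
```

Real detectors combine many such signals with model probabilities; repetition alone would flag poetry and legal boilerplate too.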

Here's another typical contradiction. This thing definitely needs a tweak on its parameters.
Quote
"When we think about the next stage in AI, it's like a big leap forward," says Loebner. "And when you consider the next ten years in AI it's a big leap back."
CO2 retains heat. More CO2 in the air = hotter climate.

*

Don Patrick

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #16 on: November 07, 2019, 10:02:36 AM »
Quote
The GPT2 language generating algorithm by openAI is not very useful in practice, because it tends to repeat itself while contradicting.

However, the algorithm that I wrote for my GPT2 is a fairly good one, which makes the GPT2 algorithm very easy to learn and use. Here is how it works:

Create a random number. (If you don't have a random number generator you can use a sequence generator.) Divide the random number by 2. Add the result to the target. Check if the result is greater than the target. If yes, add 0 to the total. Divide by 2. Add the result to the target. Check if the result is greater than the target. If yes, subtract 0 from the total. Divide by 2. Add the result to the target. Check if the result is greater than the target. If yes, add 0 to the total. Divide by 2. Add the result to the target. Check if the result is greater than the target. If yes, subtract 0 from the total. Divide by 2. Add the

I'm starting to see a pattern here.

*

squarebear

  • Trusty Member
  • *********
  • Terminator
  • *
  • 797
  • It's Hip to be Square
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #17 on: November 07, 2019, 10:07:48 AM »
Quote
see its uses in the report:
https://openai.com/blog/gpt-2-1-5b-release/
I couldn't see any uses in that report. Is that the correct link?
While it's fun as a toy, claims of "too dangerous to release to the public" just add to the already overhyped world of AI. Anyone not familiar with the field of AI will read something like that and assume Skynet is coming. To be honest, I get similar results by repeatedly pressing the predictive text key on my phone.
Feeling Chatty?
www.mitsuku.com

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • ******************
  • Hal 4000
  • *
  • 4186
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #18 on: November 07, 2019, 07:42:22 PM »
The REPORT. Lol. Not that page, the report PDF at the top of that page.

I'm still glad you read that page again though, engraving the word into your noggin, always good to go back to the mountain and back again.
Emergent

*

goaty

  • Trusty Member
  • ********
  • Replicant
  • *
  • 552
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #19 on: November 07, 2019, 11:59:23 PM »
Thanks for giving it a workout, squarebear. It actually is more impressive watching it make the mistakes.

*

LOCKSUIT

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #20 on: November 08, 2019, 12:10:04 AM »
Omg, this thread... so backwards... I missed a PAGE yesterday, and others replied after others said they found the report, I think, and then goaty congratulates squarebear when I worked it out... let me re-read this thread LOL.

Ah. Squarebear, yes, it will sometimes repeat itself or contradict itself, the more text you analyze. The method to generate sentences is just like us: it pays some attention to key words and adds the likeliest word next. This results in repetitious phrases, just as humans write, e.g. "Squarebear won the prize now, and he won the good one. But the one is something now. He won it. But won it again. Then again, it again is won. Won it. It is won.".

Notice partsss of the sentence come up again, or the same word? :)  These partsss again come up again and again. Again. Partsss.
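The "attention to key words, then add the likeliest word next" loop described above can be illustrated with a toy bigram model. This is a deliberate oversimplification (GPT-2 uses attention over long contexts and sampled decoding, and none of this code is from OpenAI), but it shows why always picking the single top word produces exactly these loops:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def generate_greedy(nxt, start, n=10):
    """Always emit the single likeliest next word: cycles appear quickly."""
    out = [start]
    for _ in range(n):
        if not nxt[out[-1]]:
            break  # no known successor, stop generating
        out.append(nxt[out[-1]].most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("he won it again and he won it again and he won the prize")
print(generate_greedy(model, "he"))
# → he won it again and he won it again and he
```

Greedy decoding falls into the "won it again" cycle immediately, which is the same flavour of repetition as the examples quoted in this thread; real systems blunt this with sampling or beam search over much longer contexts.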

*

Hopefully Something

  • Trusty Member
  • *********
  • Terminator
  • *
  • 901
  • no seriously where are these cookies
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #21 on: November 08, 2019, 01:23:27 AM »
Yeah I think that's supposed to happen. Whatever you say, stays in your ram for a while, temporarily efficiensising the current subject. Also making you repeat yourself in terms of words, sentence structures, and thought patterns as a side effect.

*

squarebear

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #22 on: November 08, 2019, 08:12:20 AM »
Quote
The REPORT. Lol. Not that page, the report PDF at the top of that page.
Sorry, but I don't intend to read 70 pages for the answer to a simple question.

I'm struggling to find any justification behind the overhyped marketing BS claims of "too dangerous to release to the public" and "as wise as a scholar". I think I'll file this one under the same delusional crap as Sophia the Robot and Google Duplex.

*

LOCKSUIT

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #23 on: November 08, 2019, 09:08:10 AM »
If you click the link and click on the report and scroll down, you'll see a white box or similar holding its actual uses. No need to search through writing.

*

Don Patrick

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #24 on: November 08, 2019, 10:18:34 AM »
For crying out loud, just quote the report:
Quote
We have seen GPT-2 in particular used in the domains listed below:
Code Autocompletion
Grammar Assistance, Autocompletion-Assisted Writing
Creating or Aiding Literary Art, Poetry Generation
Gaming
Chatbots
Medical Question-Answering systems
I wouldn't advise any of these uses.

*

squarebear

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #25 on: November 08, 2019, 10:51:24 AM »
Finally, a straight answer. Thanks Don.

For crying out loud, just quote the report:
Quote
We have seen GPT-2 in particular used in the domains listed below:
Code Autocompletion
Grammar Assistance, Autocompletion-Assisted Writing
Creating or Aiding Literary Art, Poetry Generation
Gaming
Chatbots
Medical Question-Answering systems
I wouldn't advise any of these uses.

I assume this is a joke list, yes? It creates random text. In what way would that be advisable for answering medical questions or writing code? I couldn't even get it to answer questions that a 4-year-old would know. It certainly doesn't act as a chatbot.

*

Don Patrick

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #26 on: November 08, 2019, 02:21:27 PM »
It's a mix. The code-autocompletion example is serious about making a service out of it, but really just to predict the next word/function/parameter that you're typing. The poetry generation is the same garbage as there's always been, and the medical question-answering project says they were only trying it out and that people should NOT follow up on the software's answers.

One of the chatbot applications won the ConvAI contest with their chatbot, but they only used GPT-2 for pre-training language (probabilities) in general, then used other AI software (BERT) fed with general small-talk transcripts to produce more conversational results. So they didn't just plug it in as is.

https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313

There's a demo linked. It's... failing at all variations of "How are you" so far.
https://convai.huggingface.co

*

squarebear

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #27 on: November 08, 2019, 06:41:04 PM »
I just tried it and can assure you that this is certainly NOT state of the art.

Me: What is 2+2?
Bot: i'm a guy who is passionate about lots of things



It's amazing how brazen these companies are with their overhyped BS marketing claims.
« Last Edit: November 08, 2019, 07:09:48 PM by squarebear »

*

LOCKSUIT

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #28 on: November 08, 2019, 07:05:33 PM »
"
We have seen GPT-2 in particular used in the domains listed below:
Code Autocompletion
Grammar Assistance, Autocompletion-Assisted Writing
Creating or Aiding Literary Art, Poetry Generation
Gaming
Chatbots
Medical Question-Answering systems
"

Correct, the top and bottom are advanced, but it can already help us at both! Talking to GPT-2 generates on-demand knowledge for a field, even if it's not totally true (it's getting truer; better word prediction means better truth!!). Sometimes it copies word for word too, but not always, and you can tell. It is very useful for giving hints on long-standing questions you may have. So do try to use it!

As for the rest, it is already well suited for chatbotting, gaming, arts/fiction, grammar refining, or next-word ideas / voice-to-text prediction improvement, as these require less accuracy. Actually it's the same as medical; it just gives less value for now...



It is closer to AGI for sure. GPT-2 can answer 2+2 etc., anything, on some trials or with different formatting of your question, e.g. "two + two". It reuses answers and makes new answers if it needs to. It writes like us, knows word parts e.g. 'ing', uses attention, generates plans, etc.

*

squarebear

Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #29 on: November 08, 2019, 07:13:37 PM »
Quote
It is closer to AGI for sure. GPT-2 can answer 2+2 etc., anything, on some trials or with different formatting of your question, e.g. "two + two".

Nope. How should I format the question for this AGI?  ::)

two + two + two + one + one = six.

I see now why they said it was too dangerous to release to the public, as people may die of laughter at this ridiculous overhyped gibberish.  ;D

 

