Facebook creates the most ‘human’ chatbot yet.

  • 35 Replies
  • 6942 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #30 on: May 05, 2020, 07:13:01 pm »
Quote
If you constrict the data by setting a persona, you get more on topic responses, but it gets repetitive
Sorta like me? Or korrelan?

Quote
On the other hand if you loosen the restrictions and give it access to more data, you also increase the amount of garbage and generic responses that the program will drag in.
Sorta like HS? Sorry had to choose something.

Trust me, more data = more accuracy; I've been working with this stuff, I know... You also get more out of the same data if the AI can extract/mine it smarter/better.
Emergent          https://openai.com/blog/


Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 633
    • AI / robot merchandise
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #31 on: May 05, 2020, 07:39:48 pm »
Haha :D . I'm glad you have some self-awareness.

As a rule, I never trust people whose best argument is "trust me" ;) . I'm not 100% familiar with the terminology, but I do believe that both "overfitting" and "underfitting" are common issues with neural networks. It has also been proven by earlier research that the benefit of adding more data to statistical models decreases exponentially, which is why we now need ridiculously large numbers for minor improvements.
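A minimal sketch of those two failure modes (my own toy example in Python, nothing to do with Facebook's bot): fit polynomials of increasing degree to noisy data. Low degrees underfit (high error everywhere); high degrees overfit (low training error, rising held-out error).

import numpy as np

# Noisy samples of a smooth signal; first 40 points train, last 20 test.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)
xtr, ytr, xte, yte = x[:40], y[:40], x[40:], y[40:]

for degree in [1, 3, 9, 14]:
    coeffs = np.polyfit(xtr, ytr, degree)  # least-squares polynomial fit

    def mse(xs, ys):
        return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

    print(f"degree {degree:2d}: train MSE {mse(xtr, ytr):.3f}, "
          f"test MSE {mse(xte, yte):.3f}")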
CO2 retains heat. More CO2 in the air = hotter climate.


krayvonk

  • Electric Dreamer
  • ****
  • 125
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #32 on: May 05, 2020, 09:24:52 pm »
That overfitting thing is a little unintuitive... It makes more sense that the more data the merrier. I remember looking at some graphs, and it actually didn't mean the net responded worse; it's just that there's a nice way to peg a net off with a little data so it interpolates well. After that, the more that goes in the better the results you get, you just don't get as much out of each addition. (Little nets work well for the amount of data that was inserted.)

I could be wrong though, but it makes more sense that the more the better, like Lock says: the more the merrier.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #33 on: May 06, 2020, 06:10:16 am »
Hmm, you say there's exponentially less improvement the more data you add, and I said the opposite. If we look at just the last letter of our text story to get probabilities for the next letter, and add more data to the algorithm, it gets exponentially less improvement, yes. And looking at just 1 letter of context works well, but if you look at the last 2 letters, it's also exponentially less useful the farther back you look.
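A minimal sketch of that (my own illustration, not LOCKSUIT's predictor; 'story.txt' is a hypothetical stand-in for any large text file): predict the next letter from the last k letters and score it in bits per letter on held-out text. Each extra letter of context tends to help, but by less and less.

import math
from collections import Counter, defaultdict

text = open("story.txt", encoding="utf-8").read()  # hypothetical text file
split = int(len(text) * 0.9)
train, test = text[:split], text[split:]

for k in range(1, 5):
    # Count how often each next letter follows each k-letter context.
    counts = defaultdict(Counter)
    for i in range(len(train) - k):
        counts[train[i:i + k]][train[i + k]] += 1
    vocab = len(set(train)) or 1
    bits = 0.0
    for i in range(len(test) - k):
        ctx, nxt = test[i:i + k], test[i + k]
        c = counts.get(ctx, Counter())
        # Laplace smoothing so unseen contexts/letters keep nonzero probability.
        p = (c[nxt] + 1) / (sum(c.values()) + vocab)
        bits += -math.log2(p)
    print(f"context {k}: {bits / (len(test) - k):.3f} bits/letter")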

Now, GPT-2 looks at the last 1,024 tokens using better technology, but the same rule holds: exponentially less improvement comes from looking farther back. However, something very far back could tell us that the letter a million letters in the future is a q - and you might trust this person. If I'm working on a cure for cancer and have a 50GB text dataset of all my thoughts that is all about cancer, I can look farther back and consider it all, because it's all related. You simply access other, related memories when predicting the next letter.
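A toy sketch of that "related memories" idea (my own construction, with made-up strings; not how GPT-2 or any real system does it): rank stored sentences by word overlap with the current context and surface the most related one as extra evidence.

# Made-up context and memory store for illustration.
context = "the tumor shrank after the new"
memories = [
    "the tumor shrank after the new drug was administered",
    "my cat likes to sleep on the windowsill",
    "early trials of the drug reduced tumor size",
]

def overlap(a, b):
    # Crude relatedness score: number of shared words.
    return len(set(a.split()) & set(b.split()))

best = max(memories, key=lambda m: overlap(context, m))
print("most related memory:", best)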

So how far can you look back? A lot - all of it that's related, and all data is related. How much data should you add? A lot. While adding more samples improves the model's distribution accuracy by exponentially less each time, there's a huge number of features/structures in life that all relate, so in actuality you capture exponentially more relationships the more data you have. The curve first rises exponentially, then slopes back to flat - it's an S curve. So you do get a lot of free data to mine, and it does eventually halt once you learn the universe.
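The counting behind that claim, in a tiny sketch: n related facts admit n*(n-1)/2 pairwise relations, so the pool of relations grows roughly quadratically while the data itself grows linearly.

# Pairwise relations among n facts: n choose 2.
for n in [10, 100, 1000, 10000]:
    print(f"{n:6d} facts -> {n * (n - 1) // 2:12,d} pairwise relations")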
« Last Edit: May 06, 2020, 07:16:37 am by LOCKSUIT »
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #34 on: May 06, 2020, 10:35:32 am »
See images below. On the left is bytes trained on; ignore the right. My predictor algorithm vs. Green: amazingly, the charts are nearly the same! It appears the predictor gets better the more data I feed it, but the amount it improves by shrinks exponentially. But like I said, the curve should increase exponentially the other way instead - if the algorithm utilizes combinatorial relationships.

Wait a minute... the 3rd image... it is exponentially increasing... In image 3 I was curious: I first got the same curve as above to make sure my settings were the same, then made the last numbers on the right side (the compression sizes) exponentially smaller by hand on purpose, and the curve bends down more! Therefore images 1 and 2 are good news...
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Facebook creates the most ‘human’ chatbot yet.
« Reply #35 on: May 06, 2020, 08:35:11 pm »
I tested the best compressor on Earth as well (lol), and hmm, all 3 predictors give the same plot! Further, it fluctuates: if I plot 320,000 bytes of input the red line goes up! Then at 640,000 bytes it goes down again, like a crooked fishpole bent in various ways. Overall it's basically a 45 degree angle, yeah. Each doubling of the input size gives about the same compression, just a bit better than the last. Is that small slice growing or not? I'll check tomorrow below:

input bytes, compressed bytes
1250, 790
2500, 1563
5000, 3482
10000, 8354
20000, 16242
40000, 30595
80000, 57032
160000, 100927
320000, 206331
640000, 390287

Perhaps the plot is the same for all 3 because they improve by the same amount given the same amount of data. Which makes zero sense if it has word2vec capability!
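A quick check of that question from the numbers above (just the pairs posted in this reply): print the compression ratio at each doubling. A falling ratio means the predictor is getting relatively better with more data; here it rises to about 0.84 around 10,000 bytes, falls to about 0.63 by 160,000, ticks up at 320,000, and drops again at 640,000 - the crooked fishpole.

# (input bytes, compressed bytes) copied from the table above.
data = [(1250, 790), (2500, 1563), (5000, 3482), (10000, 8354),
        (20000, 16242), (40000, 30595), (80000, 57032),
        (160000, 100927), (320000, 206331), (640000, 390287)]
for inp, out in data:
    print(f"{inp:7d} -> {out:7d}   ratio {out / inp:.3f}")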
Emergent          https://openai.com/blog/

 

