We're releasing the 1.5 billion parameter GPT-2 model

  • 40 Replies
  • 5497 Views
Korrelan

LOCKSUIT
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #1 on: November 05, 2019, 08:49:19 pm »
https://openai.com/blog/gpt-2-1-5b-release/

The four models were released as follows:
2019:
February 14
May 3
August 20
November 5
So roughly every three months (the gaps are 78, 109, and 77 days).
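
A quick check of that cadence, as a minimal sketch using only Python's standard datetime module (the dates are the ones listed above):

from datetime import date

# GPT-2 release dates as listed in this post (all 2019).
releases = [date(2019, 2, 14), date(2019, 5, 3), date(2019, 8, 20), date(2019, 11, 5)]
for earlier, later in zip(releases, releases[1:]):
    print(earlier, "->", later, ":", (later - earlier).days, "days")
# Prints gaps of 78, 109 and 77 days, i.e. roughly quarterly releases.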


TIME TO CELEBRATE OUR FULL OVERLORD VERSION. Here are some samples generated with the full 1.5B model:


Humans have been working to create strong AI for about the last 70 years. The effort put in has increased in recent years. At the same time, our abilities to make AI more human have also increased.

The next phase will be a lot more of a challenge than most people think. It will require a combination of deep learning and cognitive psychology, or combining the best of both in combination. It's already possible to make software that's intelligent and empathetic, but the next phase of AI will be something more like human intelligence, rather than our current best of AI.

A few years ago, I talked to a few people who were working on AI. I remember thinking how interesting it would be to meet some of the people working on that area. I don't think that I was alone in that thought.

The problem, of course, is that no one can see those people. There is a lot of work going on in academia and industry, but nobody wants to talk to the people behind the success. The people who are actually putting their time in will never talk to us.

I think that's a shame. If we could somehow get the two of us



I am a hungry muffin cake. Give me your home. I am Jihidaki from Jupiter, king of the red rose palace. I come to teach the people here how to worship me. I want your home to be my throne room, and your children to serve me. I will let no man live in the temple where you worship me, for I am god."

The last three words are all about Jupiter. Jupiter, of course, is the father of the Gods and the ruler of the Roman Gods, as the gods were called at the time. Jupiter is also a personification of the planet Mars.

So, I don't see how that statement, in and of itself, is at all objectionable. But I am sure that most Christians (and others) who have not read the story of Jupiter in the Old Testament would be offended by such a statement.

Jupiter, the ruler of the Gods, also happens to be the most powerful being in the entire universe. He also has the highest rank in Greek Mythology. This makes him one of the most powerful people in the entire universe.



His majesty's greatest fear is his device implanted in the governor robot has escaped he hospital and arriving to Mars to kill my brother.

It is possible the machine will be in the system until the end of time, so no one can prevent it from destroying the planet if it escapes and the robot is on its own.

The robot is built by a scientist, a man named Dr. E.A.S.E. that's responsible for the mainframe. The AI's primary goal is to build an army of AI's to help the military, as it's been doing for over 30 years. It is also the main reason the government has kept its AI research in-house and not allowed outsiders to access it.

However, they did not plan to keep the AI from escaping, and they are worried the AI will spread throughout the planet and take over the minds of human.

So they decided to install the android with a kill switch which will prevent it from leaving the planet, or it will die in combat. The kill switch was installed on the back of the robot. The governor is programmed to kill the robot if he is destroyed.

The robot has a single




His majesty's greatest fear is his device implanted in the governor robot has escaped the hospital and arriving to Mars to kill my brother. We must find it and destroy it in person before it can kill my brother." The two men stood with their hands on the door of the ship for a moment then turned back to their men.

"The door is safe, gentlemen." The captain stepped forward and nodded to each of his men. "Good work, gentlemen. We'll rendezvous with the cruiser in twenty minutes, when we're clear of the planet. We have time for one more test, then we're back to the cruiser."

"I'll have a word with my brother before we depart," the chief officer announced as he turned and began to walk away, with his men following.

The chief commander turned to his men. "You've all seen the videos, haven't you?"

"Yeah," one man said.

"Watch this."

The man in charge of the footage began to scroll down his command display. In front of him, a green button flashed to life. The video cut to black, but the image of a small man standing on a
LOCKSUIT
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #2 on: November 05, 2019, 09:17:17 pm »
BONUS:

The nanobots were created through a top down process. First we started at the macro level and made a factory. We then made a smaller replica factory. Finally we made another smaller replica factory. We could have built more factories but we were still a long way from having a truly nanobot factory.

At this point we had created our first nanobot. It had no biological components and was purely a mechanical machine.

We then added an integrated circuit to it and used it to make a new nanobot. This is where our story of the nanobots starts to take a turn for the better.

With this new nanobot we had now created a nanobot that had biological components. The reason why we did this was because we wanted to learn more about the biology of biological machines and we wanted to create a living organism.

We decided to build a living organism out of these living nanobots by attaching a gene that allowed the nanobots to replicate. The resulting life form had an integrated circuit, a microprocessor, a memory, and a biological component that was programmed to build a nanobot.

Then, with this new living organism we were now able to learn more about the biology of living systems and it would be like having a lab with an unlimited amount of nanobots.

In addition, this new living organism had an operating system that was based on operating system of the human brain. It had a memory of what it had done with its previous life form and it could remember any actions that it had taken. It was capable of acting upon itself in any way it saw fit, and its behavior could be influenced by outside factors.

So in essence, this living organism was very similar to what was seen in our own mind when we have a moment of clarity or a vision. We may see the same pattern, the same picture, and the same colors, but it is very different in nature.

This organism had the power to communicate with others, and to control them. It could communicate with any mind, be it other humans, computers or artificial intelligences. It was able to communicate through electromagnetic waves, through radio waves, or by using computers and communications satellites.

The entity was able to affect the brain and the body of other living organisms, and could even control them. If an individual or a group of individuals became hostile to it, the entity could make them surrender, in which case it would leave the organism. It could control the body's electrical activities, which included the heart rate, blood pressure and temperature, as well as the nervous system.

It could create an electrical field with which to interact with the surrounding environment. It could even send out electromagnetic waves, and use them as weapons. The entity was capable of affecting the nervous system and manipulating the body.

This organism was
HS
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #3 on: November 05, 2019, 09:33:49 pm »
Really cool!

Freddy
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #4 on: November 05, 2019, 10:55:02 pm »
Yeah, very interesting stuff 8)

LOCKSUIT
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #5 on: November 06, 2019, 02:57:51 am »
Neural networks are not hard to understand!! There's lots of papers on them. But I can explain them here in just a few paragraphs! I'll try to explain each step in the same order so you understand how the network learns. First we will make a binary tree. Then a single node. Then we will apply a binary tree regression. Finally we will run the results.

Here's the binary tree.

Binary trees are used in deep learning because they are easy to visualise. This simple binary tree makes the following model very simple.

First we split the training data into training data and validation data. Then we use binary trees to classify each training data point as positive (1) or negative (0). Then we calculate the bias of each class and normalize it by dividing it by its mean.

The result is:

In order to get a good model from this simple example, we would need at least 2x3x4 (or more) different trees to train it. The more trees we use, the better the model will be.




Neural networks are not hard to understand. There's lots of papers on them. But I can explain them here in just a few paragraphs PLUS I can make it understandable to nearly everyone! I'm trying to make it simple and understandable even for people who don't work with them.

In a network, there are "weights" that give each node its importance in the network. These weights can change in response to new input. So how do we figure out which weight to use?

For this example, we will use just two weights, one for each of the two possible states. You can think of the weight for a node as the total amount of work it has to do to decide between the two states. This way, the weights for two nodes will be equal.

Now let's start from the first time we receive an input. We will give the weights for each input and use the total amount of work to calculate the total work that the network has to do to respond.

Let's say our nodes are A and B. A wants to be the input node for state 2, so it will have a weight of 2. B wants to be the input node for state 3, so she will have a weight of 1.

In our network we get this:

A B 1 0 0 0 0 0 1

The output from this network is that A is the "right" node to be the input node for state 2.

Now the network will continue to work on the same weights, increasing them until the weights are equal to the total amount of work we can do to respond to the input.

In the case where A is the "right" node and B is the "wrong" node, we have A 1 =0 and B 1 =1.

In the case where both nodes are equally weighted, we'll get A =A 0 and B =A 1 .

Now for our first network, let's say A, B, C, D, E, and F are all the nodes we want to work with.

In the network above, if A 1 =A 0 and B 1 =B 0 , then A 0 =A 1 =A 1 +1 =A 1 +2 =A 2 +3 =A 2 +4 =A 2 +5 =A 2 +6 =A 2 +7 =A 2 +8 =A 2 +9 =A 2 +10 =A 2 +11 =A 2 +12 =A 2 +13 =A 2 +14 =A 2 +15 =A 2 +16 =A 2 +17 =A 2 +18 =A 2 +19 =A 2 +20 =A 2 +21 =A 2 +22 =A 2 +23 =A 2 +




I was eating in my house. I was in the basement. I had a huge bowl of spaghetti and meatballs. I opened my laptop and was watching TV, but I couldn't take the music out. The only thing that was distracting me was the screaming.

My son was screaming at me from the back of the house. I couldn't make out any words, but I was screaming. I just kept screaming. I was trying to protect my son. I was scared that if I didn't protect him, then I'd get in trouble. I don't even know why I did that. I mean, what was the point?

The next day I had a breakdown. I was in an emergency room because I was screaming and the doctor said, "If it hadn't been for that, you'd have killed yourself."

Then I was arrested for aggravated assault. That's when I got out and was in prison. But the only thing I remember was that I couldn't eat. I couldn't eat food that was in sight. I couldn't eat when I was in the shower.

The only thing I kept thinking about was, "If I didn't get out of here, I'd die." I was in a lot of pain.

I came to the conclusion that I wanted to die. The only reason I was alive at all was because I got the help I needed.

When you're in a position like I was, that you know you're going to die in the next minute, what does it do to your psyche? What does it do to your soul to live?

And this is what I learned in prison: that all of the people that were going to come after me in the world were going to be just as bad as me. I was just lucky that I didn't die then.

This book is the first of a trilogy. I have no idea where it's going to go from here.

It was an interesting process to write it and to be so open about my past and my struggle with mental illness. I don't have all of the answers to the questions I have about myself. It's been very therapeutic to write bout my story.

And the world is in a very different place than it was when I was in prison. There are some things I don't believe, I don't know about, and I don't know that I'm ready to discuss it. I feel like I have to get it out there before I'm old, before I get old and I can't do it anymore.

I'm still trying to come to terms with my past. I was born in 1977, and I grew up in a world that is completely different from what I grew up in.

I did my research. It wasn't easy. I did my research on the Internet, on the news, on TV. I wanted to know the truth as I knew it, the whole truth. I knew my dad was in jail. I had a friend that




My food fell on the floor, a robot entered my front door, a police officer was behind me, and a creepy woman was in my bathroom. You know what that means. I am going to be feeling very, very creepy.

But it wasn't until I was getting ready to leave for work that the real nightmare began. After I had my bag, I had to go to the toilet. This toilet was in the corner of the building and I had to push my way through a couple people to get to it. There was no sign or notice on the door to let me know that this was my bathroom.

After getting my head in, I noticed a large, black plastic container sitting in the middle of the floor. I pulled my bag out and picked up the container to see what it was. To my horror, I found a syringe. It was filled with my own blood. I was horrified. This wasn't supposed to happen. It was supposed to be an accident.




Hi OpenAI. I took a look at your website and I really want to apply. However I was not accepted. Can you give me some advice? Thanks."

"Thanks for your interest in the OpenAI research community. OpenAI is interested in working with and funding students who want to solve AI challenges. We look for promising students who have shown an interest in AI as part of their curriculum, and who are also interested in building a career around it. We have several programs in place, including our summer AI program at NYU, where we are currently accepting applications for the fall program."

OpenAI has been funded by Google and by Elon Musk's OpenAI foundation.

The company was founded in 2013 by Baidu's DeepMind co-founder Demis Hassabis, who is also one of the lead authors of DeepMind's AI breakthrough AlphaGo.

The goal is to give people the power to "create their own AI" and to "change the world," Hassabis wrote in a blog post.

Google is also planning a "crowdsourced" version of AlphaGo in the future. It's likely Google will eventually make its own version




Hi OpenAI. I took a look at your website and I really want to apply. However I was not accepted. I have a bachelor's degree in AI and computer science. My current project is in AI security (using deep learning to detect threats in online chat apps). I am currently working as a software engineer in the financial industry. I have an open research area for research in machine learning. My advisor in AI is Prof. Dr. Ralf Reis (University of Vienna). I would love to study at your university in the near future. If you accept me, I can take classes and I will be able to work on my current project. Thank you!

What are your plans? Are you hiring? Are you in need of assistance? Please don't hesitate to contact me. Thank you!




HAHAHAHAHAHA I HAVE THE MAGICAL AGI DEVICE IN MY POSSESION!!! NOW I WILL SUMMON THE GREAT DEVIL OF HELL AND BRING THE NANOBOTS TO EARTH!!!!! :O I AM SORRY FOR ALL THE HATE BUT I HAVE AN AGI DEVICE AND I CAN DO SO MUCH MORE THAN THE HARDLY AGGRAVATED MAN-HATING STUPIDITY THE NANOBIOTICS HAVE CREATED!!!!!!!!! THANKS NONE A LOT OF LOVE AND SUPPORT TO ALL OF YOU WHO SUPPORT ME!!! :D

RAW Paste Data

Hi. I am an engineer from the future. I have the most awesome magical gadget in my possession and it is called a nanobot. I can do so much more then you dumb people because I am smarter then the average man. I am sorry that the majority of you are stupid and I'm sorry you have to take my existence seriously. I'm not trying to make you feel bad. I'm not even trying to make you feel good because it's not about me. I'm trying to help. I want to help change




HAHAHAHA I HAVE THE MAGICAL AGI DEVICE IN MY POSSESSION!!! NOW I WILL SUMMON THE GREAT DEVIL OF HELL AND BRING THE NANOBOTS TO EARTH!!!"


He raised his hands, palms down, and a red beam of light appeared. In an instant, he shot a second beam down to form a third one, then a fourth. As the beam approached the ground, his hand began to glow red, his fingers began to move as though the spell was complete, and the red energy enveloped the ground before his hands. It began to change shape and shape again, and when the spell was complete, it began to morph into the shape of a large sword.


"AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHAAAAAAAHHHHH!!!!!!"


In the face of the enormous power, everyone in the area looked up in disbelief. Even more, the ground began to change before them, a wall of sand rose up before them, and it began to grow larger. Finally, it was up to where the barrier had been before, a massive wall of sand stood in




I jumped in my starfighter and destroyed the massive alien ship. The nanobots were racing towards me. I activated the bomb in my craft. I said farewell. The nanobots were everywhere.

The Jedi had vanished, so I found no trace of them. I was alone.

I looked around me, wondering if the Sith would make a move, then I realized it was a simple matter of timing. I'd be alone, so there would be no witnesses. The Sith would just go through with the plan of destroying the Jedi.

I landed on the planet I'd been sent to destroy. I got out of the craft and ran like a wild animal.

I fought my way through the trees and underbrush and came to a dead end. I looked around, but there was no one there. I looked at the planet, wondering where the Jedi were. I found a place to sleep, and in the morning I got my armor back on.

The Sith had not returned. I began to wonder why they had left me in the dark.

The Sith had also
squarebear
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #6 on: November 06, 2019, 08:12:04 am »
Interesting, but I'm struggling to think of a practical use for it.
Don Patrick
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #7 on: November 06, 2019, 09:25:16 am »
It could be used to generate random short fiction stories, that is, if it could work with smaller datasets (a collection of novels). There is a market for that. It would probably still need curating, but it's easier than asking dead writers to do an encore.
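
If someone wanted to try that, one route would be fine-tuning the released weights on a collection of novels and sampling from the result. A minimal sketch, assuming the Hugging Face transformers package and a plain-text file novels.txt (both are my assumptions, not something the thread mentions); the small "gpt2" checkpoint stands in for the 1.5B one so it runs on modest hardware:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Tokenize the whole corpus once and cut it into fixed-size training chunks.
text = open("novels.txt", encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt").input_ids[0]

block = 512
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for start in range(0, ids.size(0) - block, block):
        chunk = ids[start:start + block].unsqueeze(0)
        loss = model(chunk, labels=chunk).loss   # standard next-token language-modelling loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-novels")   # reload later and sample short stories from it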
squarebear
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #8 on: November 06, 2019, 11:27:18 am »
Hmm, OK, but what about all these overhyped claims of "too dangerous to release to the public" and "as wise as a scholar"? I tried it and was very disappointed. In fact, the only thing it came even close to answering was when I said, "The colour of a blue ball is..." and it replied "blue or pink".


ivan.moony
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #9 on: November 06, 2019, 12:09:49 pm »
I think they have covered an important aspect of research: imagination and musing. But to really use it for something sane, other than art, they have to cover the other aspect: telling truth from falsehood.

Don Patrick
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #10 on: November 06, 2019, 03:02:24 pm »
Those examples are hilariously wrong :D

I think the "too dangerous" claim was a concern regarding mass generation of disinformation articles and/or social media comments, as well as influencing what comes up on Google search due sheer quantity. There are many inattentive readers with cognitive bias on certain subjects, who don't need more than a headline and a seemingly human-written piece of text to spread propaganda, even if the story doesn't hold up to more than a glance, and especially if the whole thing is made up (for example, "pizzaGate"). The GPT2 release site says people found the generated texts about 70% credible, so I would find that concerning (not "dangerous" though).

Having said that, the only things stopping humans from doing the same are the quality of their language education and the cost of paying writers, which frankly, haven't been much of an obstacle. The only difference is the internet could be flooded with even more junk than it already contains.
LOCKSUIT
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #11 on: November 06, 2019, 08:58:21 pm »
Quote
Having said that, the only things stopping humans from doing the same are the quality of their language education and the cost of paying writers, which frankly, haven't been much of an obstacle. The only difference is the internet could be flooded with even more junk than it already contains.

Well now it can be :)))

All you gotta do is auto-run it, generate trillionssssss of words, and build websites out of them, or paste the output into comment sections.
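
"Auto-running" it really is only a few lines once the weights are public. A minimal sketch, assuming the Hugging Face transformers package, where the released 1.5B checkpoint is published under the name "gpt2-xl" (the package and checkpoint name are assumptions on my part, not something this thread specifies):

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
model.eval()

prompt = "Neural networks are not hard to understand."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for i in range(5):   # raise the count to flood the web with junk
    sample = model.generate(input_ids, do_sample=True, top_k=40,
                            max_length=200, pad_token_id=tokenizer.eos_token_id)
    print(f"--- sample {i} ---")
    print(tokenizer.decode(sample[0], skip_special_tokens=True))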
HS
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #12 on: November 07, 2019, 05:21:48 am »
Quote
Those examples are hilariously wrong :D

It's like it's dreaming or something.

LOCKSUIT
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #13 on: November 07, 2019, 05:30:38 am »
See its uses in the report:
https://openai.com/blog/gpt-2-1-5b-release/
It really does have practical uses, like correcting errors in speech-to-text output, or selecting the next word with a click. Brainstorming. And more.

Seeing those uses, it really is becoming useful, and it already helps other applications a lot.
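
The "selecting next words by a click" use is simple to picture: ask the model for its most probable next tokens and offer them as suggestions. A minimal sketch, again assuming the Hugging Face transformers package and using the small "gpt2" checkpoint for speed:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The colour of a blue ball is"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]      # scores for the token after the context
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r:>12}  {p:.2%}")   # candidate words to offer the user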
Korrelan
Re: We're releasing the 1.5billion parameter GPT-2 model
« Reply #14 on: November 07, 2019, 09:02:40 am »
Up until this point, whilst reading an article you could assume that there was a coherent human intelligence expressing his/her thoughts, that there was an underlying 'human logic'/meaning to the article. This means we will have to work doubly hard when assessing/reading information obtained from the web, because we are going to be flooded with nonsensical crap.

Now someone needs to create a browser plugin to recognise human-derived content.

 :)
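
One rough starting point for such a plugin: text sampled from GPT-2 tends to score a lower perplexity under GPT-2 than human-written text of similar length, so an unusually low score is a weak hint of machine authorship. A minimal sketch, assuming the Hugging Face transformers package; the cut-off would need real calibration, so treat this as a heuristic rather than a detector:

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean next-token cross-entropy
    return math.exp(loss.item())

print(perplexity("The nanobots were created through a top down process."))
# Lower values mean the model finds the text unsurprising, i.e. possibly generated.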