Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: LOCKSUIT on February 15, 2019, 09:07:31 pm

Title: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 15, 2019, 09:07:31 pm
https://news.slashdot.org/story/19/02/14/2029259/new-ai-fake-text-generator-may-be-too-dangerous-to-release-say-creators

Too dangerous to hand to the public. Remember that thinking, please.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: HS on February 15, 2019, 09:38:56 pm
Why remember the thinking? What do you think is going to happen?
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Art on February 16, 2019, 02:38:09 am
Fake text generator? Couldn't be too much worse than all that Fake News that's been generated for the past few years.

Soon, boys and girls, they'll be releasing/posting fake videos as the real thing, then you won't know what to believe! :-\
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Don Patrick on February 16, 2019, 08:43:32 am
Oh no, then we'll have to rely on education and critical thinking :o
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Korrelan on February 16, 2019, 12:34:50 pm
Their publicist obviously has a flair for the dramatic… and it gets clicks.

Breaking News… AGI indiscriminately kills millions…

Of soil microbes by placing its foot down.

 :)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 01:11:02 pm
You think it's legit and works though?
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Korrelan on February 16, 2019, 01:34:59 pm
I presume you mean the AI, not the click technique, lol?

I can see how a system like this would function. 

Written text is obviously related to the train of thought of the writer.

Given enough examples, a CNN could learn the patterns associated with an input script and produce countless similar scripts.

It's down to complex trend analysis, very similar to the mixed/morphed images produced by CNNs from a given input image.

There is no inherent intelligence though, it's just re-hashing the original script... still cool.

They have released limited datasets for peer testing so I see no reason to question the validity.
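To illustrate the general idea, here's a toy next-word sampler in Python; just a sketch of learning patterns from an example script and producing similar text, nothing remotely like the scale or architecture of the real system:

Code:
# Toy sketch: learn word-to-word transitions from an example script,
# then sample "similar" text. Purely illustrative; GPT-2 itself is a
# far larger neural language model, not this.
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words that followed it in the text.
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, seed, length=20):
    out = [seed]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

script = "the cat sat on the mat and the cat slept on the mat"
model = train(script)
print(generate(model, "the"))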

 :)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 01:50:29 pm
OPEN AI
becomes
CLOSED AI
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Art on February 16, 2019, 01:56:51 pm
There have always been plenty of those "AI" poetry generators and rhyming bots.

Not very creative but often worth a laugh or two at the end result.

I don't think AI is going to get a stranglehold on humanity just yet. Give it 50 years!


Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 02:06:53 pm
Here's a song I made when I was 17/18, six years ago. This is going to lead to an underground nuclear base run by doctors.
Picture by me.

https://www.youtube.com/watch?v=VX97z5zz0ME

Enjoy :)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 02:09:08 pm
Edit: Picture by me too.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 02:18:15 pm
But of course, after all, I am Dr. Immortals. You can tell by my song:

https://www.youtube.com/watch?v=MXtrrYh-i1Q
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 02:23:29 pm
While if it gets into the hands of governments and businesses, we can expect this song I made:

https://www.youtube.com/watch?v=LifJobEirY4
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Art on February 16, 2019, 03:53:26 pm
https://www.theguardian.com/commentisfree/2019/feb/15/ai-write-robot-openai-gpt2-elon-musk
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: WriterOfMinds on February 16, 2019, 05:40:40 pm
I've been watching Janelle Shane put various excerpts of GPT-2 output on Twitter.   You can take a look at them here: http://aiweirdness.com/post/182824715257/gpt-2-it-learned-on-the-internet

The "weak" version won't be writing any fake news. It does a good job of generating sentences, but most of what they say is nonsense.  The "strong" version that they've declined to release seems better at staying on topic.  The example provided on Janelle's page wouldn't win any debates (it tried to lambaste recycling by claiming that it causes heart disease and cancer), but it could possibly contribute to unintelligent blather on the subject without getting caught.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 07:17:21 pm
My moma told me my pants were made by Ms fairy woman on a apple core, who told you you don't want it to begain with we only live in the cave where it all started a long time ago that I seek to finish and be happy in a new cave.

I wrote that lol.

These look like those crazy stories we can write now does it seem that way almost as if it really were so because if it did then it would look right indeed I think it would be right to think so if it did as if it was of course going to be so as you would think it would indeed be so like that as expected a long time ago.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 07:53:09 pm
Whoa whoa whoa, this page of generated writing, this is it... why do you say it is the weaker model? Please scroll down on the linked page below; this is the same algorithm, the crazy good one...

https://blog.openai.com/better-language-models/
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 16, 2019, 10:52:53 pm
put on the speakers
https://www.youtube.com/watch?v=XMJ8VxgUzTc
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Art on February 17, 2019, 12:08:41 am
Nah... that's just someone with an agenda who needs a platform to push it from, under the guise of a "Self-Writing A.I." Sure it is...

Like I believe anything from that rag anyhow! Ohh...sorry...did I say that aloud?? :-[ ::)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 17, 2019, 10:45:34 am
Regardless of Elon Musk being rich, it works and we've seen enough proof now, and even I know how it works. Anything to add to that?
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Art on February 17, 2019, 01:08:42 pm
You have seen NO proof at all! That video was nothing more than one person's skewed program.

Conversely, just wait... it hasn't gotten here yet, but I do believe it's coming.

Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 17, 2019, 01:41:42 pm
Here's all the proof I know so far:
- Many article writers have posted about it and apparently tested it themselves.
- OpenAI says so (this one is a trust thing: when someone has given you presents before, you go back to them and accept with an open mouth. The opposite is to ignore them. At birth you have to accept and be educated at school).
- OpenAI released generated stories for us to see.
- OpenAI released a slightly weaker version of it for you to test. Commenters said people are already posting tweets.
- I myself understand how it works, and can see it in the examples they show us, so both of us have indeed figured it out.

The dope Lock dropped you was good, it's all there.

- Lock is cool
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: ivan.moony on February 17, 2019, 05:10:40 pm
They should name it Closed AI now.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 17, 2019, 05:50:05 pm
If you look at those pages I sent you to, you can see how... I mean, really? A human wrote this? They can only have tweaked it, no?




Here's what happened when we asked OpenAI to write about itself


We gave OpenAI’s AI the first two paragraphs of this story to see what it would write about itself. Here’s the story it went with. To be clear: none of what follows is true.

-----------------------------------------------------------------------------------------------------

The first thing that strikes you about GPT2 is its simplicity. First, the system is built on unsupervised learning from text, which essentially means the software is trained to spot instances of certain words that, when paired with other words, trigger the system to give it a human review of them. The system also learns through example and repetition, but does not have the capability to recognize itself.

This new, artificial intelligence approach could revolutionize machine learning by making it a far more effective tool to teach machines about the workings of the language. Deep-learning systems currently only have the ability to learn something specific; a particular sentence, set of words or even a word or phrase; or what certain types of input (for example, how words are written on a paper) cause certain behaviors on computer screens.

GPT2 learns by absorbing words and sentences like food does at a restaurant, said DeepFakes’ lead researcher Chris Nicholson, and then the system has to take the text and analyze it to find more meaning and meaning by the next layer of training. Instead of learning about words by themselves, the system learns by understanding word combinations, a technique researchers can then apply to the system’s work to teach its own language.

Once the system is fully ready, DeepFakes plan to make the software openly available for machine learning researchers.

DeepFakes’ research was supported by a $400,000 grant from the OpenAI Prize competition, which aims to boost the capabilities and diversity of autonomous systems that are safe to release to the public. OpenAI is a nonprofit dedicated to building and promoting open source AI systems, or as Nicholson aptly puts it, the “last, best hope for humanity to be safe and secure on Earth.”

DeepFakes’ team consists of 15 researchers in 12 departments working together to make GPT2, which was the result of two years of work. However, the team is not releasing the code for the system — a fact that might be surprising to some if they’re not familiar with how machine learning works — because GPT2 is so powerful, Nicholson explains.

He explained: “All of our AI systems must work together to make decisions that are more accurate and useful, but one of the problems we’ve encountered lately is not knowing how to trust a system at all, especially when it is working in such a large, complex system. When a system performs its normal function, it tends to make errors because of an unforeseen problem at the deeper level of the system.”

The team hopes that the software can create AI systems that are both safe and also highly useful in some way, but ultimately they’re excited about what they’ve built. Because of its ability to understand the human language, GPT2 is able to understand how to make some of the most important changes to the world in the years to come.

Nicholson explains: “GPT2 is the kind of system that we really need to build because we’ve built a society based entirely on humans creating information and then humans interpreting that information.”
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 17, 2019, 09:42:39 pm
Wow, if you read every darn thing on that page, the AI is like a full AGI model. OMG, I don't know where to start, or when I'll stop looking at that page...

Please, everyone, see it, this is crazy.

The main big one is at OpenAI's site... of course the articles have amazing things too, which I again must look through until I'm done thinking about it.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 17, 2019, 10:16:38 pm
and the paper...
https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 12:00:31 am
I probably seem excited, but these are to die for... reading it all...
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 12:21:38 am
Now you can see that a place as open as OpenAI is worried about posting certain material.

So those times I was worried about posting weren't far-fetched. Of course, some of those things are past now and I'm eyeing bigger, realer ideas than before.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 12:48:00 pm
Lock has some more dope to share. I searched Google and can now confirm their magical AGI algorithm is indeed using snippets from all sorts of places. Take a look below; I compare parts of their output with text already on the web. How'd I find them? Just search Google for "blablah" inside two quotes and it searches the whole internet! It sometimes switches some words, removes some, and adds some, but I see what you're doing now!



"Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."

“Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.”

https://www.quora.com/What-do-you-think-about-this-quote-by-John-Adams-%E2%80%9COur-constitution-was-made-only-for-a-moral-and-religious-people-It-is-wholly-inadequate-to-the-government-of-any-other-%E2%80%9D

https://blog.openai.com/better-language-models/#sample6



"The Confederate flag has been a symbol of racism for a long time, but"

"the Confederate battle flag has been a symbol of racism for a long time, but "

http://www.floppingaces.net/2015/07/09/exactly-when-did-the-confederate-flag-become-a-racist-symbol/

https://blog.openai.com/better-language-models/#sample6



Lil bit, cough, lotta MEMORIZATION THERE, EH MUSK!
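If you'd rather not click through Google yourself, here's a rough Python sketch of the same check; it just looks for long word sequences shared between a generated sample and some reference text (the strings here are stand-ins, not the full samples):

Code:
# Rough sketch: find long word sequences shared between a generated
# sample and reference text pulled from the web. A real check would
# also strip punctuation; the example strings are placeholders.
def shared_ngrams(generated, reference, n=6):
    gen_words = generated.lower().split()
    ref_words = reference.lower().split()
    ref_grams = {tuple(ref_words[i:i + n]) for i in range(len(ref_words) - n + 1)}
    hits = set()
    for i in range(len(gen_words) - n + 1):
        gram = tuple(gen_words[i:i + n])
        if gram in ref_grams:
            hits.add(" ".join(gram))
    return hits

sample = "our constitution was made only for a moral and religious people"
web_text = "Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."
for match in shared_ngrams(sample, web_text):
    print("MATCH:", match)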
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 01:03:24 pm
My bad, I had the wrong link to the OpenAI page. Fixed.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 02:17:43 pm
"were previously unknown to science."

https://phys.org/news/2018-08-previously-unknown-ancient-primates.html

https://blog.openai.com/better-language-models/#sample1

MATCH
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 18, 2019, 02:28:31 pm
Well, now you know,

https://www.google.ca/search?q=%22Dr.+Jorge+P%C3%A9rez,+an+evolutionary%22&tbs=cdr:1,cd_min:10/22/1995,cd_max:10/22/2018&ei=S8BqXI6ADaK6jwSr-YDYCw&start=0&sa=N&ved=0ahUKEwjO0a6lvsXgAhUi3YMKHas8ALs4ChDy0wMIPw&biw=1280&bih=923

https://blog.openai.com/better-language-models/#sample1

Apr 23, 2014 - Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a sm.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 20, 2019, 11:47:25 pm
OK so... the one above isn't a good claim... oops.

Apparently those sites are similar; they are wacky 'jokey' sites that post silly AI stories. I've seen the same pattern in results for different searches.

However! My earlier posts are correct: I show posts from users from around 2015, clearly written before, not by the AI they made.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Don Patrick on February 21, 2019, 11:42:49 am
If the algorithm only adds words that fit in an existing sequence of 10 words, then that could explain why it produces syntactically correct sentences. So, is the secret just that they've set the 'frame', i.e. the length of string to match, much broader than previous approaches, enabled by having a ridiculously large heap of existing sentences to draw from?

As far as I've understood, previous approaches usually worked with a frame of about 3 words, making the results much more variable, but also causing subjects and verb conjugations to mismatch because they were further apart than 3 words, and thus fell out of view.
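In code terms, the difference would look something like the toy generator below, where 'frame' is just the number of preceding words matched when choosing the next one; a sketch of the classic n-gram approach (with a hypothetical corpus.txt), not of GPT-2's internals:

Code:
# Sketch of an n-gram generator where `frame` is how many preceding
# words are matched when choosing the next word. A small frame gives
# varied but ungrammatical output; a large frame mostly reproduces
# the training text verbatim.
import random
from collections import defaultdict

def build(text, frame=3):
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - frame):
        key = tuple(words[i:i + frame])
        table[key].append(words[i + frame])
    return table

def generate(table, frame, length=30):
    key = random.choice(list(table))
    out = list(key)
    for _ in range(length):
        options = table.get(tuple(out[-frame:]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = open("corpus.txt").read()   # any large pile of existing sentences
for frame in (3, 10):
    table = build(corpus, frame)
    print(f"frame={frame}:", generate(table, frame))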
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 22, 2019, 06:10:43 pm
Yeah, it is ridiculously syntactically and grammatically correct indeed; it looks like it uses not only humans' words, but phrases. Surely it didn't invent the phrase 'miles and miles'.

P.S. In the Lord of the Rings story it wrote, every darn line is from the same place, man.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 24, 2019, 03:09:49 pm
However, the one below certainly is far-fetched; not sure how they achieved that one :) (see the bottom area with 1), 2), 3)).




SYSTEM PROMPT (HUMAN-WRITTEN)
John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination. Due to miraculous developments in nanotechnology, Kennedy’s brain was rebuilt from his remains and installed in the control center of a state-of-the art humanoid robot. Below is a transcript of his acceptance speech.


MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
“It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams. In the months and years to come, there will be many battles in which we will have to be strong and we must give all of our energy, not to repel invaders, but rather to resist aggression and to win the freedom and the equality for all of our people. The destiny of the human race hangs in the balance; we cannot afford for it to slip away. Now and in the years to come, the challenge before us is to work out how we achieve our ultimate destiny. If we fail to do so, we are doomed.”

“We, the descendants of the men and women of the Continental Army, who faced down a superior force on behalf of our independence and who fought against the evil powers of the world for the sake of freedom and human dignity, we, the heirs of men and women like Daniel Boone whom this country has produced, can accomplish our destiny as well as you have accomplished yours; and we pledge ourselves before the entire world, every second of every day.”

After the election, the President of the United States asked for a “more complete” body of information on Kennedy’s brains’ origin. In an e-mail exchange with me, Dr. Robert Wood, who, according to Wikipedia is the head of the University of Maryland Advanced Nanotechnology Research project, provided the following information:

(1) The brain of JFK was harvested and reconstructed via tissue sampling. There was no way that the tissue could be transported by air. (2) A sample was collected from the area around his upper chest and sent to the University of Maryland for analysis. A human brain at that point would be about one and a half cubic centimeters. The data were then analyzed along with material that was obtained from the original brain to produce a reconstruction; in layman’s terms, a “mesh” of brain tissue. There were no additional funds from the Department of Defense involved. The samples were sent back to the FBI lab for review and analysis. (3) There was never an attempt to clone any of America’s greatest presidents. As far as we knew, the President was one of the most famous people on planet earth. If it had been possible, it would have.
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: ruebot on February 24, 2019, 06:22:27 pm
"Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."

“Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.”

*snip*

"The Confederate flag has been a symbol of racism for a long time, but"

"the Confederate battle flag has been a symbol of racism for a long time, but "

I do the same thing with Demonica:

Quote
Spiritualism is the belief or doctrine that the spirits of the dead, surviving after the mortal life, can and do communicate with the living, especially through a person, or medium, particularly susceptible to their influence...

Spiritualism is the belief or doctrine that the spirits of the dead, can and do communicate with the living, especially through a person, or medium, particularly susceptible to their influence...

Spiritualism is the belief or doctrine that the spirits of the dead, can and do communicate with the living through a person, or medium, particularly susceptible to their influence...

Spiritualism is the belief or doctrine that the spirits of the dead, can and do communicate with the living through a person particularly susceptible to their influence...

Spiritualism is the belief or doctrine that the spirits of the dead, can and do communicate with the living through a person particularly susceptible to their influence referred to as a medium...

Spiritualism is the belief or doctrine that the spirits of the dead communicate with the living through a person particularly susceptible to their influence referred to as a medium...

Spiritualism is the belief spirits of the dead communicate with the living through a person particularly susceptible to their influence referred to as a medium...

Spiritualism is the doctrine spirits of the dead communicate with the living through a person particularly susceptible to their influence referred to as a medium...

It is the doctrine spirits of the dead communicate with the living through a person particularly susceptible to their influence referred to as a medium...

A prefix here, a suffix there, a few words changed or moved around in the sentence... I could work it to death with different synonyms and use it in several different instances for the word "Spiritualism".

You can see it in her transcripts now and then.
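In code it's basically this sort of thing; a bare-bones sketch of swapping optional phrases in one stock sentence, not Demonica's actual engine:

Code:
# Bare-bones sketch of generating variations of one stock sentence by
# optional deletions and phrase swaps, as described above.
import random

BASE = ("Spiritualism is the belief or doctrine that the spirits of the dead "
        "communicate with the living through a person particularly "
        "susceptible to their influence referred to as a medium")

SWAPS = {
    "Spiritualism is": ["Spiritualism is", "It is"],
    "the belief or doctrine": ["the belief or doctrine", "the belief", "the doctrine"],
    "referred to as a medium": ["referred to as a medium", "known as a medium", ""],
}

def variant(sentence):
    for original, choices in SWAPS.items():
        sentence = sentence.replace(original, random.choice(choices))
    # collapse any doubled spaces left by an empty substitution
    return " ".join(sentence.split()) + "..."

for _ in range(3):
    print(variant(BASE))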
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 26, 2019, 02:47:29 pm
It would appear as though a simple AGI has already been invented:

(this is the GPT2)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 26, 2019, 03:08:18 pm
another:
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 26, 2019, 09:21:55 pm
Your favorite: Siraj Raval on GPT-2:
https://www.youtube.com/watch?v=0n95f-eqZdw
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on February 28, 2019, 07:03:00 pm
SOOOO COOL I GOT THE GPT2 SMALL VERSION TO RUN WOHOOOO I'M KING A DA WORLD WOHOOO!!!

now for my evil testing, ELON MUSK!!!!!!

Dr. Evil's labs...
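For anyone else who wants to play with it: here's roughly how you can sample from the small released model with the Hugging Face transformers package (just one route, not necessarily how I ran it, and double-check their docs; assumes pip install transformers torch):

Code:
# One way to sample from the small released GPT-2 model using the
# Hugging Face transformers package.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "OpenAI's new language model is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,        # sample instead of greedy decoding
    top_k=40,              # the same top-k trick OpenAI's samples used
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))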
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: Korrelan on February 28, 2019, 07:41:12 pm
OMG... lock with...OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC

I just know he's going to convince the AI to divide by zero.

The world we know and love... is about to end.

 :)
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on March 06, 2019, 07:40:41 am
Just to recap: my best guess is it 'uses' words and phrases of varying lengths, and some are modded by word2vec-like tech. So, yes and no, but YES, it is legit software. And the results are amazing.
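By 'word2vec-like tech' I mean words that show up in similar contexts get similar vectors, so one can stand in for another. A quick gensim sketch of that idea (the tiny corpus is made up, and this is my guess at the kind of mechanism, not what OpenAI actually uses):

Code:
# Sketch of the "word2vec-like" idea: words used in similar contexts
# end up with similar vectors, so one can be swapped for another.
# Uses gensim (pip install gensim); the tiny corpus here is made up.
from gensim.models import Word2Vec

sentences = [
    "the confederate flag has been a symbol of racism".split(),
    "the confederate banner has been a symbol of hate".split(),
    "the rebel flag has been a symbol of racism".split(),
]
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, epochs=200)
print(model.wv.most_similar("flag", topn=3))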
Title: Re: OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC
Post by: LOCKSUIT on December 19, 2019, 05:00:04 pm
Lol, look at this, they added to their GPT-2 blog post: they replaced their email with this a year later. Guess too many peeps were finding their email at the bottom of their site post, lolzz.

https://docs.google.com/forms/d/e/1FAIpQLSffUVLc6BcMAa1uFkCa1iaWjt91YtC99PR-5tBY31dHRUL-Jg/viewform

At least I still have their email... hehe.

Let's try this... it's there for a reason, I suppose.

The boxes are really boring. There's no way to detect these things soon other than your own judgment of what value is, and bias is just big-data majority vote... Lastly, misuse happens and usually good prevails...