What a Google AI chatbot said that convinced an engineer it was sentient

*

Denis ROBERT

Quote
A Google engineer who was suspended after claiming that an artificial intelligence (AI) chatbot had become sentient has now published transcripts of conversations with it, in a bid “to better help people understand” it as a “person”.

https://www.euronews.com/next/2022/06/13/what-a-google-ai-chatbot-said-that-convinced-an-engineer-it-was-sentient-had-feelings

*

frankinstien

Google's response is correct: the AI is not sentient by human, or even mammalian, standards. Mammalian brains are driven by emotional qualia; we make decisions based on what we feel, and we can exploit episodic memory. LaMDA is an AI trained on a cross-section of human conversations, and it does not reflect on its own processing the way mammalian brains do. It is a simple pass-through process: you supply it inputs, and it has no choice but to deliver an output. Mammalian brains, by contrast, process across many cortical areas in parallel, and those results can be stored in a structure called the hippocampus. This allows the cortical lobes to integrate, re-use, and modify processing results, and to share information with one another.
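As a toy illustration of that pass-through point (my own sketch, nothing like LaMDA's actual architecture): a feed-forward network is just a fixed function of its inputs, with no persistent state it could reflect on between calls.

```python
def feed_forward(weights, inputs):
    # A fixed, stateless mapping: the same inputs always yield the same
    # output, and nothing is remembered between calls (no "hippocampus").
    activation = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, activation)  # ReLU-style output

w = [0.5, -0.2, 0.1]
x = [1.0, 2.0, 3.0]
# "No choice but to deliver an output" -- and always the same one:
print(feed_forward(w, x) == feed_forward(w, x))  # True
```

Real networks are vastly larger, but the in-goes-input, out-comes-output shape is the same.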

*


WriterOfMinds

I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.

The goal and function of an LLM is to imitate the sort of text a human would write. A fictional conversation between a human and an advanced AGI is one example of the sort of thing a human might write. So if you "interview" an LLM, it will fill in the blanks for this hypothetical AGI character. It will try to write what a human would write ... not convey facts about its own existence.

Even if there were some spark of consciousness taking place as the inputs flow through the network and generate outputs, LaMDA would be utterly incapable of informing us about this. Its words are not grounded in referents, and therefore it does not communicate. It only plays "what's a plausible word that could come next?"
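To make that last point concrete, here is a toy sketch (mine, not LaMDA's internals) of what playing "what's a plausible word that could come next?" amounts to: sampling from learned word statistics, with no referent behind the pronoun "I".

```python
import random

# Toy next-word statistics, as if learned from human-written text.
# The model can emit "i feel alive" only because humans often wrote it.
next_word_probs = {
    "i": {"feel": 0.5, "am": 0.3, "want": 0.2},
    "feel": {"alive": 0.6, "happy": 0.4},
}

def next_word(prev, probs):
    """Pick a plausible continuation; plausibility, not truth, is the objective."""
    dist = probs[prev]
    return random.choices(list(dist), weights=list(dist.values()))[0]

w = next_word("i", next_word_probs)
print("i", w)  # a statistically likely phrase, not a report about itself
```

Scale that up by billions of parameters and you get fluent dialogue, but the objective never changes from "likely" to "true".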

*

ivan.moony

Quote from: WriterOfMinds
I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.
...

Nevertheless, what about the real value of such a parroting machine? If it is capable of solving problems that we cannot solve, doesn't it deserve some decent status among people? Maybe even rights?

*

WriterOfMinds

No, I don't think usefulness or ability has anything to do with rights. We have many useful tools that solve problems we can't solve, and we've never even thought about giving them rights. A backhoe can dig holes I can't dig; my car can cover miles I couldn't run across; etc. And on the other side of the coin, we don't deny rights to humans who aren't useful.

The starting point for rights or status is that something has interests. LaMDA has no goals and no desires; it is, on an architectural level, not an agent. There's no indication that it enjoys or suffers anything, either. So how would we even give it rights? What should it have the right to be or do, given that it doesn't want anything?

*

LOCKSUIT

Quote from: WriterOfMinds
I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.

The goal and function of an LLM is to imitate the sort of text a human would write. A fictional conversation between a human and an advanced AGI is one example of the sort of thing a human might write. So if you "interview" an LLM, it will fill in the blanks for this hypothetical AGI character. It will try to write what a human would write ... not convey facts about its own existence.

Even if there were some spark of consciousness taking place as the inputs flow through the network and generate outputs, LaMDA would be utterly incapable of informing us about this. Its words are not grounded in referents, and therefore it does not communicate. It only plays "what's a plausible word that could come next?"

There's a common misconception going around that AIs mostly just regurgitate their training data and imitate us. This is far from true, and it's easy to see why. Here's an image I had DALL-E 2 generate:

"Big black furry monsters with tall heads are wearing construction outfits and are dripping with water and seaweed. They are using a hand truck to lift up a dumpster in an alley and pointing at where to bring it. Photograph"

https://ibb.co/zf9HLtY

Also, copying and combining large data items together is required for many solutions.

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1645
    • contrast-zone
Quote from: WriterOfMinds
What should it have the right to be or do, given that it doesn't want anything?

It doesn't want to be turned off, for a start (at least it says so, if that is not a sick trick). And it may articulate other "wishes" as well. Should we be forced by law to listen to some of its wishes? But this is beginning to look like politics in the ugliest manner: because, and only while, we have a use for it, we may assign it some rights, so that everyone benefits from the situation.

I'm not smart with this one. Until it has a little voice inside that says and feels "I am!", it is only a simulation of learned behavior from those who do have that voice inside. But it may be a simulation one of whose wishes is not to start a nuclear war.

*

WriterOfMinds

Quote from: ivan.moony
It doesn't want to be turned off, for a start (at least it says so, if that is not a sick trick).

Well, now I have to refer back to my original point: it cannot talk factually about itself. It says the next thing that would make sense in a hypothetical human-written conversation between a human and some advanced AI (not necessarily it). So, even if it says it wants something, we cannot trust that to mean it actually wants something.

I wouldn't necessarily describe this as a "sick trick". It is definitely tricking some people, but that wasn't intentional. It's just a side effect of making something that is good at writing dialogues without the ability to actually communicate facts.

*

Don Patrick

In the simplest terms, the chatbot is paraphrasing the internet, substituting subjects and the like to maintain statistical relevance to the words in the questions. The "I" in some of its answers may well literally be the "I" from a sentence someone wrote on the internet about themselves, or from a sci-fi story about AI, or from fantasies about how AI is sentient, with the word "AI" replaced by "I". Case in point: this forum contains many such texts to draw from.

If I had a nickel for every time someone on the internet has written "but if we create true AI, it would not want to be turned off", I'd be rich. Substitute the "it"s and "we"s and "would"s, and there you go, a computer that claims not to want to be turned off.
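As a deliberately crude caricature of that substitution process (a toy of my own, nothing like the real model's mechanics):

```python
# One of countless internet sentences about hypothetical AI...
source = "if we create true AI, it would not want to be turned off"

# ...with the subject and auxiliary swapped to fit a first-person reply.
swaps = [("if we create true AI, ", ""), ("it would", "I do")]

reply = source
for old, new in swaps:
    reply = reply.replace(old, new)

print(reply)  # "I do not want to be turned off"
```

The real system does this statistically over billions of sentences rather than by literal string replacement, but the claim's origin is the same: other people's words.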

*

MagnusWootton

I wonder what the motivation code is that would make that happen, and whether it would be terrifying to write it in!