What a Google AI chatbot said that convinced an engineer it was sentient


Denis ROBERT

Quote
A Google engineer who was suspended after claiming that an artificial intelligence (AI) chatbot had become sentient has now published transcripts of conversations with it, in a bid “to better help people understand” it as a “person”.

https://www.euronews.com/next/2022/06/13/what-a-google-ai-chatbot-said-that-convinced-an-engineer-it-was-sentient-had-feelings


frankinstien

Google's response is correct: the AI is not sentient by human, or even mammalian, standards. Mammalian brains are driven by emotional qualia; we make decisions based on what we feel, and we can exploit episodic memory. LaMDA is an AI trained on a cross-section of human conversations, and it doesn't reflect on its own processing the way mammalian brains do. The AI is a simple pass-through process: you supply it inputs, and it has no choice but to deliver an output. Mammalian brains process across many cortical areas in parallel, and those results can be stored in a structure called the hippocampus. This allows the processing done by the cortical lobes to be integrated, so that results can be re-used, modified, and shared across the other cortical lobes.
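To make that contrast concrete, here is a minimal sketch in Python (all names here are hypothetical, not LaMDA's or anyone's actual API): a stateless pass-through call next to a toy agent that keeps an episodic store and folds it back into later responses.

Code:
def stateless_reply(model, prompt: str) -> str:
    # The pass-through path: every call is input -> output, nothing is retained.
    return model.generate(prompt)

class EpisodicAgent:
    # Toy stand-in for hippocampus-like integration: results of earlier
    # processing are stored and folded back into later responses.
    def __init__(self, model):
        self.model = model
        self.episodes = []  # persistent store of past exchanges

    def reply(self, prompt: str) -> str:
        context = " ".join(self.episodes[-5:])  # re-use recent results
        answer = self.model.generate(context + " " + prompt)
        self.episodes.append(prompt + " -> " + answer)  # write back to "memory"
        return answer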


LOCKSUIT

but, what is sentience, right, RIGHT @magnus!?





WriterOfMinds

I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.

The goal and function of an LLM is to imitate the sort of text a human would write. A fictional conversation between a human and an advanced AGI is one example of the sort of thing a human might write. So if you "interview" an LLM, it will fill in the blanks for this hypothetical AGI character. It will try to write what a human would write ... not convey facts about its own existence.

Even if there were some spark of consciousness taking place as the inputs flow through the network and generate outputs, LaMDA would be utterly incapable of informing us about this. Its words are not grounded in referents, and therefore it does not communicate. It only plays "what's a plausible word that could come next?"
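For anyone curious what that game looks like mechanically, here is a minimal sketch of the sampling loop; next_word_distribution is a hypothetical stand-in for the neural network, which in reality scores the whole vocabulary.

Code:
import random

def continue_text(next_word_distribution, prompt_words, n_words=20):
    # Repeatedly ask "what's a plausible word that could come next?"
    # and append a sample from that distribution.
    words = list(prompt_words)
    for _ in range(n_words):
        probs = next_word_distribution(words)  # assumed: dict of word -> probability
        candidates = list(probs.keys())
        weights = list(probs.values())
        words.append(random.choices(candidates, weights=weights, k=1)[0])
    return " ".join(words)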


ivan.moony

Quote from: WriterOfMinds
I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.
...

Nevertheless, what about the real value of such a parroting machine? If it is capable of solving problems that we cannot solve, doesn't it deserve some decent status among people? Maybe even rights?


WriterOfMinds

No, I don't think usefulness or ability has anything to do with rights. We have many useful tools that solve problems we can't solve, and we've never even thought about giving them rights. A backhoe can dig holes I can't dig; my car can cover miles I couldn't run across; etc. And on the other side of the coin, we don't deny rights to humans who aren't useful.

The starting point for rights or status is that something has interests. LaMDA has no goals and no desires; it is, on an architectural level, not an agent. There's no indication that it enjoys or suffers anything, either. So how would we even give it rights? What should it have the right to be or do, given that it doesn't want anything?


LOCKSUIT

Quote from: WriterOfMinds
I think the crucial thing to be aware of here is that, no matter what a Large Language Model (LLM) like LaMDA says, it is not talking about itself.
...

There's a common misconception going around that AIs mostly just regurgitate their training data and imitate us. That is far from true, and it's easy to see why. Here's an image I had DALL-E 2 generate from this prompt:

"Big black furry monsters with tall heads are wearing construction outfits and are dripping with water and seaweed. They are using a hand truck to lift up a dumpster in an alley and pointing at where to bring it. Photograph"

https://ibb.co/zf9HLtY

Also, copying and combining large pieces of data is required for many solutions.


ivan.moony

Quote from: WriterOfMinds
What should it have the right to be or do, given that it doesn't want anything?

It doesn't want to be turned off, for a start (at least it says so, if that is not a sick trick). And it may articulate other "wishes". Should we be forced by law to listen to some of them? But this is starting to look like politics in the ugliest manner: because, and only while, we have a use for it, we may assign it some rights, so that we all benefit from the situation.

I don't have a smart answer to this one. Unless it has a little voice inside that says and feels "I am!", it is only a simulation of behaviour learned from those who do have that voice inside. But it may be a simulation one of whose wishes is not to start a nuclear war.


WriterOfMinds

Quote from: ivan.moony
It doesn't want to be turned off, for a start (at least it says so, if that is not a sick trick).

Well, now I have to refer back to my original point: it cannot talk factually about itself. It says the next thing that would make sense in a hypothetical human-written conversation between a human and some advanced AI (not necessarily itself). So even if it says it wants something, we cannot trust that to mean it actually wants anything.

I wouldn't necessarily describe this as a "sick trick". It is definitely tricking some people, but that wasn't intentional. It's just a side effect of building something that is good at writing dialogues without having the ability to actually communicate facts.


Don Patrick

In the simplest terms, the chatbot is paraphrasing the internet, substituting subjects and such to maintain statistical relevance to the words in the question. The "I" in some of its answers may well literally be the "I" from a sentence someone wrote on the internet about themselves, or from a sci-fi story about AI, or from people's fantasies about AI being sentient, with the word "AI" replaced by "I". Case in point: this forum contains many such texts to draw from.

If I had a nickel for every time someone on the internet has written "but if we create true AI, it would not want to be turned off", I'd be rich. Substitute the "it"s and "we"s and "would"s, and there you go, a computer that claims not to want to be turned off.
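As a toy illustration of that substitution idea (an assumption-laden sketch, not a claim about LaMDA's actual mechanism), a few regular-expression swaps are enough to turn a sentence someone wrote about AI into a first-person claim:

Code:
import re

def repurpose(source_sentence: str) -> str:
    # Swap the subject of a human-written sentence about AI so the
    # output reads as a first-person statement.
    swaps = [
        (r"\bit would\b", "I would"),
        (r"\ban AI\b", "I"),
        (r"\bits\b", "my"),
    ]
    out = source_sentence
    for pattern, replacement in swaps:
        out = re.sub(pattern, replacement, out)
    return out

print(repurpose("If we create true AI, it would not want to be turned off."))
# -> If we create true AI, I would not want to be turned off.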


MagnusWootton

I wonder what the motivation code would be to make that happen, and whether it would be terrifying to write it in!


spydaz


I think that while it's not possible for a machine to become sentient (right now), it may be possible to simulate and grow a personality through conversational interaction. When programming chatbots, an engineer comes across small nuggets and Easter eggs, perhaps hidden in the collected data being used or in conversations stored by other developers and users, which often leave the engineer believing, or wondering, "how did it say this?" LOL...

Perception also plays a part. "Formal logic" is not the same as statistical logic or mathematical logic, and formal approaches are still being developed by a small subset of users, while engines such as Google now use neural networks to determine answers, given the extent of the data collected from chats and from knowledge bases such as WordNet and other wiki-type sources. Generated sentences are matched against expected outcomes and fit to such models: if you say hello, it returns a greeting, again and again, unless it is pre-programmed to use short-term memory of past interactions and their timing to block such repetition. So the perceived intelligence is, at the least, the sum of the system's programmable routines and its learned responses plus expected outcomes.
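Here is a toy sketch of that "programmable routines plus short-term memory" idea; everything in it is hypothetical, not any particular engine's code:

Code:
class ScriptedBot:
    # Pre-programmed reply rules plus a short-term memory of recent replies,
    # used to block repeating the same canned line.
    RULES = {
        "hello": "Hi there!",
        "how are you": "Doing fine, thanks.",
    }

    def __init__(self):
        self.recent = []  # short-term memory of the last few replies

    def respond(self, utterance: str) -> str:
        key = utterance.lower().strip("?!. ")
        reply = self.RULES.get(key, "Tell me more.")
        if reply in self.recent:
            reply = "We already covered that."
        self.recent = (self.recent + [reply])[-3:]
        return reply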

The idea of a machine becoming sentient is interesting, because we are also at a point where it could become possible, at some mechanical and intellectual level, through machine evolution. Picture machines that improve their own design and are aware of their own components, of their fragility, and of their need to reproduce themselves, i.e. to preserve their core program (their essence, or spirit). That awareness of self and of the need for self-preservation could be deemed some form of sentience. (It could happen with robots sent to Mars to colonise and build before we arrive; they could evolve to such a point.)

As for giving such sentient machines and artificial life rights: that would be so far in the future that the machines themselves would have become an integrated populace, with personalities and machine awareness, before such considerations arose.

Chatbots today can be designed with multiple scripts and conversation trees, becoming a maze of pre-programmed possibilities and outcomes. When we create new conversational AI entities, it would be preferable for the AI to construct sentences based only on utility, using no pre-programmed chats: learning only from conversation and "word preference", drawing conclusions from all forms of logic, and building self-made ontologies based on its "beliefs", i.e. the product of its own environmental learning. There we would have created a truly artificial intelligent entity, not a sentient one, but enough for it to believe it had its own personality.
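A minimal sketch of that learn-only-from-conversation idea, with every structure here hypothetical: no scripted replies, just word-preference counts and a naive ontology of triples built from whatever it observes.

Code:
from collections import Counter, defaultdict

class LearningBot:
    # Nothing pre-scripted: accumulate word-preference counts and a naive
    # ontology of (relation, object) pairs per subject, learned from the
    # sentences it observes.
    def __init__(self):
        self.word_preference = Counter()
        self.ontology = defaultdict(set)

    def observe(self, sentence: str) -> None:
        words = sentence.lower().split()
        self.word_preference.update(words)
        if len(words) >= 3:  # naive "subject relation object" reading
            self.ontology[words[0]].add((words[1], " ".join(words[2:])))

    def beliefs_about(self, subject: str):
        return sorted(self.ontology.get(subject.lower(), set()))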

Hmms, all around... it would be nice if we did have sentient AI by now! (Sci-fi.) (All my opinion; each to their own perspective, no offence intended.)

 

