The Mimicry Game: Towards Self-recognition in Chatbots.

infurl

  • Administrator
  • Eve
  • 1372
The Mimicry Game: Towards Self-recognition in Chatbots.
« on: March 14, 2020, 11:36:31 pm »
https://arxiv.org/abs/2002.02334

Now here is a very interesting idea.

Quote
In standard Turing test, a machine has to prove its humanness to the judges. By successfully imitating a thinking entity such as a human, this machine then proves that it can also think. However, many objections are raised against the validity of this argument. Such objections claim that Turing test is not a tool to demonstrate existence of general intelligence or thinking activity. In this light, alternatives to Turing test are to be investigated. Self-recognition tests applied on animals through mirrors appear to be a viable alternative to demonstrate the existence of a type of general intelligence. Methodology here constructs a textual version of the mirror test by placing the chatbot (in this context) as the one and only judge to figure out whether the contacted one is an other, a mimicker, or oneself in an unsupervised manner. This textual version of the mirror test is objective, self-contained, and is mostly immune to objections raised against the Turing test. Any chatbot passing this textual mirror test should have or acquire a thought mechanism that can be referred to as the inner-voice, answering the original and long lasting question of Turing "Can machines think?" in a constructive manner.

HS

  • Trusty Member
  • Millennium Man
  • 1178
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #1 on: March 14, 2020, 11:47:47 pm »
Nice find. So many implications to consider.

Don Patrick

  • Trusty Member
  • Replicant
  • 633
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #2 on: March 15, 2020, 03:58:03 pm »
It is certainly an interesting spin, but it sounds easy to game.
Is there anything to prevent me from hardcoding a specific question-answer combination by which a chatbot can easily identify itself? E.g. a reply containing a specific typ0, a specific phrase at the tenth question, a Morse code ID hidden in the punctuation, etc. Frankly, one could program a chatbot with an "inner voice" feedback loop that recognises its own stock responses, without it being capable of anything else.
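Something like the following back-of-a-napkin sketch would already satisfy the letter of the test; the signature string and function names are made up, of course, just to show how little is needed.
Code:
SIGNATURE = "?!.?!"  # arbitrary hardcoded punctuation ID

def sign_reply(reply):
    # Append the covert ID to every outgoing message.
    return reply + " " + SIGNATURE

def looks_like_me(incoming):
    # "Self-recognition" reduced to a trivial string check.
    return SIGNATURE in incoming

print(looks_like_me(sign_reply("Hello there")))  # True
print(looks_like_me("Hello there"))              # False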

The 2015 "robot passes self-awareness test" story comes to mind. All the robots could do was draw a single inference based on a single prewritten response.

Zero

  • Eve
  • 1287
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #3 on: March 15, 2020, 04:32:25 pm »
It's not that easy to game. The spec says the bot should "figure out whether the contacted one is an other, a mimicker, or oneself". If you know the program makes a typ0 on purpose, it's easy to mimic it (and so fool it), so it would not pass the test. If you really want to create a bot that can pass the test, you must make it very hard to mimic. In other words, a human who knows the source code of the bot should not be able to make it believe it's talking to itself.
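The paper wants this done through conversation alone, of course, but just to illustrate the principle that knowing the source should not be enough: here is a rough sketch where the self-check hangs on a secret that only exists at runtime. Everything in it is invented for illustration.
Code:
import hmac, hashlib, os

class Bot:
    def __init__(self):
        # A fresh secret per running instance; it exists only at runtime,
        # so it cannot be read off the source code.
        self._secret = os.urandom(32)

    def answer_challenge(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).hexdigest()

    def is_me(self, challenge, response):
        # Only the very same running instance can produce a matching answer.
        return hmac.compare_digest(self.answer_challenge(challenge), response)

a, b = Bot(), Bot()          # same code, two separate instances
c = os.urandom(16)           # a random challenge
print(a.is_me(c, a.answer_challenge(c)))  # True: literally itself
print(a.is_me(c, b.answer_challenge(c)))  # False: a copy can't fake it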

Don Patrick

  • Trusty Member
  • Replicant
  • 633
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #4 on: March 15, 2020, 07:37:26 pm »
As I understand from the paper, the "mimicker" is a copy of the program itself, and the chatbot has to distinguish its copy from other chatbots.
Quote
the next stage replaces B with another instance of A, called the mimicker of A. Therefore, in this case there are two instances of the same program talking to each other. So the question then is: Will the agent A be able to recognize such a case and figure out (in an unsupervised manner) that the entity contacted is in fact an instance of itself instead of being an instance of a distinct program?

Of course, if the source code were known, you couldn't secretly cheat the game, but then there would also be no need for an external behavioural test anymore.

Slightly more interesting is the third stage, where the chatbot has to distinguish a copy of itself from its actual self, but that would not be too much of a challenge either.
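For what it's worth, here is roughly how I read the three stages, as a toy harness. The class and method names are my own invention, not the paper's, and the actual recognition logic is left as the open problem it is.
Code:
class ChatAgent:
    def __init__(self, name):
        self.name = name

    def reply(self, message):
        # Placeholder for the actual chatbot logic.
        return f"{self.name} heard: {message}"

    def classify(self, transcript):
        # The hard part the paper asks for: decide, unsupervised, whether
        # the interlocutor was an "other", a "mimicker", or "oneself".
        raise NotImplementedError

def converse(agent, partner_reply, turns=5):
    transcript, msg = [], "hello"
    for _ in range(turns):
        out = agent.reply(msg)
        msg = partner_reply(out)
        transcript.append((out, msg))
    return transcript

A = ChatAgent("A")
B = ChatAgent("B")              # a distinct program: the "other"
A2 = ChatAgent("A")             # a second instance of A: the "mimicker"

stage1 = converse(A, B.reply)   # stage 1: A talks to an other
stage2 = converse(A, A2.reply)  # stage 2: A talks to a copy of itself
stage3 = converse(A, A.reply)   # stage 3: A talks to its actual self
# A.classify(stage_n) is where the self-recognition would have to happen.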

Zero

  • Eve
  • 1287
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #5 on: March 15, 2020, 08:43:42 pm »
Quote
Of course, if the source code were known, you couldn't secretly cheat the game, but then there would also be no need for an external behavioural test anymore.

Why? Source code alone can't prove humanness, can it?

Don Patrick

  • Trusty Member
  • Replicant
  • 633
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #6 on: March 16, 2020, 07:38:28 am »
Source code cannot demonstrate a person's subjective experience of a chatbot's human-likeness, but it does reveal whether or not there are functional processes behind the facade that are similar to human thought processes. The paper is concerned with whether machines can think, unless I misread it.

The same is true for Turing tests, the Winograd Schema Challenge, and any other behavioural test: there's no need to guess at a patient's affliction from their external symptoms when you can just analyse their blood and germs under a microscope.

Zero

  • Eve
  • 1287
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #7 on: March 16, 2020, 09:22:05 am »
OK, I think I understand. This brings up one last question: could you please help me find a few implementable descriptions of human thought processes?

krayvonk

  • Electric Dreamer
  • 125
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #8 on: March 16, 2020, 11:16:49 am »
For a computer to recognize itself, doing so would have to be useful to it before it would do it. It would need to be a part of its goals, wouldn't it?

Zero

  • Eve
  • 1287
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #9 on: March 16, 2020, 12:37:14 pm »
Quote
For a computer to recognize itself, doing so would have to be useful to it before it would do it. It would need to be a part of its goals, wouldn't it?

At least, the program should be able to understand the meaning of concepts like "itself", "mimicker", and so on.

Don Patrick

  • Trusty Member
  • Replicant
  • 633
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #10 on: March 16, 2020, 02:15:04 pm »
Quote
OK, I think I understand. This brings up one last question: could you please help me find a few implementable descriptions of human thought processes?
That is the question AI researchers have been trying to answer since forever; I won't pretend to have all the answers, or that everyone would agree with my descriptions. Some of the more easily identifiable thought processes are logical inference, generalisation, and association. The latter we see in biology-inspired artificial Neural Networks; the others are used in the inference engines of Expert Systems, for example. These are crude approximations, because we lack detail about human thought processes, but they are recognisable from source code and/or an explanation thereof. People have always taken an "I'll recognise it when I see it" attitude towards AI, regardless of the multitude of proposed tests.
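To give one deliberately tiny example of what "implementable" means for the logical inference part, a forward-chaining rule engine boils down to something like this. The rules and facts are toy examples, not a model of anyone's mind.
Code:
# Toy forward-chaining inference engine: the Expert System flavour of
# "logical inference". The rules and facts are made-up examples.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
facts = {"has_feathers", "lays_eggs", "can_fly"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # draw the inference
            changed = True

print(facts)  # now also contains "is_bird" and "can_migrate"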

 

