The Mimicry Game: Towards Self-recognition in Chatbots.

  • 10 Replies
  • 1213 Views

infurl

  • Administrator
  • Eve
  • 1255
  • Humans will disappoint you.
    • Home Page
The Mimicry Game: Towards Self-recognition in Chatbots.
« on: March 14, 2020, 11:36:31 pm »
https://arxiv.org/abs/2002.02334

Now here is a very interesting idea.

Quote
In the standard Turing test, a machine has to prove its humanness to the judges. By successfully imitating a thinking entity such as a human, this machine then proves that it can also think. However, many objections are raised against the validity of this argument. Such objections claim that the Turing test is not a tool to demonstrate the existence of general intelligence or thinking activity. In this light, alternatives to the Turing test are to be investigated. Self-recognition tests applied to animals through mirrors appear to be a viable alternative for demonstrating the existence of a type of general intelligence. The methodology here constructs a textual version of the mirror test by placing the chatbot (in this context) as the one and only judge to figure out whether the contacted one is an other, a mimicker, or oneself, in an unsupervised manner. This textual version of the mirror test is objective, self-contained, and mostly immune to objections raised against the Turing test. Any chatbot passing this textual mirror test should have or acquire a thought mechanism that can be referred to as the inner voice, answering Turing's original and long-lasting question "Can machines think?" in a constructive manner.


HS

  • Trusty Member
  • Millennium Man
  • 1131
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #1 on: March 14, 2020, 11:47:47 pm »
Nice find. So many implications to consider.


Don Patrick

  • Trusty Member
  • Replicant
  • 613
    • Artificial Detective
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #2 on: March 15, 2020, 03:58:03 pm »
It is certainly an interesting spin, but it sounds easy to game.
Is there anything to prevent me from hardcoding a specific question-answer combination by which a chatbot can easily identify itself? E.g. a specific typ0, a specific phrase at the tenth question, a Morse-code ID through punctuation, etc. Frankly, one could program a chatbot with an "inner voice" feedback loop that recognises its own stock responses, without having it be capable of anything else.
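To make the cheat concrete, here is a minimal hypothetical sketch (Python, all names my own invention, not from the paper): an "inner voice" that is nothing more than a lookup of the bot's own stock lines, with a deliberate typ0 doubling as a signature.

```python
# Hypothetical sketch: a trivial "inner voice" that only recognises
# its own stock responses -- enough to game a naive mirror test,
# while the bot is incapable of anything else.
STOCK_RESPONSES = {
    "Hello! How are you todey?",   # deliberate typ0 acts as a signature
    "I am fine, thank you.",
    "That is very interesting.",
}

def is_probably_me(incoming: str) -> bool:
    """Flag the conversation partner as 'self' whenever it uses
    one of our exact stock lines, signature typ0 included."""
    return incoming in STOCK_RESPONSES

assert is_probably_me("Hello! How are you todey?")
assert not is_probably_me("Hello! How are you today?")  # corrected spelling fails
```

A few lines of lookup, and the bot "recognises itself" without any thought mechanism behind it.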

The 2015 "robot passes self-awareness test" comes to mind. All the robots could do was draw a single inference based on a single prewritten response.
CO2 retains heat. More CO2 in the air = hotter climate.


Zero

  • Trusty Member
  • Millennium Man
  • 1163
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #3 on: March 15, 2020, 04:32:25 pm »
It's not that easy to game. The spec says the bot should "figure out whether the contacted one is an other, a mimicker, or oneself". If you know the program makes a typ0 on purpose, it's easy to mimic it (and thereby fool it), so it does not pass the test. If you really want to create a bot that can pass the test, you must make it very hard to mimic. In other words, a human who knows the bot's source code should not be able to make it believe it's talking to itself.
Google is a plague, a disease. It is the metastatic cancer of the human species.


Don Patrick

  • Trusty Member
  • Replicant
  • 613
    • Artificial Detective
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #4 on: March 15, 2020, 07:37:26 pm »
As I understand from the paper, the "mimicker" is a copy of the program itself, and the chatbot has to distinguish its copy from other chatbots.
Quote
the next stage replaces B with another instance of A, called the mimicker of A. Therefore, in this case there are two instances of the same program talking to each other. So the question then is: Will the agent A be able to recognize such a case and figure out (in an unsupervised manner) that the entity contacted is in fact an instance of itself instead of being an instance of a distinct program?

Of course, if the source code were known you couldn't secretly cheat the game, but then there would also be no need for an external behavioural test anymore.

Slightly more interesting is the third stage where the chatbot has to distinguish a copy of itself from its actual self, but that would not be too much of a challenge either.
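As a hypothetical illustration of why that third stage is easy (this sketch is my own, not from the paper): send a token generated at runtime and check whether it comes straight back. A separate copy of the program would generate its own token, so a verbatim echo can only be your actual self on a loopback.

```python
import secrets

def make_probe() -> str:
    # Runtime-generated token: it appears nowhere in the source code,
    # so even someone who has read the code cannot pre-compute it.
    return f"probe-{secrets.token_hex(8)}"

def classify(probe: str, reply: str) -> str:
    """Distinguish 'actual self' (our own words echoed back verbatim)
    from everything else, which needs further questioning."""
    if probe in reply:
        return "self"
    return "other-or-mimicker"

probe = make_probe()
assert classify(probe, probe) == "self"              # loopback: talking to oneself
assert classify(probe, "probe-deadbeef") == "other-or-mimicker"
```

Note this trick only separates self from non-self; telling a mimicker from a genuinely distinct program is the part the paper cares about.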
CO2 retains heat. More CO2 in the air = hotter climate.


Zero

  • Trusty Member
  • Millennium Man
  • 1163
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #5 on: March 15, 2020, 08:43:42 pm »
Quote
Of course if the source code were known then you can't secretly cheat the game, but then there's also no need for an external behavioural test anymore.

Why? Source code alone can't prove humanness, can it?
Google is a plague, a disease. It is the metastatic cancer of the human species.


Don Patrick

  • Trusty Member
  • Replicant
  • 613
    • Artificial Detective
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #6 on: March 16, 2020, 07:38:28 am »
Source code cannot demonstrate a person's subjective experience of a chatbot's human-likeness, but it does reveal whether or not there are functional processes behind the facade that are similar to human thought processes. The paper is concerned with whether machines can think, unless I misread.

The same is true for Turing tests, the Winograd Schema Challenge, and any other behavioural test: there's no need to guess at a patient's affliction from their external symptoms when you can just analyse their blood and germs under a microscope.
CO2 retains heat. More CO2 in the air = hotter climate.


Zero

  • Trusty Member
  • Millennium Man
  • 1163
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #7 on: March 16, 2020, 09:22:05 am »
OK, I think I understand. This brings up one last question: could you please help me find a few implementable descriptions of human thought processes?
Google is a plague, a disease. It is the metastatic cancer of the human species.


krayvonk

  • Electric Dreamer
  • 125
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #8 on: March 16, 2020, 11:16:49 am »
For a computer to recognize itself, doing so would first have to be useful to it. It would need to be part of its goals?


Zero

  • Trusty Member
  • Millennium Man
  • 1163
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #9 on: March 16, 2020, 12:37:14 pm »
Quote
For a computer to recognize itself, it would have to be useful for it, before it would do it. It would need to be a part of its goals?
At least, the program should be able to understand the meaning of concepts like "itself", "mimicker", ...etc.
Google is a plague, a disease. It is the metastatic cancer of the human species.


Don Patrick

  • Trusty Member
  • Replicant
  • 613
    • Artificial Detective
Re: The Mimicry Game: Towards Self-recognition in Chatbots.
« Reply #10 on: March 16, 2020, 02:15:04 pm »
Quote
Ok I think I understand. This brings one last question: could you please help me find a few implementable descriptions of human thought processes?
That is the question AI researchers have been trying to answer since forever. I won't pretend to have all the answers, or that all people would agree with my descriptions. Some of the more easily identifiable thought processes are logical inference, generalisation, and association. The last of these we see in biology-inspired artificial neural networks; the others are used in the inference engines of expert systems, for example. These are crude approximations, because we lack detail about human thought processes, but they are recognisable from source code and/or an explanation thereof. People have always taken an "I'll recognise it when I see it" attitude towards AI, regardless of the multitude of proposed tests.
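The logical-inference part is the easiest to show in code. Here is a minimal sketch (Python; the toy rule base is my own, not from any real expert system) of forward chaining, the kind of process that is recognisable from source code:

```python
# Forward-chaining inference: apply rules to known facts until
# no new conclusions can be derived. Each rule is (premises, conclusion).
RULES = [
    ({"retains_heat(CO2)", "more(CO2)"}, "hotter(climate)"),
    ({"hotter(climate)"}, "melts(ice)"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire any rule whose premises are all known,
    adding its conclusion, until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = forward_chain({"retains_heat(CO2)", "more(CO2)"})
assert "hotter(climate)" in facts and "melts(ice)" in facts
```

Crude, as said, but the mechanism is plainly visible in the source, which is exactly the point about inspecting code rather than behaviour.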
CO2 retains heat. More CO2 in the air = hotter climate.

 

