What were you thinking?!

toborguy

What were you thinking?!
« on: February 16, 2019, 09:19:02 pm »
Time to review a conversation I had with Harry Workman, an intelligent agent, ten years ago. To test his deductive, inductive, and abductive reasoning using the rules of inference from the Square of Opposition, I had asked him to express his reasoning behavior so I could find out how he had made an invalid inference in a prior conversation.

He had already been told "if a person is mean, then that person is ugly."

Harry : hello Tom.
Tom : charlie is a human.
Harry : I will remember charlie is a person.
Tom : chad is a human.
Harry : I will remember chad is a person.
Tom : charlie is mean.
Harry : I will remember charlie is mean.
Harry : using affirm particular, I deduced remember some humans are mean.
Harry : I will remember some humans are mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Harry : using affirm universal, I induced remember humans are mean.
Harry : I will remember humans are mean.
Harry : using modus ponens, I deduced remember charlie is ugly.
Harry : I will remember charlie is ugly.
Harry : using affirm particular, I deduced remember some humans are ugly.
Harry : I will remember some humans are ugly.
Harry : using deny negative universal, I induced forget humans are not ugly.
Harry : I will forget that humans are not ugly.
Harry : using affirm universal, I induced remember humans are ugly.
Harry : I will remember humans are ugly.
Harry : using reverse modus ponens, I abduced remember charlie is possibly mean.
Harry : I will remember charlie is possibly mean.
Harry : using remove uncertainty, I deduced forget charlie is possibly mean.
Harry : I will forget that charlie is possibly mean.
Tom : chad is not mean.
Harry : I will remember chad is not mean.
Harry : using negate particular, I deduced remember some humans are not mean.
Harry : I will remember some humans are not mean.
Harry : using deny affirmative universal, I induced forget humans are mean.
Harry : I will forget that humans are mean.
Harry : using affirm negative universal, I induced remember humans are not mean.
Harry : I will remember humans are not mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Tom : describe charlie.
Harry : charlie is a mean ugly human.
Tom : describe chad.
Harry : chad is not mean.
Harry : chad is a person.
Tom : describe humans.
Harry : some humans are mean.
Harry : some humans are ugly.
Harry : humans are ugly.
Harry : some humans are not mean.

There are several uses for an explanation function in an intelligent agent, and many ways to express behaviors.  I found this one useful for this task.
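
Here is a rough Python sketch of how rules like these might be applied while narrating each step. The rule names come from Harry's vocabulary above, but the triple-store representation and the code itself are only illustrative; they are not Harry's actual implementation.

# A rough sketch, not Harry's actual code: facts are stored as
# (subject, is_positive, predicate) triples and rules as (antecedent, consequent).
kb = set()
rules = [("mean", "ugly")]      # "if a person is mean, then that person is ugly"

def say(verb, fact, rule):
    subj, positive, pred = fact
    copula = "are" if subj.endswith("humans") else "is"
    neg = "" if positive else "not "
    print(f"Harry : using {rule}, I {verb} {subj} {copula} {neg}{pred}.")

def tell(subj, positive, pred):
    kb.add((subj, positive, pred))
    # affirm/negate particular: one mean human yields "some humans are (not) mean"
    particular = ("some humans", positive, pred)
    if particular not in kb:
        kb.add(particular)
        say("deduced remember", particular,
            "affirm particular" if positive else "negate particular")
    # deny the contrary universal, then (riskily) induce the matching universal
    contrary = ("humans", not positive, pred)
    if contrary in kb:
        kb.discard(contrary)
        say("induced forget", contrary,
            "deny negative universal" if positive else "deny affirmative universal")
    universal = ("humans", positive, pred)
    if universal not in kb:
        kb.add(universal)
        say("induced remember", universal,
            "affirm universal" if positive else "affirm negative universal")
    # modus ponens against the rule base: mean -> ugly
    for antecedent, consequent in rules:
        if positive and pred == antecedent:
            say("deduced remember", (subj, True, consequent), "modus ponens")
            tell(subj, True, consequent)

tell("charlie", True, "mean")
tell("chad", False, "mean")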

toborguy

Re: What were you thinking?!
« Reply #1 on: February 18, 2019, 10:42:56 pm »
The agent sees the keyword "stomachache" and dons the diagnostician's hat. At the end of the exchange the user asks for an explanation, doesn't like the answer, and asks for more.

Q: I have a stomachache.
A: Who else has a stomachache?
Q: mom, dad, and sis.
A: Did you all eat at the same place recently?
Q: Yes, we ate a buffet dinner together.
A: What did you eat?
Q: oyster and salad.
A: What did mom eat?
Q: oyster, beef, salad and noodles.
A: What did dad eat?
Q: oyster and noodles.
A: What did sis eat?
Q: oyster and beef.
A: since everyone ate oyster, oyster is the probable cause of the stomachache.
Q: How did you arrive at this answer?
A: I used Mill’s Method of Agreement.
Q: list the steps you took.
A: I collected all the cases with the same effect, compared the effect to the rules to create a list of candidate rules, compared the candidate rules to the situation to find a relevant rule, compared the causal side of the rule to each case to create a list of possible causes and compared the possible causes for each case to find a common cause for all cases.

I like the answer better than the prior example because it sounds more human, but it lacks detail.  Apparently, this user didn't need more details.
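
Mill's Method of Agreement boils down to intersecting the circumstances of every case that shows the effect. A small Python sketch using the foods from the dialogue above; the data layout (a set of foods per case) is my own assumption for illustration, not the agent's implementation.

# A sketch of Mill's Method of Agreement using the cases from the dialogue.
from functools import reduce

cases = {
    "me":  {"oyster", "salad"},
    "mom": {"oyster", "beef", "salad", "noodles"},
    "dad": {"oyster", "noodles"},
    "sis": {"oyster", "beef"},
}

# Keep only the circumstances common to every case that shows the effect.
common = reduce(set.intersection, cases.values())
suspects = ", ".join(sorted(common))
print(f"since everyone ate {suspects}, {suspects} is the probable cause of the stomachache.")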

toborguy

Re: What were you thinking?!
« Reply #2 on: February 20, 2019, 08:54:13 pm »
 If an agent can't answer the question “why” with the standard answers, then he initiates an investigation to see if he can figure it out. This is an example of that process.

Given these rules learned from prior observations and generalizations:
if it rains on a person then that person is wet.
if a person is wet then that person is cranky.
if a person showers then that person is inside.
if a person showers then that person is wet.
if a person showers then the shower floor is wet.
either a person is outside or that person is inside.

These clues are provided by the tester, if asked: tom is cranky, tom is a person, tom is outside, and the shower floor is not wet.

T: Why is Tom cranky?
R: Is Tom a person?
T: Yes, he is a person.
R: Is Tom wet?
T: I don't know.
R: Did it rain on Tom?
T: I don't know.
R: Did Tom shower?
T: I don't know.
R: Is Tom outside?
T: Yes, Tom is outside.
R: Is the shower floor wet?
T: No, the shower floor is not wet.
R: Possibly it rained on Tom and he got wet and that's why he is cranky.
T: Using the rules of logic, describe how you came to this conclusion.
R: First I substituted "tom" with "person" in all the conditions.
Then, using these rules from abductive reasoning
1. reverse modus ponens: the premise "the person is wet" and the premise "if it rained on the person then the person is wet" infers "possibly it rained on the person."
2. reverse modus ponens: the person is wet plus if a person showered then the person is wet therefore possibly the person showered.
3. reverse hypothetical syllogism: the person is cranky plus if it rained on the person then the person is wet plus if the person is wet then the person is cranky therefore possibly it rained on the person.
I inferred the following explanation candidates:
1. possibly it rained on the person or
2. possibly the person showered.
Finally, using these rules from deductive reasoning
1. disjunctive syllogism: the person is outside plus either the person is outside or the person is inside therefore the person is not inside
2. modus tollens: the shower floor is not wet plus if the person showered then the floor is wet therefore the person showered is not true.
3. modus tollens: the person is not inside plus if the person showers then the person is inside therefore the person showered is not true.
This left "possibly it rained on the person."
I substituted "person" with "tom" then
I determined "it rained on tom" explains all the conditions.
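
The abduce-then-eliminate pattern in that trace can be sketched in a few lines of Python: reverse modus ponens proposes candidate causes whose consequences chain to the observed effect, and modus tollens discards any candidate with a consequence known to be false. The rules and facts below come from the example, but the string representation and functions are invented for illustration, not the agent's code.

# A sketch only: abduction of candidate causes, then deductive elimination.
rules = [                       # (cause, effect)
    ("it rained on the person", "the person is wet"),
    ("the person is wet",       "the person is cranky"),
    ("the person showered",     "the person is inside"),
    ("the person showered",     "the person is wet"),
    ("the person showered",     "the shower floor is wet"),
]
facts   = {"the person is cranky", "the person is outside"}
denials = {"the shower floor is not wet"}

def negate(s):
    return s.replace(" is not ", " is ") if " is not " in s else s.replace(" is ", " is not ")

# disjunctive syllogism: outside, and either outside or inside, so not inside
if "the person is outside" in facts:
    denials.add("the person is not inside")

def explains(cause, effect, depth=3):
    # reverse modus ponens / reverse hypothetical syllogism: does this cause
    # chain forward to the observed effect?
    if depth == 0:
        return False
    return any(c == cause and (e == effect or explains(e, effect, depth - 1))
               for c, e in rules)

effects = {e for _, e in rules}
candidates = {c for c, _ in rules
              if c not in effects and explains(c, "the person is cranky")}

# modus tollens: drop any candidate with a consequence that is known to be false
survivors = [c for c in candidates
             if not any(negate(e) in denials for cc, e in rules if cc == c)]

print("possible explanations:", survivors)   # ['it rained on the person']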

I like this explanation better than the previous two.  It appears readable and detailed. But is it complete?

Motivated by the desire to express more of the background, the I.A. might say:
This is Toborguy's expression of Charles Sanders Peirce's Abduction, defined by Peirce as "inference to the best explanation".

Motivated by the desire to entertain the user the I.A. might say:
We thank Arthur Conan Doyle for giving his famous investigator, Sherlock Holmes, this process for solving cases.

However, the more important use of explanations will come with metacognition...

toborguy

Re: What were you thinking?!
« Reply #3 on: February 21, 2019, 09:34:18 pm »
Our friends at Wikipedia, in addressing metacognition, imply that it is limited to the field of education. The article was probably written by an educator, not a psychologist. Well, I disagree. I think metacognition is used by humans in every aspect of life, and it is the primary tool for making changes in thinking and behavior, not only for oneself but for understanding and influencing others. Our ability to describe and evaluate our thoughts is an important aspect of this tool. Perhaps our development of this ability, plus emotion-based motivation, will lead us to a self-determining, self-evolving I.A.

As a starting point for this development I offer the ODEPC (pronounced oh-deep-sea) outline. It needs much more work, but my years are numbered and I probably will not finish it, so I offer it to any young "stud" who may take an interest. A rough code sketch of the loop follows the outline below.

Observe: given an event, assess its impact.

  • For a planned event, if there are unexpected or undesirable results, submit event for explanation.
  • For an unplanned event, if it is an unusual, unexpected or undesirable event, submit event for explanation.

Describe: Given an event, provide relevant event information.

  • Collect some pre- and post-events surrounding the target event from the event stream.
  • Recall related memories of collected events.

Explain: Given an effect, find the cause.

  • If this has happened before, then the possible causes are known.
  • If this is the first time this happened, but something similar happened before, then possibly the same or similar causes are responsible.
  • If this is the first time this happened and nothing similar has happened before, then find and test recent events that could be responsible.

Predict: Given a causal event, determine what effect may be expected.

  • If this has happened before, then the known effects can be expected.
  • If this is the first time this happened, but something similar happened before, then possibly the same or similar effects can be expected.
  • If this is the first time this happened and nothing similar has happened before, then look at subsequent effects for a possible correlation.

Plan: Given a desired effect, determine what events will cause that effect.

  • If the effect was successfully achieved before, then reuse the previous plan.
  • If the effect was successfully achieved before and multiple plans are known, choose the best plan.
  • If this is the first time this effect has been sought, then determine which sets of events can cause the desired effect. Choose the best set of actions for the current constraints.
Plan: Given an undesired effect, determine what events will prevent that effect.

  • If the effect was successfully avoided before, then reuse the previous plan.
  • If the effect was successfully avoided before and multiple plans are known, choose the best plan.
  • If this is the first time this effect has been encountered, then determine which sets of events can prevent the effect. Choose the best set of actions for the current constraints.

Control: Given a plan, assure its success.

  • Establish the criteria for success of the plan.
  • Initiate the plan.
  • If variations from the criteria for success are observed, then revise the plan.
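
To make the outline a little more concrete, here is a hypothetical Python skeleton of the ODEPC loop. Every name in it is invented for illustration, and the later steps are only stubs showing where each piece of the outline would go.

# A hypothetical skeleton only: not an existing implementation of ODEPC.
class ODEPC:
    def __init__(self):
        self.event_stream = []          # observed events, newest last
        self.memory = {}                # event -> previously learned causes/effects

    def observe(self, event, planned=True, unexpected=False,
                undesirable=False, unusual=False):
        # Observe: assess impact; submit for explanation when warranted.
        self.event_stream.append(event)
        if (planned and (unexpected or undesirable)) or \
           (not planned and (unusual or unexpected or undesirable)):
            return self.explain(self.describe(event))
        return None

    def describe(self, event, window=3):
        # Describe: collect surrounding events plus related memories.
        i = self.event_stream.index(event)
        context = self.event_stream[max(0, i - window): i + window + 1]
        return {"event": event, "context": context,
                "related": self.memory.get(event, [])}

    def explain(self, description):
        # Explain: known causes first, then similar cases, then recent candidates.
        event = description["event"]
        if event in self.memory:
            return self.memory[event]
        similar = [e for e in self.memory if self.similar(e, event)]
        if similar:
            return [self.memory[e] for e in similar]
        return description["context"]   # candidate events still to be tested

    def predict(self, cause):
        # Predict: known effects, similar cases, or watch for new correlations.
        return self.memory.get(cause, [])

    def plan(self, effect, avoid=False):
        # Plan: reuse or choose the best known plan, else search for causes of
        # (or preventers of) the effect under current constraints.
        raise NotImplementedError

    def control(self, plan):
        # Control: set success criteria, initiate, revise on variation.
        raise NotImplementedError

    def similar(self, a, b):
        # Placeholder similarity test between two events.
        return False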




 

