Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: toborguy on February 16, 2019, 09:19:02 pm

Title: What were you thinking?!
Post by: toborguy on February 16, 2019, 09:19:02 pm
Time to review a conversation I had with Harry Workman, an intelligent agent, ten years ago. To test his Deductive, Inductive, and Abductive reasoning ability using the rules of inference from the Square of Opposition, I had asked him to express his reasoning behavior so I could find out how he had made an invalid inference in a prior conversation.

He had already been told "if a person is mean, then that person is ugly."

Harry : hello Tom.
Tom : charlie is a human.
Harry : I will remember charlie is a person.
Tom : chad is a human.
Harry : I will remember chad is a person.
Tom : charlie is mean.
Harry : I will remember charlie is mean.
Harry : using affirm particular, I deduced remember some humans are mean.
Harry : I will remember some humans are mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Harry : using affirm universal, I induced remember humans are mean.
Harry : I will remember humans are mean.
Harry : using modus ponens, I deduced remember charlie is ugly.
Harry : I will remember charlie is ugly.
Harry : using affirm particular, I deduced remember some humans are ugly.
Harry : I will remember some humans are ugly.
Harry : using deny negative universal, I induced forget humans are not ugly.
Harry : I will forget that humans are not ugly.
Harry : using affirm universal, I induced remember humans are ugly.
Harry : I will remember humans are ugly.
Harry : using reverse modus ponens, I abduced remember charlie is possibly mean.
Harry : I will remember charlie is possibly mean.
Harry : using remove uncertainty, I deduced forget charlie is possibly mean.
Harry : I will forget that charlie is possibly mean.
Tom : chad is not mean.
Harry : I will remember chad is not mean.
Harry : using negate particular, I deduced remember some humans are not mean.
Harry : I will remember some humans are not mean.
Harry : using deny affirmative universal, I induced forget humans are mean.
Harry : I will forget that humans are mean.
Harry : using affirm negative universal, I induced remember humans are not mean.
Harry : I will remember humans are not mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Tom : describe charlie.
Harry : charlie is a mean ugly human.
Tom : describe chad.
Harry : chad is not mean.
Harry : chad is a person.
Tom : describe humans.
Harry : some humans are mean.
Harry : some humans are ugly.
Harry : humans are ugly.
Harry : some humans are not mean.

There are several uses for an explanation function in an intelligent agent, and many ways to express behaviors.  I found this one useful for this task.
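For anyone who wants to tinker with these inference rules, here is a minimal Python sketch of two of them, "affirm particular" (deduction) and "affirm universal" (induction). It is not Harry's actual implementation; the triple-based memory and the function names are my own simplification.

memory = set()

def remember(fact):
    memory.add(fact)
    print("I will remember", fact)

def affirm_particular(individual, kind, trait):
    # Deduction: an individual of a kind has a trait, so "some <kind> are <trait>".
    if (individual, kind, True) in memory and (individual, trait, True) in memory:
        remember(("some " + kind, trait, True))

def affirm_universal(kind, trait):
    # Induction: "some <kind> are <trait>" is remembered and no remembered member
    # of the kind lacks the trait, so generalize to "<kind> are <trait>".
    # As the trace shows, this is defeasible: "chad is not mean" later forces a retraction.
    has_some = ("some " + kind, trait, True) in memory
    members = [s for (s, k, v) in memory if k == kind and v]
    counterexample = any((m, trait, False) in memory for m in members)
    if has_some and not counterexample:
        remember((kind, trait, True))

remember(("charlie", "humans", True))
remember(("charlie", "mean", True))
affirm_particular("charlie", "humans", "mean")   # some humans are mean
affirm_universal("humans", "mean")               # humans are mean (until a counterexample)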
Title: Re: What were you thinking?!
Post by: toborguy on February 18, 2019, 10:42:56 pm
The agent sees the keyword "stomachache" and dons the diagnostician's hat. At the conclusion, the user asks for an explanation but doesn't like the answer and asks for more.

Q: I have a stomachache.
A: Who else has a stomachache?
Q: mom, dad, and sis.
A: Did you all eat at the same place recently?
Q: Yes, we ate a buffet dinner together.
A: What did you eat?
Q: oyster and salad.
A: What did mom eat?
Q: oyster, beef, salad and noodles.
A: What did dad eat?
Q: oyster and noodles.
A: What did sis eat?
Q: oyster and beef.
A: since everyone ate oyster, oyster is the probable cause of the stomachache.
Q: How did you arrive at this answer?
A: I used Mill’s Method of Agreement.
Q: list the steps you took.
A: I collected all the cases with the same effect, compared the effect to the rules to create a list of candidate rules, compared the candidate rules to the situation to find a relevant rule, compared the causal side of the rule to each case to create a list of possible causes and compared the possible causes for each case to find a common cause for all cases.

I like the answer better than the prior example because it sounds more human, but it lacks detail.  Apparently, this user didn't need more details.
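As a rough illustration of the steps listed above, here is a small Python sketch of Mill's Method of Agreement. It is not the agent's actual code; the cases are simply the meals from the dialogue.

cases = {                       # everyone who has the stomachache and what they ate
    "you": {"oyster", "salad"},
    "mom": {"oyster", "beef", "salad", "noodles"},
    "dad": {"oyster", "noodles"},
    "sis": {"oyster", "beef"},
}

def method_of_agreement(cases):
    # Keep only the factors present in every case that shares the effect.
    return set.intersection(*cases.values())

print(method_of_agreement(cases))   # {'oyster'} -> the probable cause of the stomachache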
Title: Re: What were you thinking?!
Post by: toborguy on February 20, 2019, 08:54:13 pm
 If an agent can't answer the question “why” with the standard answers, then he initiates an investigation to see if he can figure it out. This is an example of that process.

Given these rules learned from prior observations and generalizations:
if it rains on a person then that person is wet.
if a person is wet then that person is cranky.
if a person showers then that person is inside.
if a person showers then that person is wet.
if a person showers then the shower floor is wet.
either a person is outside or that person is inside.

These clues are provided by the tester, if asked: tom is cranky, tom is a person, tom is outside, and the shower floor is not wet.

T: Why is Tom cranky?
R: Is Tom a person?
T: Yes, he is a person.
R: Is Tom wet?
T: I don't know.
R: Did it rain on Tom?
T: I don't know.
R: Did Tom shower?
T: I don't know.
R: Is Tom outside?
T: Yes, Tom is outside.
R: Is the shower floor wet?
T: No, the shower floor is not wet.
R: Possibly it rained on Tom and he got wet and that's why he is cranky.
T: Using the rules of logic, describe how you came to this conclusion.
R: First I substituted "tom" with "person" in all the conditions.
Then, using these rules from abductive reasoning
1. reverse modus ponens: the premise "the person is wet" and the premise "if it rained on the person then the person is wet" infers "possibly it rained on the person."
2. reverse modus ponens: the person is wet plus if a person showered then the person is wet therefore possibly the person showered.
3. reverse hypothetical syllogism: the person is cranky plus if it rained on the person then the person is wet plus if the person is wet then the person is cranky therefore possibly it rained on the person.
I inferred the following explanation candidates:
1. possibly it rained on the person or
2. possibly the person showered.
Finally, using these rules from deductive reasoning
1. disjunctive syllogism: the person is outside plus either the person is outside or the person is inside therefore the person is not inside
2. modus tollens: the shower floor is not wet plus if the person showered then the floor is wet therefore the person showered is not true.
3. modus tollens: the person is not inside plus if the person showers then the person is inside therefore the person showered is not true.
This left "possibly it rained on the person."
I substituted "person" with "tom" then
I determined "it rained on tom" explains all the conditions.

I like this explanation better than the previous two.  It appears readable and detailed. But is it complete?
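For readers who want to see the candidate-and-prune pattern in code, here is a minimal Python sketch. It is not the actual program; the propositions are plain strings, reverse modus ponens proposes the "possibly ..." candidates, and modus tollens (together with the disjunctive-syllogism fact that the person is not inside) eliminates the candidate that contradicts the clues.

rules = [                        # (cause, effect), read "if cause then effect"
    ("it rained on the person", "the person is wet"),
    ("the person is wet", "the person is cranky"),
    ("the person showered", "the person is inside"),
    ("the person showered", "the person is wet"),
    ("the person showered", "the shower floor is wet"),
]
effects = {eff for _, eff in rules}

# Known clues; "the person is inside" is false by disjunctive syllogism (the person is outside).
facts_false = {"the shower floor is wet", "the person is inside"}

def abduce(observation):
    # Reverse modus ponens / reverse hypothetical syllogism: chain backwards from
    # the observation and collect the root causes as "possibly ..." candidates.
    candidates, frontier, seen = set(), {observation}, set()
    while frontier:
        e = frontier.pop()
        if e in seen:
            continue
        seen.add(e)
        for cause, eff in rules:
            if eff == e:
                (frontier if cause in effects else candidates).add(cause)
    return candidates

def prune(candidates):
    # Modus tollens: drop any candidate that implies something known to be false.
    return {c for c in candidates
            if not any(cause == c and eff in facts_false for cause, eff in rules)}

print(prune(abduce("the person is cranky")))   # {'it rained on the person'}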

Motivated by the desire to express more of the background, the I.A. might say:
This is Toborguy's expression of Charles Sanders Peirce's Abduction, defined by Peirce as "inference to the best explanation".

Motivated by the desire to entertain the user, the I.A. might say:
We thank Arthur Conan Doyle for giving his famous investigator, Sherlock Holmes, this process for solving cases.

However, the more important use of explanations will come with metacognition...
Title: Re: What were you thinking?!
Post by: toborguy on February 21, 2019, 09:34:18 pm
Our friends at Wikipedia, in addressing metacognition, imply that it is limited to the field of education. Probably written by an educator, not a psychologist. Well, I disagree. I think metacognition is used by humans in every aspect of life, and is the primary tool for making thinking and behavioral changes, not only for oneself but for understanding and influencing others. Our ability to describe and evaluate our thoughts is an important aspect of this tool. Perhaps our development of this ability, plus emotion-based motivation, will lead us to achieve a self-determining, self-evolving I.A.

As a starting point for this development I offer the ODEPC (pronounced oh-deep-sea) outline. It needs much more work, but my years are numbered and I will probably not finish it. I offer it to any young "stud" who may take an interest. A rough interface sketch follows the outline below.

Observe: Given an event, assess its impact.

Describe: Given an event, provide relevant event information.

Explain: Given an effect, find the cause.

Predict: Given a causal event, determine what effect may be expected.

Plan: Given a desired effect, determine what events will cause that effect.

Plan: Given an undesired effect, determine what events will prevent that effect.
Control: Given a plan, assure its success.
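One possible reading of the outline as code is an abstract Python interface with one method per capability; this is only a starting sketch, not a finished design.

from abc import ABC, abstractmethod

class ODEPCAgent(ABC):
    """One hypothetical rendering of the ODEPC outline as an agent interface."""

    @abstractmethod
    def observe(self, event):
        """Given an event, assess its impact."""

    @abstractmethod
    def describe(self, event):
        """Given an event, provide relevant event information."""

    @abstractmethod
    def explain(self, effect):
        """Given an effect, find the cause."""

    @abstractmethod
    def predict(self, cause):
        """Given a causal event, determine what effect may be expected."""

    @abstractmethod
    def plan(self, effect, desired=True):
        """Given a desired (or undesired) effect, find events that cause (or prevent) it."""

    @abstractmethod
    def control(self, plan):
        """Given a plan, assure its success."""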
Title: Re: What were you thinking?!
Post by: toborguy on May 20, 2019, 10:50:37 pm
This subject has been added to the new BLOG page linked from Harry Workman's web page at mindmap.iwarp.com
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 21, 2019, 04:17:31 am
This is one of those things I call 'to-do-list elongators'... this will take days just to revise my plans... nice work
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 21, 2019, 12:14:23 pm
Explain more, tobor MAN. Is this your life's work!? This is already deeply part of my AGI plan, and I am 23 y/o. However, I still learned a ton from it. Are you dying!?

Santa exists; this is better than Christmas.
Title: Re: What were you thinking?!
Post by: Art on May 21, 2019, 02:34:05 pm
Lock,

What kind of question is that?

Everyone is Dying! Some faster or sooner than others. Nothing & no one is forever.

Title: Re: What were you thinking?!
Post by: goaty on May 21, 2019, 06:24:09 pm
If a person is mean to a thief, that person is ugly.
Title: Re: What were you thinking?!
Post by: toborguy on May 21, 2019, 08:01:56 pm
Yes, this is my two-lifetime project. I'm 79 now and I will die someday. I'm hoping someone will take an interest in my work and pick it up when I'm finished with life one. I have spent many years pulling this information into the web site and the Harry Workman program.
Title: Re: What were you thinking?!
Post by: goaty on May 22, 2019, 12:47:44 am
What a hopeless situation life is, and we just accept it as normal... I hate it.

Hey gramps, nice seeing you're still active doing your code!   :2funny:
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 22, 2019, 12:49:21 am
And I'm the young stud that will pick up his workman...
Title: Re: What were you thinking?!
Post by: toborguy on May 22, 2019, 04:18:56 am
Glad to hear there's some interest. Have you downloaded the mindmap diagram? It is the guide to future work. I'm currently coding the initial emotion-related functions. The current design includes the Condition-Emotion=Response Cycle.
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 22, 2019, 04:22:10 am
Yeah I made a folder of your site like, 1.5 years back. Good enough? Ah I see you revised some.

What's your opinion on the ANNs? I mean, you have worked on a program all your life that has no neural network or backprop, right? Why?
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 22, 2019, 04:30:13 am
Can you upload the map? It doesn't seem to load...
Title: Re: What were you thinking?!
Post by: toborguy on May 22, 2019, 07:56:14 pm
ANNs: my program environment is designed for proof-of-concept development, not production. This means that single-channel, low-volume data with slow response times is sufficient for testing, so I don't have to use ANN technology at this time. However, when we start work on the metacognition (ODEPC) functions, there will be a requirement for the program to do self-editing. This may be better addressed with ANNs due to their inherent self-editing capabilities.

I'll see if I can get you a jpg file of the mindmap diagram.
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 28, 2019, 08:58:46 am
A B C D occur together with w x y z
A E F G occur together with w t u v
——————————————————
Therefore A is the cause, or the effect, of w.

^

Also, I want to add that A=w may be true. This is how GloVe/word2vec works.
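A tiny Python illustration of the schema above, using nothing more than set intersection over the two toy observations (it is not meant as a model of GloVe or word2vec):

obs1 = {"A", "B", "C", "D", "w", "x", "y", "z"}
obs2 = {"A", "E", "F", "G", "w", "t", "u", "v"}
print(sorted(obs1 & obs2))   # ['A', 'w'] -> the factors that always occur together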

By the way, I do have, and have analyzed, one large mindmap diagram from a year or so ago, but I'm not sure if there is another or an updated one. See attachment.
Title: Re: What were you thinking?!
Post by: toborguy on May 29, 2019, 10:20:40 pm
Your diagram is almost the same. The changes were minor.
Title: Re: What were you thinking?!
Post by: Don Patrick on May 30, 2019, 08:05:25 am
A B C D occur together with w x y z
A E F G occur together with w t u v
——————————————————
Therefore A is the cause, or the effect, of w.
The problem with this, and with neural networks and statistics in general, is that it does not distinguish between causation and correlation. When you look out the window and the sun is out, the sun is not out because you are looking out the window, nor do you look out the window because the sun is out; it just so happens that you're typically not awake when it's dark.
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 30, 2019, 08:54:07 am
I have great answers below, must see!

What you quoted basically says we have 2 observations where A & W occur at the same Time. And there are 4 items, not 2, so it doesn't fit the Look / Sun | Sun / Look statement you said. Oh, see below, never mind.

As for the Sun/Look statement, I have an answer. If you have 2 observations of them both occurring at the same Time (ah, I see how you saw this!), Look / Sun | Sun / Look, then it means that either one or the other or both follow each other (cause & effect), or that they are the same thing. However, if we took enough intelligent observations, we WOULD see that Look doesn't always have 'Sun' follow, if it's night. Same for Sun, when one decides not to Look. However, let's say the sun is always out, and you always look. Now we can say Sun>Looks, and Looks>Sun. However, you have fallen into the wrong idea; rather, looking does cause the sun to be seen, and the sun being there does cause the looking. So yes, Sun>Looks, and Looks>Sun. You put words in our mouths. No rule said it makes the sun 'be' there. In this case, it can be said that looking makes the sun SEEN, and SEEN causes the looking!

Also, cause & effect is not the best terminology. A fish swam by a rock. The best terminology is Sequences: one follows the other. At the rock the fish swam by. The fish swam by the rock. The rock slid by the fish. Near the fish the rock slid by.
Title: Re: What were you thinking?!
Post by: toborguy on May 30, 2019, 06:15:28 pm
What is the value of observing that two or more events, objects, or ideas have something in common (time, location, similar characteristics) and others do not? Your observation makes them candidates for further investigation. Is there a correlation, and what is the correlation? This investigation may or may not lead to a new discovery, but without the observation of a possible correlation nothing would be learned. The observations will not prove the correlation without additional corroboration, but additional observations may lead to probable correlations. The use of Mill's Methods, for instance, may lead us to identify the cause and the effect with high probability.

Just my two cents. ;)
Title: Re: What were you thinking?!
Post by: Don Patrick on May 31, 2019, 08:02:04 am
if we took enough intelligent observations, we WOULD see that Look doesn't always have 'Sun' Follow
And that is precisely what was missing from your examples, as well as the more important sequential aspect. Both are missing in the operation of neural networks.
Title: Re: What were you thinking?!
Post by: LOCKSUIT on May 31, 2019, 09:50:39 pm
Yes Don, that's correct.

Toborguy is very correct here, my grand master net design can do that too, it can find X fast, find a correlation fast, and update and look again to refine its relations. And is fully editable/ scalable. The design can also take samples of sequences and check if all are Black Birds or if some are white.
Title: Re: What were you thinking?!
Post by: Art on June 01, 2019, 01:15:41 pm
Yes Don, that's correct.

Toborguy is very correct here, my grand master net design can do that too, it can find X fast, find a correlation fast, and update and look again to refine its relations. And is fully editable/ scalable. The design can also take samples of sequences and check if all are Black Birds or if some are white.

Can you give us an example of this design and how it works?
Title: Re: What were you thinking?!
Post by: LOCKSUIT on June 01, 2019, 01:18:39 pm
Not until the staged release, as OpenAI calls it.
Title: Re: What were you thinking?!
Post by: LOCKSUIT on June 14, 2019, 04:41:49 am
Most of those boxes in toborguy's grand diagram (see post #16) are all the same thing, and most of them form 1 system, a grand master net hierarchy.