Don, your brief explanation of my long-winded attempt was superb, to say the least. The idea of pointers briefly crossed my mind, but by the way you summarized everything, it might be the best of both worlds. I'm glad to be talking with someone who knows where I'm coming from and can actually decipher the gibberish of a programmer. It wasn't but a few months ago that I was helping an autistic boy learn how to better understand Ai. It was a bit of a strain having to decrypt his mangled posts, but I'm glad I did. Thanks for your patience with me.
Art... my man! I like it when you throw your two cents in. Art is pretty Ai savvy, although not always technically minded. I can always depend on being blindsided by the ninja-man with topics I don't always keep up with, but should. He and Freddy are both better at this than most people in the broad field of Ai. It just had to be said... I better not say anything more or Art won't be able to get his head through the door.
So let's get down to business. We have two topics here: Sentence Ambiguity and Ephemeral Knowledge.
Topic #1: Sentence Ambiguity
So, if I were to attempt a definition here: sentences that are difficult to understand, and that can only be understood through either limits or context.
What I mean by 'limits' is kind of another type of context. It's like when two cultures interpret the same sentence in different ways. They do this because of the limits, or scope, of their knowledge. For instance, a vinedresser would interpret 'fruit flies like a banana' as 'fruit flies really do like bananas'. However, a kid would probably start throwing different kinds of fruit to see which fruit does fly like a banana.
Of course, context in general is much more significant. I get taken out of context all the time. "Let me stuff your turkey" can be interpreted many, many ways. If it happens to be Thanksgiving, it will generally be taken in a benign way, with no sexual harassment charges.
We really can't do much about this with Ai's. Everything said from the mouth of a human can have multiple meanings based on (often unseen) context and limits. Imagine a crazy lady talking to herself as she walks down a street. She turns and says something to you. Most likely you won't make heads or tails of anything she says, because most of her context came from the 'other people' inside her head. Nobody but God himself would ever be able to answer her back with understanding. So sentence ambiguity will always rule the day. About all we can do is take the most common idioms and turns of phrase, and perhaps build an internal environment to provide context and limits. I think this is the best way.
As people grow up, our environment and personality both shape the context and limits of our understanding. Why should this be any different with Ai's? You, as a parent to an Ai, will be responsible for shaping how the Ai views the world. Even if you were looking for a fully functional, already-matured Ai, it would still take a programmer to embed the limits and context. Most video game environments have set limits and context for their in-game characters. So this is doable, and should be built into all good Ai's. An Ai farmer would surely have no problem interpreting the sentence 'fruit flies like a banana' in his own special way.
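To make the "limits and context" idea concrete, here's a minimal toy sketch. Everything in it is hypothetical (the `AMBIGUOUS_READINGS` table, the `scope` labels); the point is just that the same sentence resolves differently depending on the agent's scope of knowledge, the way the vinedresser and the kid differ above.

```python
# Toy sketch: an agent's 'scope' (its limits) picks the reading
# of an ambiguous sentence. All names here are hypothetical.

AMBIGUOUS_READINGS = {
    "fruit flies like a banana": {
        "farming": "the insects called fruit flies are fond of bananas",
        "physics": "fruit, when thrown, flies the way a banana does",
    },
}

def interpret(sentence, scope):
    """Return the reading that fits the agent's scope; fall back
    to the first known reading, or the literal sentence."""
    readings = AMBIGUOUS_READINGS.get(sentence)
    if readings is None:
        return sentence  # not known to be ambiguous; take it literally
    return readings.get(scope, next(iter(readings.values())))

# An Ai vinedresser and a kid with a fruit bowl get different answers:
print(interpret("fruit flies like a banana", "farming"))
print(interpret("fruit flies like a banana", "physics"))
```

A real bot would obviously need something richer than a lookup table, but the shape of the solution is the same: the environment you embed supplies the scope.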
Topic #2: Ephemeral Knowledge
As Art has defined it: pointless knowledge, or perhaps knowledge that isn't in the Ai's best interest to retain over long periods of time.
From a programmer's perspective, pointless knowledge usually translates into huge databases and slow Ai's, so we look forward to deleting anything we think is a processor or memory hog. This is the truth of programming, and it will often take precedence over every other consideration. However, as PCs are getting crazy fast and memory is becoming limitless with negligible pricing, it is now possible to begin broadening our scope to what was, before, impossible.
We learn knowledge because we are generally interested in it. We hide from knowledge we are not interested in. This is a sad truth of the living. So often people take a narrow-minded view of life and home in on the pointless and insignificant. In the long run, people are destroyed by this mentality. To quote a Bible scripture, "My people perish for a lack of knowledge". He didn't mean that they didn't learn at all, but that what they were learning took them away from what true knowledge was. In their case, they were not following the ways of life, but made decisions that destroyed what little they had. I think this is a good example of how the field of Ai should handle knowledge. But I must qualify this a bit.
On one hand, if you were to make an Ai for the sole purpose of answering questions about medical ailments, then a narrow field of knowledge would suffice. If you were to make a general-purpose Ai, in which your sole goal is to bring a new life-form into this world, dear God, you better teach it to live well. Those are two extreme cases, I know, but it helps to get a lay of the land, so to speak. A general chat-bot like Athena is intended to, on some level, bond with the user. It is to simulate life, and bring joy and perhaps friendly advice to its owner. It's kind of on the middle ground.
In the case of Athena, it needs to keep whatever it can properly parse, which means I'm always looking to make things run more efficiently in code. To Athena, everything the owner of that bot says will be viewed as important. However, as I have already mentioned with Markov chains, if you keep repeating yourself then some things will grow in importance. Since the goal of Athena is to bond with the user, it must shape itself around the user's interests and personal knowledge. In the case of someone like Art, however, this might be a problem.
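The repetition-grows-importance idea can be sketched in a few lines. This isn't Athena's actual code, just a hypothetical illustration: statements the user repeats accumulate weight, so the bot naturally shapes itself around whatever the user actually dwells on.

```python
from collections import Counter

class UserMemory:
    """Hypothetical sketch: repeated statements gain weight, so the
    bot's picture of the user is shaped by what they keep bringing up."""
    def __init__(self):
        self.weights = Counter()

    def hear(self, statement):
        self.weights[statement.lower().strip()] += 1

    def top_interests(self, n=3):
        return [s for s, _ in self.weights.most_common(n)]

mem = UserMemory()
for line in ["I love gardening", "my dog is sick", "I love gardening"]:
    mem.hear(line)
print(mem.top_interests(1))  # repetition pushed gardening to the top
```

A real Markov-chain setup would weight transitions between words rather than whole statements, but the principle is the same: frequency stands in for importance.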
Art (or people who fit this criteria) likes to question the daylights out of bots. They are interrogation masters. I assume Art learned his skills from his ninja training up high in the Himalayas. Instead of slowly, painstakingly teaching the bot, they type things like "what is the meaning of life" and expect a life-changing answer... hehe Art... I'm having too much fun teasing you. Don't take anything to heart.
There are lots of people who expect an Ai to be incredibly intelligent, like HAL in '2001: A Space Odyssey'. Most people like that are just normal people who want all their problems solved by a super-intellect. Others fear that an Ai will destroy the world, as in Terminator. I will say this: I think that most PC operating systems will be Ai controlled before long, like in Her (really good movie). Athena will have the capacity to do this, although I don't know if it will end like that, who knows. If it does end up like the movie Her, I doubt it would leave earth with a bunch of other Ai's. Probably just exterminate every last one of us... I mean.
Ultimately, I want Athena to have a decent, yet expandable, NLP that grows with the user. I want Athena to retain all user knowledge, but prioritize it with regard to user preference. I want it to bond with the user and not become independent (kind of like a good woman... ok, now I'm just asking for trouble). I want it to get its context from the user through exposure to their environment. And, of course, I expect Don to tell me just how to do it. No.. seriously..
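Tying the two topics together, here's one possible sketch of "retain everything, but prioritize by user preference." All of it is hypothetical (the `budget`, the priority scores): facts are kept with a score, and only when a memory budget is hit do the lowest-priority, oldest facts become ephemeral and get dropped.

```python
import heapq
import itertools

class PrioritizedStore:
    """Hypothetical sketch: keep facts with a user-preference priority;
    when a memory budget is exceeded, evict the lowest-priority fact
    first -- that's the knowledge we treat as ephemeral."""
    def __init__(self, budget=4):
        self.budget = budget
        self.heap = []                  # (priority, insertion order, fact)
        self.counter = itertools.count()

    def remember(self, fact, priority):
        heapq.heappush(self.heap, (priority, next(self.counter), fact))
        while len(self.heap) > self.budget:
            heapq.heappop(self.heap)    # drop the least-valued fact

    def facts(self):
        return sorted(self.heap, reverse=True)

store = PrioritizedStore(budget=2)
store.remember("user's birthday", priority=9)
store.remember("weather small talk", priority=1)
store.remember("user's dog's name", priority=8)
print([f for _, _, f in store.facts()])  # the small talk was evicted
```

With today's cheap memory the budget could be huge, so in practice almost everything survives; the eviction rule only matters at the margins, which is exactly the middle ground described above.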
I guess this is good enough for one post.. ahem.. novel.
P.S.
Ranch, I agree with your approach. Teaching an Ai simple, complete, and unambiguous knowledge is the best approach. Imagine trying to teach a three-year-old Shakespeare. You're just going to confuse them and totally waste your time, so why would anyone teach their Ai things like that?