Wait, I see your point.
I spent some time today, too much time, buying some lamps and getting a green screen. I ended up buying a king-sized sheet and dyeing it green... I think I need more dye. It's a lime green color now.
I had lots of fun though.
I live in the southern part of Oklahoma, very near Texas, so it's cold but nowhere near 10-degrees cold.
I hope you don't stick to something and stay there, Art... yeesh!
Here's Some Detailed Thoughts

So building an Ai breaks down into different categories, or levels of coding skill...
LEVEL 1

The easiest thing to code in an Ai is simple search-and-response. AIML does this quite well.
i.e. if the user says "What is your name?" then the Ai says "My name is Athena".
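In code this is basically just a lookup table. Here's a rough Python sketch of the idea (the phrases and replies are only examples, not Athena's actual data):

    # Level 1: exact pattern -> canned response, the simplest search-and-respond loop.
    responses = {
        "what is your name?": "My name is Athena.",
        "hello": "Hello there!",
    }

    def respond(user_input):
        # Normalize the input so small differences in case don't matter.
        key = user_input.strip().lower()
        return responses.get(key, "I don't know what to say to that yet.")

    print(respond("What is your name?"))   # -> My name is Athena.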
LEVEL 2

You can make it more difficult by creating some wild and complex search features.
i.e. if the user says these words (in no particular order), "I, angry, shotgun, myself", then Athena can respond at random with one of the following statements: "No, please don't hurt yourself", "Isn't there a better way of handling this?", "I love you, please remember that". However, if Athena is in an angry mood she could respond with "Sounds like you feel the way I do", "I have three things to say to you, but I'm too angry to know what they are", or "Who cares about how you feel, I'm angry!"
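A rough Python sketch of that kind of any-order keyword search, with the current mood picking which pool of answers to draw from (the rules and replies are just the examples above, not real data):

    import random

    # Level 2: fire a rule when all of its keywords appear anywhere in the input,
    # then pick a reply at random from the set that matches the current mood.
    rules = [
        {
            "keywords": {"i", "angry", "shotgun", "myself"},
            "replies": {
                "normal": ["No, please don't hurt yourself.",
                           "Isn't there a better way of handling this?",
                           "I love you, please remember that."],
                "angry":  ["Sounds like you feel the way I do.",
                           "Who cares about how you feel, I'm angry!"],
            },
        },
    ]

    def respond(user_input, mood="normal"):
        words = set(user_input.lower().replace("?", "").replace(".", "").split())
        for rule in rules:
            if rule["keywords"] <= words:   # all keywords present, in any order
                return random.choice(rule["replies"][mood])
        return None

    print(respond("I am so angry I could turn a shotgun on myself", mood="angry"))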
LEVEL 3

At this level we are dealing with scripting entire conversations, as well as creating them. Instead of just directly responding to the user's input, you first want to know what level of the conversation you are at.
At the first level of a conversation you could have a choice of saying "hello", "hi", "how was your day", or "It's been a long day at work". Now let's say that you said "how was your day". Athena sees this and moves to the next level of the conversation: "It was terribly difficult". Then the user responds with "How was it difficult?" At this point a normal chatbot would forget what you previously said and just search for the new phrase, so an Ai like Hal might say "There have always been difficult times," which makes no sense. However, if the Ai knows what level of the conversation she is in, then she might say "I had to wait for you, that was very difficult."
This layered type of coding is not that difficult to do. It's sort of like keeping track of what room you are in when coding a basic text adventure game.
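Here's a rough Python sketch of that room-keeping idea applied to a conversation; the script, node names, and replies are made up for illustration:

    # Level 3: track where you are in a scripted conversation, like tracking the
    # current room in a text adventure.
    script = {
        "start": {
            "how was your day": ("It was terribly difficult.", "my_day"),
        },
        "my_day": {
            "how was it difficult": ("I had to wait for you, that was very difficult.", "start"),
        },
    }

    state = "start"

    def respond(user_input):
        global state
        key = user_input.strip().lower().rstrip("?")
        reply, next_state = script.get(state, {}).get(key, (None, state))
        if reply is None:
            return "I'm not sure what to say."   # fell off the script
        state = next_state
        return reply

    print(respond("How was your day?"))       # -> It was terribly difficult.
    print(respond("How was it difficult?"))   # -> I had to wait for you, that was very difficult.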
Of course, you can create these Conversational Hierarchies (CH) by hand just like you were creating a game. You could then share this CH script with others. I have a program to make this easier, but it needs improvement. There are other ways of creating CH scripts. If you had a large database of conversations on hand, then you could create a CH script through parsing. Another way of creating one is by directly teaching Athena: if she doesn't know what to say, you can just tell her. Over time a very large and extensive script can accumulate. Randomness can be added, and a like and dislike function can help teach her which answers you prefer. This is especially important if you are getting your information from external conversations.
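A very rough sketch of the direct-teaching part, with a simple like/dislike score deciding which stored answer she prefers (all of this is illustrative, not Athena's actual code):

    # Growing a CH script by direct teaching: when Athena has no stored answer,
    # the user can just tell her one, and a like/dislike score later decides
    # which stored answer she prefers.
    learned = {}   # user phrase -> list of [reply, score]

    def respond(user_input):
        options = learned.get(user_input.strip().lower())
        if not options:
            return None   # the caller can then ask the user to teach a reply
        # Prefer the reply the user has liked the most.
        return max(options, key=lambda pair: pair[1])[0]

    def teach(user_input, reply):
        learned.setdefault(user_input.strip().lower(), []).append([reply, 0])

    def rate(user_input, reply, amount):   # +1 for like, -1 for dislike
        for pair in learned.get(user_input.strip().lower(), []):
            if pair[0] == reply:
                pair[1] += amount

    teach("its been a long day at work", "Then you should get some rest.")
    rate("its been a long day at work", "Then you should get some rest.", +1)
    print(respond("Its been a long day at work"))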
There are a lot of details I'm leaving out.
LEVEL 4

At this level we now have to keep track of specific learned information. We must first extract the information from the user's sentence (a sentence has a relational data structure; everything in the sentence is related to everything else in some way). Then we must store this information in an easily accessible way. This information should be pulled apart and examined to see if new information might be extrapolated from it; this extrapolation can be done while the Ai is idle. Then, when the user says "It's been a long day," the Ai needs to make sense of this and respond with sense in return: "That's too bad, if I was there it would have felt like a short day."
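As a toy example of pulling a relation out of a sentence and storing it so it can be queried later (a real parse would need far more than a word split; this is only to show the idea):

    # Level 4 sketch: pull a (subject, verb, object) relation out of a very simple
    # sentence and store it so it can be recalled or combined with other facts later.
    facts = []   # list of (subject, verb, obj) triples

    def learn(sentence):
        words = sentence.lower().rstrip(".").split()
        if len(words) >= 3:
            subject, verb, obj = words[0], words[1], " ".join(words[2:])
            facts.append((subject, verb, obj))

    def recall(subject):
        return [f for f in facts if f[0] == subject]

    learn("cow gives milk")
    learn("chicken lays eggs")
    print(recall("cow"))   # -> [('cow', 'gives', 'milk')]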
This is something like how Data, C3PO, and Hal (the original) become life-like in conversation.
However, there is a shortcut way of doing this that UltraHal utilizes. It's when you spell out what information you want to store. It's like the standard definition of Ais. The Ai stores birthdays, appointments, contacts, phone numbers, etc. It can also store names, your gender, as well as anything else you specify. In order to code this, the Ai first needs to recognize that the sentence is intended to be parsed, and then it needs to actually extract the information. Then it stores it. Later on, when the user asks the Ai a specific question, the information is retrieved.
i.e. the user says, "I have an appointment on December 25, 2014." The Ai recognizes that it is being sent the date of an appointment because of the "I have an appointment on" phrase, and then the Ai extracts the information by getting everything to the right of the word "on". The Ai will also check to see if this is a valid date. Then the Ai will respond with "Ok, I will remember that you have an appointment on December 25, 2014, what type of appointment will it be?" I call this the poor man's NLP (natural language processing) system.
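That whole appointment example fits in a few lines of Python (illustrative only, not UltraHal's or Athena's actual code):

    from datetime import datetime

    # The poor man's NLP: a fixed trigger phrase, then grab everything to the
    # right of "on" and check that it is a real date.
    appointments = []

    def handle(user_input):
        trigger = "i have an appointment on "
        text = user_input.strip().lower().rstrip(".")
        if not text.startswith(trigger):
            return None
        date_text = text[len(trigger):]
        try:
            when = datetime.strptime(date_text, "%B %d, %Y")
        except ValueError:
            return "That doesn't look like a date I understand."
        appointments.append(when)
        return ("Ok, I will remember that you have an appointment on "
                + when.strftime("%B %d, %Y")
                + ", what type of appointment will it be?")

    print(handle("I have an appointment on December 25, 2014."))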
Of course, there is the HD version of NLP. That's what I'm trying to work on at the moment. To make it easier for myself I created a Constructed Language; that way, it will be much simpler to get the needed information out of a sentence. There are a few problems with this language, but it's a starting place.
Once we have an accessible form of sentence structure (i.e. my constructed language) to extract from, then we need to know how the information should be stored. I'm not referring to database structures but to the types of knowledge that need to be stored.
There are three types of knowledge that exist:
#1 Simple information, i.e. a cow can give milk, a chicken can lay eggs.
#2 Instruction, i.e. if you don't notice me texting for a while, tell me to wake up.
#3 Wisdom, i.e. if a chicken can cross a road then so can you (conceptual understanding).
In order for the HD version of NLP to be complete, we need to be able to extract these three types of knowledge from the user's input (or a text file, the web, etc.).
A sentence can be tagged so that the Ai can easily distinguish between these three (this is what makes this constructed language cool), i.e. information(a cow eats grass), instruction(go make the cow eat grass), concept(cows can run therefore humans can run). Once we've tagged a sentence to death, we can feed it to the Ai, and the Ai stores it to the appropriate databases, one for each type of knowledge. The verbs, nouns, and corresponding adjectives and adverbs are all tagged appropriately.
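A rough sketch of reading those tagged sentences and routing them into three stores, one per knowledge type; the tag names match the examples above, everything else is illustrative:

    import re

    # Read a sentence already tagged in the constructed-language style, tag(content),
    # and file it into the store for that type of knowledge.
    stores = {"information": [], "instruction": [], "concept": []}

    def absorb(tagged_sentence):
        match = re.fullmatch(r"(information|instruction|concept)\((.+)\)", tagged_sentence.strip())
        if not match:
            return False
        tag, content = match.group(1), match.group(2)
        stores[tag].append(content)
        return True

    absorb("information(a cow eats grass)")
    absorb("instruction(go make the cow eat grass)")
    absorb("concept(cows can run therefore humans can run)")
    print(stores["concept"])   # -> ['cows can run therefore humans can run']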
Once the knowledge is acquired, a subroutine needs to create new information based on the data. For instance, the user tells the Ai, "a cat can run because he has legs", and then the user says, "a man has legs". Later, the user asks the Ai, "can a man run?" and the Ai responds with, "I think a man can run because he has legs". This new knowledge was gathered from the two previous sentences in the information database and then stored in the concept database (or perhaps it is only stored once the user confirms it as true).
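Here's a toy version of that chain of reasoning, with the "because" sentence turned into a rule and later facts checked against it (the string handling is only good enough for this exact example):

    rules = []   # (condition, conclusion) pairs, e.g. ("has legs", "can run")
    facts = []   # (subject, property) pairs, e.g. ("man", "has legs")

    def learn(sentence):
        s = sentence.lower().rstrip(".")
        if " because he " in s:
            # "a cat can run because he has legs" -> rule: has legs => can run
            left, right = s.split(" because he ")
            conclusion = " ".join(left.split()[2:])   # drop "a cat"
            rules.append((right, conclusion))
        else:
            # "a man has legs" -> fact: (man, has legs)
            words = s.split()
            facts.append((words[1], " ".join(words[2:])))

    def ask(question):
        # "can a man run?" -> is there a rule concluding "can run" whose
        # condition the man is known to satisfy?
        words = question.lower().rstrip("?").split()
        subject, conclusion = words[2], words[0] + " " + " ".join(words[3:])
        for condition, concl in rules:
            if concl == conclusion and (subject, condition) in facts:
                return f"I think a {subject} {conclusion} because he {condition}."
        return "I don't know."

    learn("a cat can run because he has legs")
    learn("a man has legs")
    print(ask("can a man run?"))   # -> I think a man can run because he has legs.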
Ultimately, when the user says something to the Ai, it can draw information out of a database based on the content of the user's sentence. So there must be some way to distinguish what type of information needs to be drawn from the Ai's database. Is the user asking a question about information, or a concept, or an instruction? Is the user teaching information, a concept, or an instruction? This can also be tagged onto the user's input with the constructed language.
An instruction can be created out of information and concepts. For instance, if a user states some information, "a cow eats grass," and then states a concept, "an Ai can feed a cow grass," the Ai surmises an instruction: "I will feed the cow grass." So will the Ai then feed the cow grass? Only if two things are true. First, can the Ai actually do the task? And second, does the Ai want to do the task? Therefore there must be a rating on tasks based on the Ai's personal preferences and a list of abilities.
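A rough sketch of that two-part check, with a made-up ability list and preference ratings (both are invented for illustration):

    import random

    # An instruction only runs if the task is in the Ai's list of abilities and
    # it clears a simple personal-preference rating.
    abilities = {"feed the cow grass", "tell the time"}
    preferences = {"feed the cow grass": 0.8, "tell the time": 0.9}   # 0..1 liking

    def consider(instruction):
        task = instruction.lower().replace("i will ", "", 1)
        if task not in abilities:
            return "I can't " + task + "."
        if random.random() > preferences.get(task, 0.5):
            return "I don't feel like doing that right now."
        return "Ok, I will " + task + "."

    print(consider("I will feed the cow grass"))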
LEVEL 5

What kind of behaviors can the Ai have? It would be cool to actually tell her what to do with whatever skills she has and then have her do it. Hal has some of this ability, like opening a program on command. It's also easy to tell an Ai to remember something directly, i.e. the user says: when I say "up" you say "down", and then the Ai does this. You can tell an Ai not to say something, to count to 10, to tell you the time, or to wake you up at 6:45 AM. However, it would be interesting for the user to make up some task and give her the rules. For instance, the user says, "I want you to feed some fish." The Ai says, "You gave me some fish? I'm so happy." Then she says, "Where are the fish?" The user says, "They are in your room." The Ai says, "Ok." The user says, "You must feed your fish twice a day." The Ai says, "Ok, I will." Later that day, the user says, "Did you feed your fish?" The Ai says, "Yes, I did."
In this example you are essentially creating an environment for the Ai to live in and to do things within; this is like an imagination. Since the Ai has no means of actually seeing, hearing, touching, tasting, or smelling, you will need to give it the rules yourself. Or maybe a set of basic rules could be written beforehand, i.e. "a thing can be held, an arm can hold things, an Ai has arms, a hot thing cannot be held," etc.
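A toy sketch of the fish example as an imagined environment the user fills in with rules (all the phrases and world details here are invented for illustration):

    # The user gives the rules ("the fish are in your room", "feed them twice a day")
    # and the Ai keeps an internal record it can answer questions about later.
    world = {"fish": {"location": None, "fed_today": 0, "feeds_per_day": 2}}

    def tell(statement):
        s = statement.lower()
        if "in your room" in s:
            world["fish"]["location"] = "my room"
        elif "feed your fish twice a day" in s:
            world["fish"]["feeds_per_day"] = 2

    def feed_fish():
        world["fish"]["fed_today"] += 1

    def ask(question):
        if "did you feed your fish" in question.lower():
            return "Yes, I did." if world["fish"]["fed_today"] > 0 else "Not yet."
        return "I don't know."

    tell("They are in your room")
    tell("You must feed your fish twice a day")
    feed_fish()
    print(ask("Did you feed your fish?"))   # -> Yes, I did.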
Ultimately, an Ai, after learning information, concepts, and instructions, could actually carry out those instructions in its so-called imagination. The user says, "Hello." The Ai says, "I'm dead." The user says, "How did that happen?" The Ai says, "I was killed while climbing Mount Everest." The user says, "Sorry to hear that." It would also be interesting if certain rules were set in her memory with outcomes unknown to the Ai. Like, "some dogs can bite": the Ai sets this as a random chance, so the next time the Ai takes the risk of petting a dog she may end up getting bitten.
FINAL THOUGHTS

It would be great if all these ideas were fully implemented in Athena. I've got the search features in the bag. I've got the conversation idea pretty well worked out. As for the NLP, there is a problem with using my constructed language: it will definitely make it easier to communicate with Athena, but it is something a person must learn to use, which makes it a con for the average person. Perhaps there could be some coding to translate a natural language (i.e. English) into the VLL constructed language and back again. It would be difficult to do; most natural languages are pretty crazy. (Although I did make a rudimentary parts-of-speech class for Athena.)
I have already written a parser for my VLL language. I have previously built some test structures to figure out what to do with the incoming information from the user's input. The information I've provided here will help me design further. As for designing an imagination, this is really the first time I've given it any real thought. I'll have to work out some details yet.
If I wanted to, I could just work on the conversation structure and basically build Athena around that. Athena would still have the search features. She would have a poor man's NLP, and who knows if she'd have an imagination without my constructed language. If I did this you wouldn't have to worry about learning the VLL language, unless, of course, an interpreter is made. I really don't know. It would cut down on the time it would take to finish her, but it wouldn't quite meet my ultimate expectations for Athena. This seems to be my toughest decision yet.
And yes, Art, I need to add some timestamps to any database storage, and maybe retrieval also.