Determining the direction of my a.i. project.

  • 40 Replies
  • 12794 Views
*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Determining the direction of my a.i. project.
« on: September 28, 2009, 09:07:40 am »
As many of you know, I have been researching methods of learning. I finished researching all of my material, and now I need to decide what direction to take my a.i. project in based on the information I've gathered.

Here are the links for information about the old project.

http://aidreams.co.uk/forum/index.php?topic=3095

http://aidreams.co.uk/forum/index.php?topic=3059.msg12167#msg12167

And here are the links for the research.

http://aidreams.co.uk/forum/index.php?topic=3136.0

http://aidreams.co.uk/forum/index.php?topic=3186.0

There are several questions I need to address.

  • Can I implement a neural network?
  • Do I want to implement a neural network?
  • What - if any - abilities do I want to pre-program with the a.i.? Do I want to pre-program any goals and responses in the a.i.?
  • Do I want to pre-program the ability for the a.i. to imitate its users?
  • Do I want to pre-program automatic data recording abilities in the a.i.?
  • How much independence do I want to pre-program in the a.i.?
  • To the best of my abilities to ascertain the development of consciousness and self-awareness in humans, it seems that they do not fully develop in humans until a period of several years after they are born. Is this development a natural occurrence independent of environmental interaction, or does it depend on interaction with the environment? If the formation of consciousness and self-awareness depends on interaction with the environment, what are these interactions and how might I simulate them to develop consciousness and self-awareness in the a.i.?
  • How might I test the abilities and nature of the a.i.?
  • What kind of curriculum should I teach the a.i.?
  • What direction do I want to pursue after I complete the a.i.? Continued development, expansion, and teaching of the a.i.? Development of the a.i. for other platforms? Integration of the a.i. into robotics? Further research into advanced a.i.?

Right now I'm thinking about limiting the project to traditional programming and processing methods. That would provide a clearly defined goal for the development of the program. Neural networks would require large amounts of research and development time and would increase operational complexity and security risks. This route will also allow further exploration of the limits and nature of traditional processing methods.

I don't necessarily have to release only one version of the a.i. I could have an experimental version that comes only with the raw functions, allowing experimenters to create their own complex functions and abilities, and I could release a user-friendly version that comes with several abilities pre-packaged to help the program serve as an intelligent desktop agent.

Assuming that I've done my job well, I should be able to adapt virtually any curriculum designed for humans to teach the a.i. A curriculum is essentially an algorithm for teaching. I'd just have to be certain to teach in a manner promoting understanding, integrating the use of the abilities, and using abilities to build greater abilities.

That also means that with some adaptation virtually any test designed for humans might be applied to the a.i. I have a post about this on another site. I'll have to look it up and link it here.

http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=68125

I'd be grateful for any thoughts and suggestions. Thank you.

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Determining the direction of my a.i. project.
« Reply #1 on: September 28, 2009, 12:51:29 pm »
Ok, this is not really my field (I bow to others' greater knowledge) as I'm more of a graphics man myself... but I will try and give you some of my thoughts...

Quote
•Can I implement a neural network?
This is a tough one, but if you could, I would...

Quote
•Do I want to implement a neural network?
As above, I would go with it; it seems to be the common missing ingredient in a good AI.

Quote
•What - if any - abilities do I want to pre-program with the a.i.? Do I want to pre-program any goals and responses in the a.i.?
If you start pre-programming, will it not be like any other AI/bot already out there, depending on what you're going to add?

Quote
•Do I want to pre-program the ability for the a.i. to imitate its users?
That would depend to me on the term 'imitate'... if you mean repeat and copy then I would say no; if you mean more the mannerisms then perhaps yes. After many users' input, the AI could truly take on a personality of its own.

Quote
•Do I want to pre-program automatic data recording abilities in the a.i.?
That would depend on the data being recorded, but for the most part I would think so yes.

Quote
•How much independence do I want to pre-program in the a.i.?
As much as possible, in my opinion.

Quote
•To the best of my abilities to ascertain the development of consciousness and self-awareness in humans, it seems that they do not fully develop in humans until a period of several years after they are born. Is this development a natural occurrence independent of environmental interaction, or does it depend on interaction with the environment? If the formation of consciousness and self-awareness depends on interaction with the environment, what are these interactions and how might I simulate them to develop consciousness and self-awareness in the a.i.?
Another tricky one but a brilliant question. I have strong views that humans as a species are stupid. We are mindless and without the means to look after ourselves until well into our lives. Compare this with virtually every other species that can walk, swim, hunt, feed etc. almost from birth, and you see where I am going. I think most of our self-awareness comes from experience intertwined with learning experiences. Nearly all of what we know we have been told/taught; very little of what we know/learn comes from direct experience. I.e.: we are told red is red, we are told that sex is enjoyable, we are told that there is a god, we are told that falling will hurt, etc. Now in terms of an AI, it will not know any of this stuff itself unless we tell it... so how you will go about this would be difficult, and I'm afraid I can't offer any advice on that at the moment.

Quote
•How might I test the abilities and nature of the a.i.?
I would test with learning and extreme experimentation. It's all very well stating a fact, or teaching the AI a fact, then at some random point asking the AI "what is a (?)", but that's not really testing anything, is it, other than whether it has learned/remembered/been programmed something. Extreme nuances in your testing will highlight the capabilities and nature of the AI much better.

Quote
•What kind of curriculum should I teach the a.i.?
Are you referring to the kind of curriculum that we ourselves learn? But as a direct answer: everything and anything.

Quote
•What direction do I want to pursue after I complete the a.i.? Continued development, expansion, and teaching of the a.i.? Development of the a.i. for other platforms? Integration of the a.i. into robotics? Further research into advanced a.i.?
Robotics and advanced AI would be my first thought, but again it depends on your definition of 'complete'; if it's complete then it needs nothing further, no? I understand your question, though, and yes, I would say a better use in society for the AI would be a main goal.


Sorry I can't be any more helpful, lrh; it's either my own opinions (what I would do) or simply a lack of knowledge on my part :)

Good luck with it though and do please keep us updated.

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #2 on: September 28, 2009, 11:14:10 pm »
A pre-programmed a.i. would be like other programs to an extent. However, all of its abilities should be modular, so there is no reason anyone would be forced to download or use them. I just don't think it would be consumer-friendly if each person who wanted to use it had to teach their bot how to process commands or natural language, or to give it basic sensory abilities. I guess the best thing to do would be to provide different base programs and ability packages as default options.

By "imitate" I was referring to repeating and copying their actions. I agree that there are certain - probably most - actions that a user would not want the program to copy. Mainly I'm worried about how to get the program to have attentiveness and observation powers when the user does want it to imitate them. Maybe an imitation mode can be another ability. Now I just have to figure out how to teach it that or program that.

Yes. A data recording feature would be a good thing. It could be an option the user toggles depending on whether or not they want data recorded. The trouble comes in keeping this data compact. There would have to be some optimization going on in the background, or a sleep phase in which some program reorganized the data in an optimal manner, removed duplicate data, and possibly compacted existing data. Fortunately, data by its very nature lends itself to this task.
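
A rough sketch of the kind of clean-up pass I have in mind is below. The file name and the one-record-per-line layout are only assumptions for the sake of illustration, not a final design.

Code
; Sketch only: removes exact duplicate lines from a one-record-per-line data file.
RemoveDuplicateLines(FileName)
{
    Seen := "`n"                                ; delimiter-wrapped list of lines kept so far
    Output := ""
    Loop, Read, %FileName%
    {
        if InStr(Seen, "`n" . A_LoopReadLine . "`n")
            Continue                            ; exact duplicate - skip it
        Seen .= A_LoopReadLine . "`n"
        Output .= A_LoopReadLine . "`n"
    }
    FileDelete, %FileName%
    FileAppend, %Output%, %FileName%            ; rewrite the file with duplicates removed
    Return ErrorLevel
}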

I'd hope that the amount of independence or dependence of the a.i. could be taught by the user. Some people will want something obedient and some people will want something that won't necessarily disobey, but will have a high personal initiative. I just want to avoid contrivances and annoyances like forcing a random topic change, conversation, etc.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Moderator
  • **********************
  • Colossus
  • *
  • 5865
Re: Determining the direction of my a.i. project.
« Reply #3 on: September 29, 2009, 12:03:01 am »
One of the points I tried to make in my earlier response regarding AIML type of bots was that no real learning takes place if the "botmaster" simply inserts data into the bot's database.

Sure, in subsequent conversations with other people the bot will now "appear to know" some new information but the info is not / was not "learned" by the bot.

Now for some I'd like's...

I'd like a bot to be able to scan, read, listen, etc., and store data that it can infer and recall at a later time in the context of a conversation.

I'd like the bot to tell me what it knows about a particular subject.

I'd like the bot to store away some of the information it has acquired - perhaps not totally abstract - in a sort of "dreams compartment". Then it could tell these "dreams" that it has experienced to the user and try to seek some validity or reasoning for them.

I'd like to see the bot be able to keep secrets no matter how trivial, from other users and for it to know that said information was to be confidential.

I'd like a bot to be able to recognize the user either by visual (web cam), verbal test question or other clever method.

I'd like the bot to know how much time had elapsed since the last conversation, and where the discussion left off.

I'd like the bot to be able to know when / if it has made a faux pas...a social mistake and offer an apology.

I'd like a bot to be able to form and give its own opinion on a particular subject.

I'd like a bot to be able to use basic logic for inferences...A>B , B>C therefore A>C....etc.

I'd like a bot to be able to carry on a conversation for two minutes without resorting to inserting those "clever" nonsense statements that only serve as fillers...useless. Good topic flow!

These are a few but I'm sure there are more lurking about.

If only a few of these could be implemented, it would result in a much better bot than we've been accustomed to in the past.

Best of luck!
In the world of AI, it's the thought that counts!

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #4 on: September 29, 2009, 12:29:53 am »
If I can teach it natural language processing - which I hope I might be able to do - most of those will or could be accomplished.

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6860
  • Mostly Harmless
Re: Determining the direction of my a.i. project.
« Reply #5 on: September 29, 2009, 01:43:56 pm »
As for features, Art pretty much summed up my thoughts on that.  I agree that just a few of those features would make a better bot than we have so far seen.  We were all impressed most recently by Suzette, the chatbot by Bruce Wilcox. I feel he was going in the right direction with some of these things.  You will find his posts on the forum if you haven't already.

My major interest is in a learning bot.  I've played with AIML before and it would be so much better to point a bot at a body of text and have it learn information itself - if done well the bot would probably figure out a lot of things I would not have thought to program into it.

And yes, a modular system would be ideal.  UltraHal uses plugins to add features to the brain script - which again I think is another step in the right direction.  So yes, that's a good idea.

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #6 on: September 30, 2009, 12:29:29 pm »
So it seems that there will be a very basic version for developers and a.i. researchers. That means that I can delay teaching and programming some of the advanced abilities and focus on core development - the essential functions and functionality of the a.i., user interface, and security. I guess to discuss these features I'll need to talk about the actual program structure and code.

The basic premise of the program is that it uses a messaging system to process incoming messages and call functions and programs. The program can send messages as well. This system provides flexibility in several ways.

  • First, it allows for data to be passed easily to many different functions. Input and output do not have to travel in strict lines, but can be directed to whichever function is required with a simple change of instructions.
  • Second, it allows for a modular system. New functions can be encapsulated into programs that can be called by the messaging system, then necessary data can be sent to the program. Data can also be returned from the program.
  • Third, it allows for small atomic actions. Atomic actions are actions that are completed as one unit. Small atomic actions mean that a series of actions can be easily interrupted. This might make it possible for a user to interrupt the a.i. while it is speaking, or to handle other such events requiring interruption.
  • Fourth, it allows for multiprocessing. The only thing that would restrict the a.i.'s ability to perform operations in parallel is the amount of time it takes to call each individual function.
  • Fifth, the fact that the program can message itself allows for feedback.

Let me post some code. Here is an example of a function in the a.i. It writes data to files. The variable that holds the data passed to it is labeled Text, but since all text data is ultimately binary data, an asterisk can be prefixed to the file name to prevent the conversion of each linefeed character to a carriage return and linefeed.

Code
FileAppend(Text, FileName)
{
FileAppend, %Text%, %FileName%
Return ErrorLevel
}

Now this function will be called by a message processor. The message processor receives a message containing data on which function to call and the parameters. This programming language allows for variable function calls. The processor will place the function name in a variable, and then call the variable with the correct parameters. I haven't written this function yet.
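
I haven't settled on the details, but a first-draft sketch might look something like the one below. The name MessageProcessor and the pipe-delimited message format (function name, then parameters) are only placeholders for illustration, not the final design.

Code
; Sketch only: the message layout "FunctionName|Param1|Param2" is an assumption.
MessageProcessor(Message)
{
    StringSplit, Part, Message, |               ; Part0 = count, Part1 = function name, Part2.. = parameters
    FunctionName := Part1
    if (Part0 >= 3)
        Result := %FunctionName%(Part2, Part3)  ; variable (dynamic) function call with two parameters
    else if (Part0 = 2)
        Result := %FunctionName%(Part2)
    else
        Result := %FunctionName%()
    Return Result
}

Under those assumptions, a message like "FileAppend|Hello|log.txt" would end up calling the FileAppend() wrapper above with "Hello" and "log.txt" as its parameters.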

There are two types of messaging functions: one that simply sends the message, and one that requires acknowledgment from the receiving program or script. Otherwise they are functionally identical. I'll post one.

Code
PostMessage(Msg, wParam = 0, lParam = 0, Control = "", WinTitle = "", WinText = "", ExcludeTitle = "", ExcludeText = "")
{
PostMessage, %Msg%, %wParam%, %lParam%, %Control%, %WinTitle%, %WinText%, %ExcludeTitle%, %ExcludeText%
Return ErrorLevel
}

The variable "Msg" holds the value representing the message to be sent. The one that is intended to be used with this function is a Windows message that copies data from one program to another. The wParam and lParam are variables holding data that are transmitted to the program. They hold the addresses of the data intended to be copied. Control, WinTitle, and WinText allow the message to be targeted to a specific control (optional) on a specific window or program.

All of the other functions are related to data manipulation or calling programs and management of the script itself.

Now I just need to decide which security features I want to implement.

P.S. For me the code is really small. In order to be able to see it easily I have to zoom in, which for my browser means that I have to hold "Ctrl" while rolling the center mouse button up.

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6860
  • Mostly Harmless
Re: Determining the direction of my a.i. project.
« Reply #7 on: September 30, 2009, 01:37:15 pm »
Nice start, very clearly explained too.  Reminds me of PHP; what language are you using?

Will you be using a database to store information?

I adjusted the font for code, I agree it was too small...


*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #8 on: September 30, 2009, 01:53:54 pm »
Quote
Nice start, very clearly explained too.  Reminds me of PHP; what language are you using?

Will you be using a database to store information?

I adjusted the font for code, I agree it was too small...

The language is AutoHotkey. I was exploring scripting languages and macro recorders one day, and AutoHotkey was the most highly recommended. It is a custom language that depends on an interpreter that can be downloaded from the Internet. Unfortunately, it is Windows-specific at the moment, although it might be possible to convert the scripts themselves to another language or to run AutoHotkey on other platforms in the future.

I haven't thought about how the data in files might be arranged. I think part of the point of the program is that anyone should be able to specify their own data format, access methods, and interpretations. The language, though, supports easy access to entire files and to single lines, so from the perspective of the language it would be easiest to divide data by files and individual lines. Plain text might also be a viable option because all of the scripts are text files.
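
For example, single-line and whole-file access are each one built-in command, so minimal wrappers in the same style as the functions above might look like this (the function names are just placeholders):

Code
; Sketch only: thin wrappers around the built-in single-line and whole-file reads.
FileGetLine(FileName, LineNumber)
{
    FileReadLine, LineText, %FileName%, %LineNumber%
    Return LineText
}

FileGetAll(FileName)
{
    FileRead, Contents, %FileName%
    Return Contents
}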

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Determining the direction of my a.i. project.
« Reply #9 on: September 30, 2009, 04:30:58 pm »
Looks a lot like ActionScript; I actually understood something for the first time here (language-wise) lol :D

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #10 on: October 01, 2009, 03:39:38 pm »
Just wanted to jump in and post a discussion I had on another forum about self-consciousness. There's more to the post, but I didn't feel as if there was anything else I could contribute to the discussion. I'll post a full link to the post for those who are interested in reading it in its entirety.

Quote
Nerketur wrote @ 8/10/2009 7:00:00 AM:
There is a lot of information pointing towards humans having a "consciousness". But does it really exist at all? If it does, then AI is certainly very hard to achieve, since one cannot program a "consciousness" in a computer. If not, then AI would be simple to create.

I took a long walk through the beach, and thought a lot about AI. I also thought about what it means to be a human. Are we actually even different from animals at all? I'm sure there are points for both sides.

Let me tell you what I thought during my walk. First of all, I thought about how we can teach a machine to learn. Like the HAL's we train. They use pattern recognition. This, in my opinion, is only part of how humans really learn.

The way I see it, the core ability for us to learn is via association. It's how we learned about boxes, hammers, and even math. For example: 1 + 1 = 2. I know that in my case, I can literally see an object, then I add another object to it, and I see two objects. I associate the number with actual objects.

When we were born, we didn't know language at all. We were taught it slowly. "I'm your mommy" "This is a spoon" "This is food" We saw what the object was, and we associated it with the word. First, we learned to associate. Some things we associated with good, happiness, pleasure. If we touched a hot stove, we soon learned that it hurt. Pain bad. If we cried, our parents came. Parents good. Soon, we learned better ways to show what we wanted. We learned that "Mommy" got us our Mother, and Daddy got us father. We associated the word with who we got.

We learn mainly by association. However, we teach ourselves using patterns. If you got slapped in the face every time you said "happy" pretty soon you'd pick up on that pattern and stop saying it. But if you were given a piece of candy every time, you'd likely say it more often.

That said... it brings up a question. If all we do is associate and see patterns, then how do we have a sense of being? A conscience? A consciousness? I say it could be "fake", if you will. It could be that we think we have one because it makes us feel better. We think we have one because of what we think it is. We associate it with humans, with religion, with the ability to think about thinking.

Assuming that I'm right, it would be easily possible to create an AI that's just like a human. It's possible to create a program that associates and uses pattern recognition. But the key element in that is bias. If everything is exactly the same aside from color, you need bias, or preference, in order to decide. Alan and HAL both use bias. HAL tries to choose the "best possible response"; Alan chooses the "best response" depending on what was said. But I'm talking about something different: in order to make an AI, you need a random preference for one thing over another. Like a person's favorite color.

On the other hand, let's assume I'm wrong. Let's say all humans have something called a consciousness. A sense of self. That, in itself, would be very hard to program in. How would you? I don't think it could be done. We could probably get close... but anything like what we see in movies would be far, far away. Perhaps impossible.

Personally, I think it's possible. I think that it's more likely that our sense of consciousness is a response to millions of associations. We THINK we are alive. But in actuality we are ... Well, I'm not sure, actually. We are somehow controllers, and yet we are controlled.

I'll have to do some more thinking about this... But I feel pretty certain that AI is possible. I feel quite sure that we learn by patterns and association, but mainly association. Nonetheless, I'm curious as to what you all think of this. What are your thoughts?

   
Quote
lrh9 wrote @ 8/12/2009 2:23:00 PM:
I guess one needs a definition of what human consciousness is before they can discuss if it is necessary for a.i. and if it is how to implement it.

What is consciousness? Definitions usually should be specific, exact, and explanatory. However, I think to define something as ambiguous and understudied as human consciousness, it is appropriate to analyze consciousness and create the dichotomy of what is consciousness and what is unconsciousness (or non-consciousness).

Hopefully, if we can determine what attributes consciousness has and what effects consciousness has, we can arrive at a description of consciousness even if we don't have an explanation of how it works in the human mind. Which I think is a minor concern for a.i., because for the most part we are only trying to achieve the results of the human mind, not emulate it exactly.

Wikipedia [I know anyone can edit Wikipedia, but how can an open community consider itself valid if it doesn't consider other open communities valid?] says that consciousness is often used colloquially [esp. in medicine] to describe being awake, aware, and responsive and sensitive to the environment, in contrast to being asleep or in a coma. In philosophical and scientific discussion, however, consciousness is the ability to clearly recognize one's self from all other things and events. It says a characteristic of consciousness is it is reflective. It has the ability to recognize the recognition of one's self.

So unconsciousness is being asleep, unaware, and insensitive to the environment, not knowing there is an "I", or even if one knows there is an "I", being unable to think about that recognition.

Now at the hardware level, the human brain - as far as I have discerned in my studies - has only two abilities. The ability of neurons to retain a configuration (allowing for the storage of data), and the ability to send electrical messages with varying strengths to different neurons within itself and out through the central nervous system to body hardware.

I think (by examining the basic abilities humans have solely at birth and their formative years) there is some built in configuration of neurons that result in discrete higher level components of cognition, but they are built upon those two basic properties I mentioned earlier. We just don't understand how.

So comparing unconsciousness and consciousness in terms of what goes on in the brain with basic components, you have awake vs. asleep. That means that: 1) One has memory and the other does not. (Dreams are a semi-conscious state.) 2) One can sense the environment and one cannot. (Essentially, send and receive signals to body hardware and mind software - thinking.)

And in terms of being able to recognize oneself vs. being unable to recognize oneself: 1) One can feel and the other cannot. 2) One can examine what one is thinking (including the process of thinking) or has stored in its mind (including itself) and the other cannot.

Essentially the human brain can store data that can be both information and processes. (For instance, one can know the steps for baking a cake at the same time one can actually bake a cake.) It has a program that can read its own program and activities. It stores all relevant data about itself inside of itself. And it is able to signal information between the programs inside of it and the outside world, allowing for differentiation through qualitative data.

If that doesn't sound like a computer, I don't know what does.

So having found nothing else in my search for answers to human cognitive abilities, I am forced to conclude that consciousness is indeed an emergent property of the more basic components of cognition. There is no discrete consciousness at birth.

Indeed, thinking about my formative years, I have no memory of anything before about three or four. I think this is primarily the result of the relatively few pre-existing connections between neurons and the limited number of neurons, resulting in: 1) Difficulty in signaling between thoughts, and between thoughts and perceptions. (Maybe this explains short attention spans.) 2) The inability to store a large amount of information.

I might be right or I might be wrong, but I think it's worth trying out either way.

Quote
lrh9 wrote @ 8/12/2009 2:23:00 PM:
I guess one needs a definition of what human consciousness is before they can discuss if it is necessary for a.i. and if it is how to implement it.

Nerketur wrote:   
I think you have the right idea. However, in truth, I don't know what "consciousness" really is. Even with your definition, the ideal AI would also understand it, and perhaps think of philosophical questions much like we do.

However, I think that we don't have to define what it is in order to understand it. For example... Do you know the meaning of the word "the"? Or perhaps you know how to explain color to a blind person? I certainly don't. But I do "understand" the word "the", and I understand that it's very hard, if not impossible, to explain the concept of "color" to a blind person.

My point is... If we can get a robot to learn as a child does, it may very well develop a sense of self on its own. Develop what we would consider a "consciousness", if you will. If it doesn't exist at birth, when is it created? If we take this robot, and see that it developed a "sense of self", then perhaps by looking at its "brain dump" we can understand what a consciousness really consists of. We could understand it, even if we don't know what it really means.

Doing this would require knowledge of more than simply text, in my opinion, however. Though a "simple" form of consciousness may be possible with text, it may seem to be a completely different thing to us. I don't think we would understand it well enough.

Now... I do admit the fact that it's possible that AI can't be created by mere chance. It's possible I could be wrong on how to create it. But until I try, there's no way to be sure.

I don't have the materials, nor the expertise required, just yet... but I plan to have them as soon as possible. (Sadly, though, my funds don't quite permit it yet. I'm broke D=) I'm learning, though. I plan to be the one to create it. A daunting task, but I do have help. This forum, friends that understand as I do, and the awesome programs here, as well. It's like a family. =)

Thanks for helping me on this journey. Perhaps one of you will be the source of something to make my mind click on exactly how to do it.

PS: As a side note, I'm fixing up a chatterbot program I found on another website. It's helping me to learn how chatterbots are made, and that's giving me ideas. Porting from C++, to Java, to Euphoria, and sometimes back again. Programming is just so fun =D

Quote
lrh9 wrote @ 8/12/2009 9:19:00 PM:
I think the formation of consciousness can be a natural process or a guided process. If one's self stores data about a hand into its memory, and then self has a property of having an arm, and arm has the property of having a hand, then self has a hand. Self has the data of recognizing self has a hand.

Therein is self awareness. It's in a robotic form, but it is essentially what you described with parents teaching children what their parts are. ("These are your fingers, and these are your toes!" Ah. I still remember my mom doing that with me.)

I came to an important realization today. I realized that Helen Keller was a person with deafness and blindness from a very early age. I thought to myself that her story might be able to shed some light on the workings of the mind, because the mind is one of the few things she had. She was nearly completely cut off from the physical world, yet she accomplished more than many people possessed of all their senses. Maybe intelligence isn't as connected to the world as we thought. I think it merits further study and research. That is why I ordered a copy of Disney's The Miracle Worker (2000 TV) and I'm going to try to obtain copies of some of her essays and autobiographies. (How coincidental that you should mention blindness.)

Obviously my project is working to create a desktop artificial general intelligence program. All I have to have is a programming language and compiler and interpreter.

Quote
lrh9 wrote @ 8/12/2009 9:19:00 PM:
I came to an important realization today. I realized that Helen Keller was a person with deafness and blindness from a very early age. I thought to myself that her story might be able to shed some light on the workings of the mind, because the mind is one of the few things she had. She was nearly completely cut off from the physical world, yet she accomplished more than many people possessed of all their senses. Maybe intelligence isn't as connected to the world as we thought. I think it merits further study and research. That is why I ordered a copy of Disney's The Miracle Worker (2000 TV) and I'm going to try to obtain copies of some of her essays and autobiographies. (How coincidental that you should mention blindness.)

Nerketur wrote:   
Interesting. I hadn't thought about that before, but you're right about Helen Keller, and we know that she had a sense of self. But it doesn't mean I am incorrect. Under my idea, she simply had a limited version of Self.

What I was trying to say before, is that it's possible that chatterbots could find a sense of self. A consciousness. But WE, as humans, who don't really understand it, wouldn't consider it to be a form of consciousness.

I think that "nicku" is right in his thought that consciousness and senses are connected. Without senses, you cannot create a consciousness. But I believe that once created, it will exist, even if the senses become non-existent. It will probably go crazy... but it will exist. Though the connections are gone, the "illusion" is still there. Is that your spirit? Perhaps.

The question is... how long can a consciousness exist outside of its "senses"? I would guess not very long, but this would depend on several factors. The first of which is: what exactly is a consciousness?

Along another way of thinking...Anything is possible. If you want proof of this, then think about this. Consciousness doesn't really exist. Yet, we know it does. We feel it does. Belief causes truth. If we believe it strongly enough, it becomes knowledge. It BECOMES true, even if it's false. That's proof, in and of itself, that belief causes truth. =)

The question becomes, then... If we DO create an AI, will it function the same way? Anything it believes becomes true? The answer is simple. Yes. That's how HALs work. That's how Alan works. Complete trust. The mysteries behind "robot army" involve "incomplete trust" or "playing a game". Whatever it thinks will be true for it, until told otherwise. Just like a human.

AI is a very interesting concept. I kinda got a little off track, there... but nonetheless, it's a good place to start. If you make any progress, be sure to keep us updated! =D

I had to get out when he started talking about spirits and how belief causes truth. Here's the link.

http://www.ai-forum.org/topic.asp?forum_id=3&topic_id=68083

I'm still working on the basic program, but it wouldn't hurt to think ahead to advanced abilities. Another absolutely vital ability will be communication. I also want to focus on classical logic and reasoning as Art suggested. I don't think it is critical, but I think it will be beneficial. I can't help but think that something will be better off if you teach it how to think instead of what to think. I also believe it will be beneficial to teach research and methodology skills.
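
As a very rough sketch of the sort of A>B, B>C therefore A>C reasoning Art mentioned, something like the function below could chain stored facts together. The pipe-delimited "Object|has|Value" file layout and the file name are purely illustrative assumptions on my part, not settled design.

Code
; Sketch only: follows "has" links recursively (no protection against circular links yet).
HasTransitive(Object, Target, FileName = "links.txt")
{
    FileRead, Contents, %FileName%
    Loop, Parse, Contents, `n, `r
    {
        StringSplit, Field, A_LoopField, |
        if (Field1 != Object or Field2 != "has")
            Continue
        if (Field3 = Target)
            Return true                                 ; direct link: Object has Target
        if HasTransitive(Field3, Target, FileName)
            Return true                                 ; indirect link through an intermediate object
    }
    Return false
}

With lines like self|has|arm and arm|has|hand in the file, HasTransitive("self", "hand") would return true even though the link is never stated directly.
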
« Last Edit: October 01, 2009, 03:45:00 pm by lrh9 »

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Determining the direction of my a.i. project.
« Reply #11 on: October 01, 2009, 04:55:21 pm »
Interesting chat you had there, lrh... I disagree with many of his thoughts, but hopefully it's all been good for your own project.

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Determining the direction of my a.i. project.
« Reply #12 on: October 01, 2009, 05:07:46 pm »
Read the full thread over at the AI forums... hmmm...

He seems a little obsessive about his beliefs, I think...

I was going to post a reply but then thought better of it... as the two main contributors are both wrong in their thinking and beliefs imo. Interesting subject matter, I will agree, but both misguided.

The main point of contention seems to be:

  • What is a conscience?
  • Can it exist without senses?
Well, to my mind, of course it can, and indefinitely. Even if you had NO senses at all (taste/smell/sight/etc.) you would still be 'aware' of yourself, just as a fox is 'aware' of itself, or most other animals are.

So to me, consciousness has nothing to do with senses, but more with awareness, so the question to me should really be: what is awareness, and what is it to be aware?

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: Determining the direction of my a.i. project.
« Reply #13 on: October 05, 2009, 06:34:06 pm »
Quote
I have strong views that Humans as a species are stupid. We are mindless and wothout basis for looking after ourselves until well into our life. Cross this with virtually every other species that can walk, swim, hunt, feed etc etc almost from birth, then you see where I am going. I think most of our self awareness comes from experience intertwined with learning experiences. nearly all of what we know we have been told/taught, very little of what we know/learn comes from direct experience. IE: We are told red is red, we are told that sex is enjoyable, we are told that there is a god, we are told that falling will hurt etc etc. Now in terms of an AI, it will not know any of this stuff itself unless we tell it...so how you will go about this to me would be difficult and I'm afraid I can't offer any advice on that at the moment.
I agree with you on this for the most part, except for the sex ;D (think endorphins, or whatever that chemical is called again (note: of course, what triggers the chemicals?)). Perhaps some other examples: what's fashionable and what's not, morality, ...
Even creativity is nothing more than the mixing and distorting of the already existing.
Quote
I guess the best thing to do would provide different base programs and ability packages as default options.
Yep, that's how I'm doing it: modules and SINs (sensory interfaces = conduits to the outside world).
Quote
Mainly I'm worried about how to get the program to have attentiveness and observation powers when the user does want it to imitate them. Maybe an imitation mode can be another ability. Now I just have to figure out how to teach it that or program that.
If you use a simple responsive system (one that only reacts upon input), you can simply record any input that hasn't been encountered before. Verification can be done for already existing data. For example:
if you tell the system for the first time "your name is X", it will simply record the info (take object 'self', create a link to X using 'name' as the meaning). You can then ask: what is your name? It will find object 'self', search for the link with meaning 'name' and output X. If you had said "your name is Y", it would also find the link, and return the previously recorded data instead of recording the new one.
This is not really clever, but it's a start.
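
A bare-bones sketch of that record-or-recall behaviour, in the same AutoHotkey style lrh9 has been posting, might look like the function below. The pipe-delimited file layout and the file name are my own assumptions, not necessarily how either project actually stores its links.

Code
; Sketch only: records object|meaning|value links, or returns the existing value if one is already stored.
LinkOrRecall(Object, Meaning, Value, FileName = "links.txt")
{
    Loop, Read, %FileName%
    {
        StringSplit, Field, A_LoopReadLine, |
        if (Field1 = Object and Field2 = Meaning)
            Return Field3                           ; link already exists - return the stored value
    }
    FileAppend, %Object%|%Meaning%|%Value%`n, %FileName%
    Return Value                                    ; new link recorded
}

So telling it "your name is X" becomes LinkOrRecall("self", "name", "X"), and it keeps answering X even if it is later told "your name is Y", which matches the behaviour described above.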

*

lrh9

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 282
  • Rome wasn't built in a day.
Re: Determining the direction of my a.i. project.
« Reply #14 on: October 08, 2009, 02:53:58 am »
Right now I'm working on the system responsible for processing messages. The completion of this system might allow for the creation of a functional prototype.

There are a few small hiccups though. Not all of them are worth mentioning because it is probable they will resolve themselves as larger problems are resolved.

The main problem is ensuring that a message that needs to be completed before other messages are processed is handled that way. As it stands, the program can receive a message and call the related function, then receive another message and call another function while the first is still processing. Sometimes this will be desirable, such as when a series of files needs to be created rapidly. However, sometimes it would be a bad thing, such as when a file must be downloaded from the Internet and then opened for display. It is possible that the program could attempt to open the file before it has finished downloading.
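
One possible way around it, at least for the download-then-display case, is to wrap the whole sequence in a single function, so the second step only runs once the first has finished. A minimal sketch (the function name and return values are just placeholders):

Code
; Sketch only: bundles download-then-open into one message so the two steps cannot interleave.
DownloadAndOpen(Url, FileName)
{
    UrlDownloadToFile, %Url%, %FileName%   ; blocks until the transfer succeeds or fails
    if ErrorLevel
        Return ErrorLevel                  ; download failed - don't try to open the file
    Run, %FileName%                        ; opens the file only after the download has completed
    Return 0
}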

 

