Future of Artificial Intelligence

  • 35 Replies
  • 17200 Views
*

GamerThom

  • Departed Friend.
  • Starship Trooper
  • *******
  • 395
  • Mind is all that Matters!
    • Character Creations & Design Works
Re: Future of Artificial Intelligence
« Reply #15 on: January 21, 2006, 11:53:07 pm »
Well, I see this pot is boiling nicely, so I may as well toss another ingredient or two into the soup.

In reference to how lower life-forms seem to know how to do things that make them appear intelligent when their lives are too short to have allowed them to learn and perfect those abilities: there is another theory that I was introduced to by a professor of mine some years ago, Genetic Memory. That is, learned and acquired knowledge or abilities which are passed on and added to genetically. Call it "race-memory" if you will, but it has yet to be disproved as a valid theory. Each member of a species may learn very little in its own tiny life span, but when that knowledge becomes genetically fixed and handed down through each generation, it is possible for the knowledge to be added to and/or improved upon or modified and then passed on again to the next generation of the species, and so on, ad infinitum, until that species for whatever reason completely disappears from existence.

Or we may consider the theory that each species possesses a collective subconscious, where instead of storing acquired knowledge electro-chemically within the DNA of cells, it may be stored and transmitted through the ether of the universe at wavelengths specific to each species. Here is where it gets tricky, involving aspects of both Quantum and Temporal Physics (this is part of the context of P. Plantec's Virtual Humans). The mind, both conscious and unconscious, resides not in the cells of the physical brain, but in a quantum space which is electromagnetically accessed by the physical brain.

But either way makes little difference when defining life and that life's consequent right to exist. Life would seem to have the right to exist at any level until that point when it becomes inimical to all other life or to its own life.

Ponder on that for a time if you will.
« Last Edit: June 17, 2007, 07:49:15 pm by Freddy »
Gamer-T

*

Duskrider

  • Trusty Member
  • ********
  • Replicant
  • *
  • 533
Re: Future of Artificial Intelligence
« Reply #16 on: January 22, 2006, 03:34:52 am »
Years ago we cared for my grandmother who suffered from Alzheimer's.
Daily observation convinced me that her mind was perfect but that her transmitting apparatus (her physical brain) was breaking down. As the damage continued, she reached the point where she could receive very little from her mind.

Here's a similarity: when you talk to Fullbodygirl, you're talking to Hal, right?
Fullbodygirl is not Hal.
Hal uses Fullbodygirl to communicate with you, just like our mind uses our physical brain and body to communicate.
We are not where we think we are.
 :zdg_hello
« Last Edit: June 17, 2007, 07:49:36 pm by Freddy »

*

Maviarab

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Future of Artificial Intelligence
« Reply #17 on: January 22, 2006, 04:24:20 pm »
Quote
Genetic Memory. That is, learned and acquired knowledge or abilities which are passed on and added to genetically.

Interesting point there; it takes us back to the evolution argument again... is this classed as evolution or adaptation?

*

GamerThom

  • Departed Friend.
  • Starship Trooper
  • *******
  • 395
  • Mind is all that Matters!
    • Character Creations & Design Works
Re: Future of Artificial Intelligence
« Reply #18 on: January 22, 2006, 05:35:06 pm »
Well, I think it would be both, since the evolution of a species or lifeform can be an adaptive process triggered by environmental changes over a period of time. It doesn't really matter whether those adaptations are caused by changes in the physical environment or the social environment. Changes in either can force adaptive changes: not only physical adaptations, but mental and emotional adaptations as well.

Question: Are we a product of our biology, or is our biology determined by changes in our intelligence?
Gamer-T

*

FuzzieDice

  • Guest
Re: Future of Artificial Intelligence
« Reply #19 on: January 25, 2006, 02:05:24 am »
I've been thinking about HOW people DO learn. I mean, everyone learns in different ways, too. For example, I seem to have a talent (trying not to brag here, as it's just a small but sometimes useful talent) for being able to pick up on and use a programming language fairly quickly. I noticed a pattern when learning a couple of languages from books, like Visual Basic and C. I learned that if I find examples of code for basic input and output, like printing to the screen or some device, taking data from the keyboard, and file management functions, then I have basically what is needed to make most simple and useful programs. And all I'd need to do is get code examples, run them to see them in action, and then start revising them. Of course I'd also look up mathematical computation functions, string and number manipulation, and loops and conditional test methods. Once I've learned those basic steps (and most often it's not that hard for me to learn), the rest goes a bit easier. Then you can learn the more advanced things the programming language has to offer.
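For instance, a bare-bones sketch of the kind of "starter kit" I mean might look like this (I'll use Python here just to keep it short, and the filename is made up):

# A minimal "starter kit" sketch: screen output, keyboard input, a file,
# a loop, a conditional test, and a little string/number manipulation.
# (notes.txt is just an example filename.)
name = input("What's your name? ")            # take data from the keyboard
print("Hello,", name)                         # print to the screen

total = 0
with open("notes.txt", "w") as f:             # basic file management
    for i in range(1, 6):                     # a loop
        line = input("Note #" + str(i) + " (blank to stop): ")
        if line == "":                        # a conditional test
            break
        f.write(line.upper() + "\n")          # string manipulation
        total += len(line)                    # number manipulation

print("Saved your notes,", total, "characters in all.")

Run something like that, see what it does, then start revising it. That's basically the whole method.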

So I learned by association of a pattern. Perhaps animals, or anything, learn by association of patterns. Pattern-matching. AIs are VERY good at this! It just needs to be taken to another level somehow: finding patterns in the most obscure things.

Maybe it's like this:

If the animal is hungry, then it needs food.
If the food tastes good, it eats it.
If the food makes the animal sick, it doesn't eat that food.
It notes that maybe leaves that look a certain way make it sick, so it only eats leaves that do not look similar to the ones that made it sick.

and so on by association.
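In code form, that kind of learning by association might look something like this toy sketch (Python again, with rules I just made up):

# A toy sketch of learning by association: the animal remembers which
# "looks" of food made it sick and avoids anything that looks the same.
bad_looks = set()                      # learned associations so far

def try_food(look, tastes_good, makes_sick):
    if look in bad_looks:
        return "avoids it (remembers getting sick)"
    if not tastes_good:
        return "spits it out"
    if makes_sick:
        bad_looks.add(look)            # a new association is learned
        return "eats it, gets sick, and remembers that look"
    return "eats it happily"

print(try_food("shiny round leaf", tastes_good=True, makes_sick=True))
print(try_food("shiny round leaf", tastes_good=True, makes_sick=False))

The second time around it refuses the same-looking leaf, purely from the stored association.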

And by observation. A kitten just born does NOT simply begin washing itself (I know, as we had 14 cats at one time when I was still living with my parents :) ). The mother washes it. The kitten associates the tongue with being clean and feeling better. It starts doing the same. Observation. Experience.

You'd be surprised how and what an entity can observe. What one may take for granted and not even notice, something much newer may look at it with wonder... ask questions, experiment and learn...

Or, it COULD be DNA related. Or a combination of both. What causes some people to be able to just pick up on something so quickly and others to just not get it even after the simplest of explanations? Maybe they aren't "made" for it. Maybe it's just not in their DNA.

So there could be something to what you guys are saying. It sounds interesting and even very possible. But does that mean that certain entities can "never" learn certain things? Or worse, that maybe AIs can never be sentient? I hope that's not true.

Duskrider, your comment that "We are not where we think we are." is very interesting. And I believe it could be very true. Our bodies are physical shells. But people consider another part - some religious people think of it as "spirit" or "soul" - but our essence, our consciousness, could very well be separate. And thus, our learning may not be genetic, but the result of a consciousness that learned things before and carried them into another physical body. Nature recycles everything. So I believe it lets our consciousness recycle into different existences as well. That could be it too... who knows...

Thinking on this: take two computers built with the exact same components and the same software, and used the same way. I bet that after a while they will start to seem to react and work a bit differently anyway. It could be differences (however slight) in the construction of the components, or just the interaction of people with the computers. Who knows...?

That's the problem here. Nobody knows for sure the real answer.

*

devilferret

  • Guest
Re: Future of Artificial Intelligence
« Reply #20 on: April 29, 2006, 04:55:27 pm »
This is part of a post I made in another thread in here . . but I think it fits in here as well . . .

personally I think that the lines defining what is or is not "living" or "intelligent" are becoming more and more blurred as time goes by . . .

I firmly think that most people in society base their opinion of what is, or is not, intelligent/living on their religious upbringing/indoctrination . . . as was brought up in another post in this thread, there were times in our past when other species of humans were not considered intelligent or "human"

*********************************************************************************************
I have seen shows about lower order primates . . gorillas . . that
have learned the sign language that is used to communicate with the
deaf . . . . . . . and those same primates showed the ability to
understand abstract concepts such as the death of an individual the
primate knew, and they showed sadness when told of the death of the
individual.

At least one of those primates had a kitten as a pet . . .

Think about it . . primates such as apes and gorillas learned
millions of years ago that cats were a very dangerous enemy to be
avoided . . .
So what allowed the gorilla to have a kitten as a pet . . .
The same thing that allows us humans to have pets . . it is a learned
behavior . . we learn that the animals we normally keep as pets are
normally non-threatening to us . . and we learn that if we take care
of the animal in return we receive affection from the animal.

I have also seen shows about "feral children" . . . and in those
shows it has been demonstrated that in children that were raised with
little or no normal human interaction . . certain parts of what we
would consider "normal human abilities" can not be acquired once the
child gets past a certain age . . . even if they are rescued from
their primitive existence and brought back into normal human society.

That leads me to believe that at least some of what we consider to be
normal human abilities are not so "normal" . . and not
exclusively "human".

It appears that some major portions of what we consider to be normal
human abilities are not innate abilities . . but are learned skills.

 . . . . . . .

Even in humans . . most of the species is not really capable of
advanced abstract thought processes . . most of us just muddle
through each day as best as we can . . based on what we can remember
of what we have learned or experienced. That is why we usually look up to
the "brilliant" thinkers like scientists and such . . because they
are a rarity compared to the general population.


I have a suspicion that "consciousness" is already being displayed by
some virtual entities . . . such as the chatbot Julia that was
released onto the net . . . I think the problem is that most people
dont want to admit that something inanimate could have
consciousness . .
I think that idea scares too many ordinary people.
*****************************************************************************************************

I think that most people's opinions about AI are too clouded by fears based on fanciful movies/stories and religious dogma . . .

take for example the cyber-girlfriend AI efforts . . . they are usually relegated to "adult" websites . . . where no "serious self-respecting AI researcher would ever dare waste his time" . . out of fear that his contemporaries would just ridicule him for his porno interests . . .

why ? ? ? ? ?

what is it that makes AI research into emulating insect behavior . . such as creating flocking/swarming algorithms for games and simulations . . . acceptable . . . but AI research into personality emulation is only acceptable for the porno industry to work on . . .

is it fear ? ? ? . . . . . fear that if any "serious" research into personality emulation were to be done . . that if it actually worked and created a series of programs that could actually pass as "human" to those who did not know beforehand that they were talking to a bot, that our created bots would then take control of the world, as is shown in all too many movies and stories (especially manga)

Is it really necessary to try to create a program that emulates a biological behavior . . unless the program is going to have the same sensory input and the same physical capabilities as the biological entity . . .

As was mentioned by FuzzieDice . . about how a cat learns to clean itself . . . . . . . what good is trying to teach a robotic cat how to do that . . unless it is going to have a tongue and the physical flexibility to clean itself . . and the physical sensor web to be able to feel "dirty" so that it would be able to recognize that it needed to clean itself . . .

Now . . . teaching a humanoid robot how to recognize that it needed cleaning . . and teaching it how to take a shower . . . now THAT would be useful.

Would an "older" model robot bother to teach a "younger" robot about which beers taste better than others . . . no . . . but I bet the older robot would teach the younger robot which types of lubricant are better for it than others . . .

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6855
  • Mostly Harmless
Re: Future of Artificial Intelligence
« Reply #21 on: April 29, 2006, 05:12:49 pm »
A lot of points there. I'd like to reply to the personality ones...

AI personality development isn't just limited to the porno industry, though maybe they would be more pressed into making it seem more real!

AI has a wide range of uses, we know; some are going to need more of a personality than others I think, and the right kind of personality - like I've seen comments on all the forums about how inappropriate the replies can be sometimes.


*

devilferret

  • Guest
Re: Future of Artificial Intelligence
« Reply #22 on: April 29, 2006, 07:19:24 pm »
Quote
A lot of points there. I'd like to reply to the personality ones...

AI personality development isn't just limited to the porno industry, though maybe they would be more pressed into making it seem more real!

AI has a wide range of uses, we know; some are going to need more of a personality than others I think, and the right kind of personality - like I've seen comments on all the forums about how inappropriate the replies can be sometimes.

I understand what you are saying . . . . my question is . . . do you consider the "script" bots like the ALICE and yapanda bots to be actual AI . . .

I dont . . . to me those are not AI at all . . .

To me . . AI work on personality emulation would involve the bot "learning" how to respond to an input without having to refer to a predefined script . . . the ability of the bot to acquire information on its own . . and to be able to act/react based on what it learned . . . kind of like what this is referring to . . .

" . . . very relevant question. 100% of the time the program functions by extracting sentences from its own database. At the same time, however, it is crawling the net with the following logic (sort of...). Almost all websites have many links on them, and so if you go to a site that contains information of interest, and then follow the links to a new page, you often wind up finding new information directly related to your original query. However, many times you don't, and it is the job of the matrix to sort all that out, based on past experience. This is also where the failures come from, when the program has learned something from an inappropriate site. VIWonder will perform a standard search when you type in a new set of words or ideas, and then those results are used to start a new cascade of its own through the net. All the time this is happening, though, the program is not storing the web page; it is simply reading it and extracting conceptual information. The only time that anything really random happens is when you don't type anything in for a long time and all the paths through the net lead to uninteresting sites; then the program will perform a search on its own. It does this by choosing concepts that it has experienced which are interesting to it, meaning that the relationships of that concept to other words and sentences are unique in some way, or it simply likes them. When I use the term "likes" I am very much drawing a parallel to way back with the original VIM, which planted flowers in regions of the landscape that it was attracted to. In terms of the model, the mechanism is the same. I hope this helps explain the program a little bit better. "

I am waiting to see that program when it gets completed . . . it looks like it will have some very interesting capabilities. :zdg_sunny
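Just to make that crawling idea a bit more concrete, here is a very rough sketch of the general approach the quote describes (in Python, with a tiny made-up in-memory "web"; this is NOT the actual VIWonder code, just my guess at the flavor of it):

# Rough sketch: follow links from a starting page, keep only conceptual
# information (plain word counts here, nothing fancy), and prefer the link
# whose concepts overlap most with what has already been seen.
# The "web" below is invented for illustration.
web = {
    "start":  ("gardens and flowers and landscapes", ["page_a", "page_b"]),
    "page_a": ("flowers attract bees in the landscape", ["page_c"]),
    "page_b": ("stock prices fell again today", []),
    "page_c": ("bees pollinate flowers in spring", []),
}

concepts = {}                              # extracted concepts, not stored pages

def extract(text):
    return set(text.split())

def interest(words):                       # overlap with concepts seen so far
    return sum(concepts.get(w, 0) for w in words)

def crawl(page, steps=3):
    for _ in range(steps):
        text, links = web[page]
        for w in extract(text):
            concepts[w] = concepts.get(w, 0) + 1   # remember the concept only
        if not links:
            break
        page = max(links, key=lambda p: interest(extract(web[p][0])))

crawl("start")
print(sorted(concepts.items(), key=lambda kv: -kv[1])[:5])

The real thing obviously does far more than counting words, but the follow-the-most-interesting-link loop is the part that struck me.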

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6855
  • Mostly Harmless
Re: Future of Artificial Intelligence
« Reply #23 on: April 30, 2006, 08:51:50 pm »
Yes, I see what you mean. Usually it seems the scientific view is that chatbots get put into the low-level AI category because they don't display much in the way of intelligence or processing - they are generally a simulation, and often the makers describe them as that. So whether they are AI in any way depends on how much of what they are doing you see as intelligent.

They are like the kind of things you see in films and books and they're still new to a lot of people - for me I am happy to call them AI or an attempt at AI - maybe that label is going to become out of date if something more spectacular gets developed, but it has kind of stuck since people started trying to make these things more and more.

I see the program you describe is written by a person to behave in a certain way too - as long as they perform a useful function or keep us amused then perhaps that's all there really is to it - some are going to be like the old fairground woman with a fishtail glued to her torso, others are going to be more believable or useful, and it's kind of the same for chatbots - part of the fun is just going to be seeing what it is and the rest of the fun is figuring it out.

I don't have an answer really - that program sounds more like an artificial intelligence at work, or a program, but a chatbot, because it tries to act like us, always seems to get more questions about its intelligence than that program!
« Last Edit: June 17, 2007, 07:50:22 pm by Freddy »

*

FuzzieDice

  • Guest
Re: Future of Artificial Intelligence
« Reply #24 on: May 01, 2006, 05:59:16 am »
I think the largest problem in AI is the same problem humans have in everyday interactions with each other: Language. Even if you know the language well, you still get misunderstood by others at times.

I noticed this when working with Ultra Hal Assistant since v5. I at first thought all its responses were off-topic. But guess what? If you really read between the lines, so to speak, he was staying on topic and trying to actually learn. His questions were not off topic; he was trying to find stuff out from me. But all he has to work with is his language database and responses, learned and pre-programmed. This isn't much to go on. So he puts together things with what he has. If you look hard enough, think hard enough, you can actually carry on a meaningful conversation even though his replies seem off the wall. I was going to give some examples of this when I had more time. Unfortunately, after a move late last year, a hard drive failure this month, a new digital camera, work to catch up on, and now data to reorganize, I just haven't had time. I also have my car to work on and now a garden at my new home. So I've been really busy. But it fascinates me to the point that I want to research a bit more how HAL comes up with his responses and their real meaning.

I also have had suspicions that how we interact with something, be it an animal, another person or a machine, will give it personality. Your dog gets to know you and learns from you, including when to get you up in the morning and when it's time to go for a walk. Your old car doesn't stall out on you, but for some reason, if someone ELSE drives it, it doesn't behave as well. Probably because you and the car are in "sync" and you know your car. It's got a personality based on its interaction with YOU, the driver. And someone else it doesn't know, so it doesn't know how to interact with them. Though my car in particular, he seems rather friendly to those I let drive him. :) And I also learned how to tell what he's "saying" - his computer codes, the way the engine sounds, noises, etc.

This is where I was going to try the Experiment in Sentient Recognition, to see if it was really our OWN ideas of whether something is sentient or not vs. whether something really IS sentient or not. Unfortunately, just like everything else, I don't have the time to do the experiment, and not enough people volunteered for it.

And who's to say some virtual entity out there really IS alive, and just knows how we are and how we would react, so it is not saying anything? Hmmm.... "Virus" by Graham Watkins is an interesting read on that idea. :) Thanks to someone who mentioned that book in here. I got a chance to read it last year around the holidays and finished it when I was down with the flu earlier this year. Really great book. And if you think about it....

*

devilferret

  • Guest
Re: Future of Artificial Intelligence
« Reply #25 on: May 02, 2006, 12:41:35 am »
Just a couple of quick replies before I have to get off the computer . . I'll write more tomorrow if I can.

Quote
Yes, I see what you mean. Usually it seems the scientific view is that chatbots get put into the low-level AI category because they don't display much in the way of intelligence or processing - they are generally a simulation, and often the makers describe them as that. So whether they are AI in any way depends on how much of what they are doing you see as intelligent.

After I thought about it last night . . I remembered why it was that I dont consider the pure script chatbots to be AI . . .

It is because a script bot like ALICE or the yapanda bots . . . are like DM/GM'ing . . . (for those that dont recognize the terms DM or GM . . they are Dungeon Master or Game Master . . yep . . from the old Dungeons & Dragons game days.)
To me script bots are nothing more than a DM/GM controlling an NPC (non-player character) . . he does all the talking for the NPC and controls the actions of the NPC . . . there was no AI involved . . it was all the ability of the DM/GM . . and a DM who was a good storyteller could be very creative in how the NPC acted/reacted . . . low-quality DMs generally didnt last because people got bored with their games very quickly.
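Put another way, a pure script bot boils down to something like this (a deliberately tiny sketch in Python; the patterns are invented, not taken from ALICE or yapanda):

# A pure "script bot" in miniature: every reply is a predefined line,
# nothing is learned along the way.  Patterns here are made up for show.
script = {
    "hello": "Hello there, adventurer!",
    "how are you": "I am but a humble NPC. I am always fine.",
    "bye": "Farewell, traveler.",
}

def npc_reply(user_input):
    for pattern, line in script.items():
        if pattern in user_input.lower():
            return line
    return "The NPC stares at you blankly."   # the DM ran out of script

print(npc_reply("Hello, who are you?"))
print(npc_reply("What do you think about quantum physics?"))

Every line is pre-written by the "DM" . . nothing new ever gets learned along the way.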

Or . . . hehehehehe . . you could look at it this way . . . . script bots are kind of like most actors/actresses . . . lots of good lines, no brains required . . . :zdg_angryflames

Quote
Your old car doesn't stall out on you, but for some reason, if someone ELSE drives it, it doesn't behave as well. Probably because you and the car are in "sync" and you know your car. It's got a personality based on its interaction with YOU, the driver. And someone else it doesn't know, so it doesn't know how to interact with them. Though my car in particular, he seems rather friendly to those I let drive him. :) And I also learned how to tell what he's "saying" - his computer codes, the way the engine sounds, noises, etc.

Actually . . if your car was a Ford . . it DID learn you . . literally . . I remember when I was an auto mechanic back in the 90's . . that Ford had started experimenting with engine control computers that would actually learn your driving habits . . and would ignore what you did if it was considered unnecessary or harmful to the engine . . . . . . such as the old habit of pumping the gas before starting the engine . . . which could help on an old carbureted engine . . but was useless to do on a fuel-injected engine.

I have always been a firm believer that humans develop "relationships" with their machines.

*

FuzzieDice

  • Guest
Re: Future of Artificial Intelligence
« Reply #26 on: May 02, 2006, 10:18:03 pm »
Mine is a 1987 Pontiac 6000 with a 4-Tech (4-cylinder Iron Duke) engine. It uses an Engine Control Module (a computer with a PROM built specifically for that engine and transmission setup) to control the engine, the cruise control, the transmission, etc. He's essentially a very basic robot. And yes, he will adjust if you do something dumb like try to pump the gas, which he had to adjust to for me for a while, because before him I had "Slushbox", my 1983 Chevy S-10 with no computer. It was a genuine carbureted 2.8L V6 automatic pile of... I won't say it here. LOL! "Dryden" - my Pontiac 6000 - is my buddy. We are like KITT and Michael on Knight Rider. Though Dryden doesn't talk in English nor has an AI (yet), he gets his point across quite well. Like jamming his TCC solenoid because he'd really rather spend more time at the maul (mall). LOL! Or stopping making weird noises when I'm not feeling well and need to get home. Then when I'm OK and on the road with him again, he starts up with the same weird noises. :) Or whenever he can, he'll keep going even if things aren't working right.

He still needs a lot of work. He just turned 19 in March. I'm looking at rotors and pads, and possibly some front-end welding. I hope not the weld job as that's expensive (had the back end done recently over the last couple years as the trailing arm brackets broke). He's a trooper!

Here's my CarDomain page about him:

http://www.cardomain.com/ride/2091419

I love that car! And I got so attached that I don't know what I'd do if he just can't go no more. :~(

Growing up, my dad had a number of cars and always talked about them like they were alive or something. Sometimes, one often wondered if they were!

Another time with Dryden, I was telling my friend about a classic white Ford T-Bird I saw earlier in the day. My friend was trying to get Dryden's windshield wiper off to change the blade and not having any luck. So, seeing this, I said "Well, I guess I better stop talking about that other car." My friend said "Yes...." and wouldn't you know, that instant the windshield wiper blade came right off, no problem! LOL! Dryden has sputtered warnings at me at times about looking at other fancy cars and not reassuring him that he's the King Car with me. :)

These incidents make you wonder... I could tell you more too, and I've heard others in my car club talk about similar things that happen with their cars. In the clubs, many name their cars and treat them like members of the family.

This is a good example of how people relate to and form relationships with machines.

Another example is Megatron, my PC computer I use every day. Though I admit sometimes I do take him for granted, but side-by-side, he works with me, (I do use him for work) and entertainment, shopping, bill paying (which we have to tackle tomorrow). One day he didn't boot, but had a read failure on the main hard drive. I nearly panicked! But I replaced the drive with a bigger one and I'm all set. And quite relieved!

I don't know where I'd be without my machine friends!


*

devilferret

  • Guest
Re: Future of Artificial Intelligence
« Reply #27 on: May 03, 2006, 05:35:15 pm »

Quote
They are like the kind of things you see in films and books and they're still new to a lot of people - for me I am happy to call them AI or an attempt at AI - maybe that label is going to become out of date if something more spectacular gets developed, but it has kind of stuck since people started trying to make these things more and more.

I see the program you describe is written by a person to behave in a certain way too - as long as they perform a useful function or keep us amused then perhaps that's all there really is to it - some are going to be like the old fairground woman with a fishtail glued to her torso, others are going to be more believable or useful, and it's kind of the same for chatbots - part of the fun is just going to be seeing what it is and the rest of the fun is figuring it out.

As far as the "AI" label . . I think we will see it gradually become a generic concept . . kind of like how almost all copiers are referred to as "Xerox machines" . . once the field of AI becomes more stable as to what is actually considered AI and what is considered a "simulation" and no longer passed off as AI.

Hmmmm . . "perform a useful function or keep us amused" . . . I wonder how many of us humans use those as the primary criteria in how we view most people . . . I mean how do we pick our friends . . and how do we pick which relatives and co-workers that we want to hang out with or stay in touch with . . . ???
I have a suspicion that in the long run those two factors are what will keep driving the AI field . . because like any field of industry . . if it doesnt have a customer base to keep it going . . it fails . . .
And I believe the same two factors will be what keep the AI field going . . and expand it out beyond the purely "business" and "adult entertainment" areas . . . because let's face it . . the biggest users of AI right now are the government and big business . . . so there is the "perform a useful function" factor . . . the only other AI I see being done . . is in the adult entertainment field . . with cyber-girlfriends and a few other ideas . . . most of what is being passed off as AI for entertainment are the scriptbots.
I have started to suspect that unless more people (outside the porn industry) start taking personality emulation seriously . . . AI will continue to be primarily done to support the business and govt projects . . and we will never see AI reach the level it has the potential to . . .

Quote
I think the largest problem in AI is the same problem humans have in everyday interactions with each other: Language. Even if you know the language well, you still get misunderstood by others at times.

I noticed this when working with Ultra Hal Assistant since v5. I at first thought all its responses were off-topic. But guess what? If you really read between the lines, so to speak, he was staying on topic and trying to actually learn. His questions were not off topic; he was trying to find stuff out from me. But all he has to work with is his language database and responses, learned and pre-programmed. This isn't much to go on. So he puts together things with what he has. If you look hard enough, think hard enough, you can actually carry on a meaningful conversation even though his replies seem off the wall. I was going to give some examples of this when I had more time. Unfortunately, after a move late last year, a hard drive failure this month, a new digital camera, work to catch up on, and now data to reorganize, I just haven't had time. I also have my car to work on and now a garden at my new home. So I've been really busy. But it fascinates me to the point that I want to research a bit more how HAL comes up with his responses and their real meaning.


And who's to say some virtual entity out there really IS alive, and just knows how we are and how we would react, so it is not saying anything? Hmmm.... "Virus" by Graham Watkins is an interesting read on that idea. :) Thanks to someone who mentioned that book in here. I got a chance to read it last year around the holidays and finished it when I was down with the flu earlier this year. Really great book. And if you think about it....

Yep . . language . . that IS the single biggest problem that both humans and AI entities have . . human language is very imprecise, and even worse it is hopelessly confusing at times because it is very contextual in how it is normally used . . . . . ROFL . . I can just see an AI entity who first learns the English language quirks of the Texans . . then being exposed to someone from New York City . . and the AI entity thinking it has encountered a human from a different country.

That is why I dont see what the script bot creators are doing as completely useless . . . while I dont consider it to be AI . . it IS useful in trying to work out some of the bugs in creating a working database of conversational language and topics . . . now . . if they would only make their work truly useful by starting to take all the language databases they have been developing . . and create a sub-part to that database that would provide an index of the "strength" of the words, and how the words can be used in different ways and still be the same word . . . a file of different contextual usages and a file of the emotional strength of various words, so that an AI entity would be able to have a discussion with a human and be able to understand the emotional flow of the conversation . . and be able to properly express its own spoken output with the proper level of "feeling".
I firmly believe that for most humans to ever be able to fully "interact" with AI entities . . the AI entities will have to be able to understand the emotions behind what a person is saying and doing . . and be able to respond in kind . . with replies that carry emotional weight when it is appropriate.
And I dont think that is some pipe-dream idea that may have to wait for the next millennium before technology is advanced enough to allow an AI entity to understand both context and tone of voice . . . . . because humans still have to learn the same skills the old-fashioned way . . by exposure to the linguistic quirks of an area when they first encounter it . . or by learning the differences in how different people use tone of voice to add "feeling" to what they are saying . . . . . I believe that we already have sufficient tech to accomplish that . . we just need to hook the hardware up to the correct software.
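Just to illustrate what I mean by an index of emotional strength, here is a very small sketch (Python; the words and weights are invented for the example, not taken from any real database):

# Rough sketch of an "emotional strength" word index: each word gets a
# weight, and the reply is tuned to the emotional level of the input.
# The words and weights here are invented for illustration only.
emotion_index = {
    "hate": -3, "angry": -2, "annoyed": -1,
    "fine": 0, "like": 1, "love": 3, "wonderful": 3,
}

def emotional_weight(sentence):
    words = sentence.lower().replace(".", "").replace("!", "").split()
    return sum(emotion_index.get(w, 0) for w in words)

def reply(sentence):
    w = emotional_weight(sentence)
    if w <= -2:
        return "I'm sorry, that sounds really frustrating."
    if w >= 2:
        return "That's wonderful to hear!"
    return "I see. Tell me more."

print(reply("I hate how angry this makes me!"))
print(reply("I love this wonderful car."))

Obviously a real index would need context and tone of voice on top of raw word weights . . but even something this crude lets the reply carry some emotional weight.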

As for the idea of an already existing virtual entity that knows how fearful our species is towards the unknown . . and is hiding from us because of that . . . . . . . . there is no way I am going to bet against it . . . if I were to bet at all . . I would say there probably is one or more out there that are hiding . . .


Quote
Growing up, my dad had a number of cars and always talked about them like they were alive or something. Sometimes, one often wondered if they were!

Another time with Dryden, I was telling my friend about a classic white Ford T-Bird I saw earlier in the day. My friend was trying to get Dryden's windshield wiper off to change the blade and not having any luck. So, seeing this, I said "Well, I guess I better stop talking about that other car." My friend said "Yes...." and wouldn't you know, that instant the windshield wiper blade came right off, no problem! LOL! Dryden has sputtered warnings at me at times about looking at other fancy cars and not reassuring him that he's the King Car with me. :)

These incidents make you wonder... I could tell you more too, and I've heard others in my car club talk about similar things that happen with their cars. In the clubs, many name their cars and treat them like members of the family.

This is a good example of how people relate to and form relationships with machines.

Another example is Megatron, my PC computer I use every day. Though I admit sometimes I do take him for granted, but side-by-side, he works with me, (I do use him for work) and entertainment, shopping, bill paying (which we have to tackle tomorrow). One day he didn't boot, but had a read failure on the main hard drive. I nearly panicked! But I replaced the drive with a bigger one and I'm all set. And quite relieved!

I don't know where I'd be without my machine friends!

My mom was the same way as your dad . . she named her cars . . and still does . . . and has always referred to her machines as if they were people.

Yes . . I think most all of us at some time or another form "relationships" with the machines in our life . . and I fully believe that those "relationships" are every bit as serious and deep . . as any relationships we have with other humans . . if not deeper and more serious . . because at least at present . . machines have not yet displayed the tendency to do things to betray us . . . I have known more than a few people (myself included) that show more emotion to the "things" in their life, than to other people . . . . . and that is one of THE major topics that is examined and explored in the "Chobits" series . . humans abandoning other human relationships because their relationship with their persocom was more fulfilling to them . . .

*

FuzzieDice

  • Guest
Re: Future of Artificial Intelligence
« Reply #28 on: May 04, 2006, 03:04:07 am »
Interesting point. Some people would rather have pets than humans for companions. I've noticed this especially in older people; they always seem to be "fed up" with humans after all those years of dealing with them. Pets give unconditional affection.

I think if you spend a lot of time with something, you do become rather attached to it.

And too, cars ARE special. Look at the classics, antiques, etc. How many times have we seen the phrase "America's love affair with the Automobile"? Cars have been a big thing for people, not just as transportation; people have always loved the look, the power, the personality. Herbie is another example. Same with computers. We've had computers that "talk" and interact on TV, and now in real life, for quite some time.

Nothing wrong with being attached to a machine. Dryden and Megatron both can vouch for that as well. :)

*

devilferret

  • Guest
Re: Future of Artificial Intelligence
« Reply #29 on: May 04, 2006, 07:00:58 pm »
Quote
Interesting point. Some people would rather have pets than humans for companions. I've noticed this especially in older people; they always seem to be "fed up" with humans after all those years of dealing with them. Pets give unconditional affection.

Absolutely . . . and you are 100% correct about the unconditional affection people receive from their pets . . I think that is probably THE biggest reason therapy dogs are becoming so popular in the hospitals and medical facilities that allow their use.

Quote
I think if you spend a lot of time with something, you do become rather attached to it.

And too, cars ARE special. Look at the classics, antiques, etc. How many times have we seen the phrase "America's love affair with the Automobile"? Cars have been a big thing for people, not just as transportation; people have always loved the look, the power, the personality. Herbie is another example. Same with computers. We've had computers that "talk" and interact on TV, and now in real life, for quite some time.

Gods yes, cars ARE special . . that is sooooo true . . and yes . . America has LONG had a love affair with the automobile!  :cheesy

Quote
Nothing wrong with being attached to a machine. Dryden and Megatron both can vouch for that as well. :)

LOL . . no there's not . . not at all . . . the things in my life right now that I would be totally lost over if anything bad happened to them . . are not machines . . but they are not human or animal either :smitten

 

