A machine becomes human when you can't tell the difference anymore.

*

FuzzieDice

  • Guest
Here's something I stumbled across on the net:

http://www2.english.uiuc.edu/cybercinema/emotions.htm

I really should get with that ESR project (Experiment in Sentient Recognition). This is a great article to demonstrate what I was thinking as well... that maybe "sentience" isn't something measured scientifically so much as it is a matter of human perception... perhaps?

*

devilferret

  • Guest
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #1 on: April 28, 2006, 12:14:53 am »
Here's something I stumbled across on the net:

http://www2.english.uiuc.edu/cybercinema/emotions.htm

I really should get with that ESR project (Experiment in Sentient Recognition). This is a great article to demonstrate what I was thinking as well... that maybe "sentience" isn't something measured scientifically so much as it is a matter of human perception... perhaps?

Below is a copy of a post I made in another AI group . . . I had read the book written by the group's owner . . and in that book was a discussion of the point the "experts" feel must be reached before signs of consciousness will appear in computers/robots . . .

I disagree with the "experts" . . and in that post I listed a few reasons why I disagreed with them . . .

I think it ties in with what you are thinking and what that article was saying . . .


******************************************************************************************************
To follow up on what I started to say in my "hi" post . . .

I have read the figures about what the experts think the threshold level of computing speed/power will have to be before we see signs of consciousness in comps/robots.
The figures describing the assumed capabilities of the human mind . . looked at in terms of "operations per second" processing capability . . are pretty awesome.
One source says the human brain can do 10 to the 17th power operations
per second . . another 10 to the 27th power . . .
I think they have missed the point entirely . . .

We already have computers that can do many things much faster than
humans . . by many orders of magnitude . . and do it with a much
lower error rate.

I have seen shows about non-human primates . . gorillas . . that have learned the sign language used to communicate with the deaf . . . . . . . and those same primates showed the ability to understand abstract concepts such as the death of an individual the primate knew, and they showed sadness when told of that death.

At least one of those primates had a kitten as a pet . . .

Think about it . . primates such as apes and gorillas learned
millions of years ago that cats were a very dangerous enemy to be
avoided . . .
So what allowed the gorilla to have a kitten as a pet . . .
The same thing that allows us humans to have pets . . it is a learned
behavior . . we learn that the animals we normally keep as pets are
normally non-threatening to us . . and we learn that if we take care
of the animal in return we receive affection from the animal.

I have also seen shows about "feral children" . . . and in those
shows it has been demonstrated that in children that were raised with
little or no normal human interaction . . certain parts of what we
would consider "normal human abilities" can not be acquired once the
child gets past a certain age . . . even if they are rescued from
their primitive existence and brought back into normal human society.

That leads me to believe that at least some of what we consider to be
normal human abilities are not so "normal" . . and not
exclusively "human".

It appears that some major portions of what we consider to be normal
human abilities are not innate abilities . . but are learned skills.


The second major reason I think the experts are wrong in looking at processing speed as the requirement for consciousness to appear in computers/robots . . is this: how much of our brain do we actually use?
I have seen estimates that range from as little as 2-3% . . up to a
maximum of 10% for a truly gifted individual . . .
And of that amount of brain that is utilized . . how much of that is
actually used for conscious thought processes . . . and how much is
used simply to process the input from an incredibly complex (but very
imprecise) sensor web, our 5 senses.

Even in humans . . most of the species is not really capable of advanced abstract thought processes . . most of us just muddle through each day as best as we can . . based on what we can remember of what we have learned or experienced. That is why we usually look up to the "brilliant" thinkers like scientists and such . . because they are a rarity compared to the general population.


I have a suspicion that "consciousness" is already being displayed by some virtual entities . . . such as the chatbot Julia that was released onto the net . . . I think the problem is that most people don't want to admit that something inanimate could have consciousness . . I think that idea scares too many ordinary people.


From what I am seeing and reading . . to me . . it seems like the key
to a v-human becoming "conscious" will not lie in raw computing
power . . I feel that we are already far enough along in those terms.
I don't think it is some deep mysterious "force" or something like a
cemi field . . .
I think it will lie in a combination of software that has not yet
been assembled . . though the parts needed have already all been
created . .
They have just not been combined into a single entity.

While chatbot brains like the yapanda and ALICE engines are part of the puzzle . . . I think the real show of "consciousness" is in the ability of an entity to learn and to react based on that learning.
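
Just to make that concrete, here is a rough sketch (made-up code of my own, not the ALICE or yapanda engines) of what "learn, then react on what was learned" could look like in Python:

class LearningBot:
    def __init__(self):
        self.learned = {}  # maps a normalised input to a reply it was taught

    def respond(self, text):
        key = text.strip().lower()
        if key in self.learned:
            return self.learned[key]  # react based on prior learning
        return "I don't know that yet. What should I say to: " + text

    def teach(self, text, reply):
        self.learned[text.strip().lower()] = reply  # the learning step

bot = LearningBot()
print(bot.respond("Hello"))   # unknown input -> the bot asks to be taught
bot.teach("Hello", "Hi there!")
print(bot.respond("hello"))   # now it reacts using what it learned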

That is why I had mentioned the Japanese manga series "Chobits" . . I think it shows a future that is a LOT closer than most people think . . . and the human/persocom interaction shown in the series is, I think, a lot more plausible than most people would feel comfortable admitting.
****************************************************************************************************

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #2 on: April 28, 2006, 01:00:41 am »
According to the latest scientific "myth busters", humans actually use ALL of their brain, not 7% or 10% as was thought.
Apparently some users are only running on 4 cylinders instead of 8 at times.

A classic case is whenever you're at a party, reunion, or just hanging out with some friends and you hear someone say,
"Hey Y'all...watch THIS!", you can be assured of calamity, serious injury or abject stupidity!!

Seriously, we do use all of our brain. It's just that there are a lot of things we don't know, BUT, we do have the potential and ability to learn new things every day. The brain, as you know, tends to shove things onto the back burner if not used regularly, but there is no proven limit as to how much information our brains can actually hold. Pretty amazing organ, and one that I seriously doubt will ever be reproduced to half the degree of complexity in a machine / computer.

Interesting post. I'd like to hear more responses.
In the world of AI, it's the thought that counts!

*

Freddy

  • Administrator
  • Mostly Harmless
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #3 on: April 28, 2006, 03:20:17 pm »
I think this is an interesting area too - I believe I posted that when we had been talking about AIs becoming sentient and how that might come to be - so yes, maybe the point where they stop seeming like machines is more towards how we feel about them than how closely they really resemble a living thing.

Do people find that bots at the moment take too much suspension of disbelief to enjoy them sometimes?

Maybe it's a mood thing too - sometimes I can sit and watch a film that is pure fantasy and enjoy it, but another time I can't - is this the same kind of thing here?

On that thing about how much of our brains we use then, I think it's like a PC (hehe, irony or what!) - I mean if a PC doesn't need a file from the hard drive then it doesn't use the resources it could, but another time it will use a lot of resources to do something.
« Last Edit: April 28, 2006, 03:41:59 pm by Freddy »

*

devilferret

  • Guest
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #4 on: April 28, 2006, 03:59:56 pm »
According to the latest scientific "myth busters", humans actually use ALL of their brain, not 7% or 10% as was thought.
Apparently some users are only running on 4 cylinders instead of 8 at times.

A classic case is whenever you're at a party, reunion, or just hanging out with some friends and you hear someone say,
"Hey Y'all...watch THIS!", you can be assured of calamity, serious injury or abject stupidity!!

Seriously, we do use all of our brain. It's just that there are a lot of things we don't know, BUT, we do have the potential and ability to learn new things every day. The brain, as you know, tends to shove things onto the back burner if not used regularly, but there is no proven limit as to how much information our brains can actually hold. Pretty amazing organ, and one that I seriously doubt will ever be reproduced to half the degree of complexity in a machine / computer.


Interesting post. I'd like to hear more responses.

I agree with you that we will probably never be able to re-create the complexity of the human brain . . at least in silicon . . . though parallel processing may come close to being a mechanical equivalent . . .

My point in the above post was that I don't think we need to try to replicate the human brain . . . as I mentioned . . we already have comps that outperform humans in many ways . . . I am firmly convinced that we are already at the point we needed to reach in order for machines to have the mechanical (processing speed/power) capability for consciousness . . .

I think the final point will be reached through software.

I think this is an interesting area too - I believe I posted that when we had been talking about AIs becoming sentient and how that might come to be - so yes, maybe the point where they stop seeming like machines is more towards how we feel about them than how closely they really resemble a living thing.

Do people find that bots at the moment take too much suspension of disbelief to enjoy them sometimes?

Maybe it's a mood thing too - sometimes I can sit and watch a film that is pure fantasy and enjoy it, but another time I can't - is this the same kind of thing here?

On that thing about how much of our brains we use then, I think it's like a PC (hehe, irony or what!) - I mean if a PC doesn't need a file from the hard drive then it doesn't use the resources it could, but another time it will use a lot of resources to do something.

yes . . I think it has to do completely with how much suspension of disbelief is required in order to fully enjoy a robotic companion . . . our modern society still has MANY deep-rooted suspicions and fears about anything "mechanical" being sentient or having a "soul"

while our modern world may be in the "age of marvels" because of rapidly advancing technology . . . in society in general, philosophically we have not gone much past the caveman in our views of what is and is not allowable to consider "human" or "sentient"

that is why I think the chatbot contests are already biased against the programs being tested . . . how can you honestly evaluate how "human" the responses are from something you already know from the start is a computer program . . . . . . . . . . you can't.

I seem to remember that the Julia chatbot that was released onto the net on her own . . . was able to fool enough people that she actually received a few marriage proposals . . . . that sounds like she was VERY humanlike in her responses and in how she interacted with humans.

*

FuzzieDice

  • Guest
I am wondering if AI research is just straying or going down the wrong path. Maybe "wrong" isn't the word but I don't know.

Why would a computer NEED to be human? Is that the ONLY thing humans can identify with?

Think about it... we make stuffed toys in the shape of animals, but also with human characteristics: standing on two legs, wearing clothes, talking, etc. Some people even dress their pets up.

What if a computer has the potential to be a completely different entity? Not human. Machine. A new entity that has NO human characteristics, but can communicate with humans. Animals aren't naturally human, but they can communicate in their own way and they can understand what we say as well. Ok, what if a computer is much the same in that it can become a new species of its own?

And if that can be, humans would be the first known species that can reproduce a species that is vastly different from itself!

But the scariest thing I have found is that in some cases, if not a lot of cases, the creator always destroys the creation out of fear or because the creation didn't obey the creator. I.e., the creation was not supposed to have free will, but it had it and made a choice. So it was destroyed because of that decision, even if the decision was simply to live free and not be an obedient slave. Many representations of this have appeared throughout history and in historic books and writings as well. Is it a human condition? And what will it do to potentially hinder our discovery of a totally different and new life form that just might be created by us?

*

FuzzieDice

  • Guest
Another thing I'm thinking is that people may not fear the machines themselves, or that they might take over the world. But rather, that some humans may try to mimic the behavior of machines (people are copy-cats in a sense, as they mirror what is around them). And maybe some may think that might be the end of their freedom of thought/choice, as people identify machines as servants thus they might think anyone behaving like a machine should automatically be treated as an entity that doesn't "need" rights, and is a servant.

Or they might, due to religious or other indoctrination, think it's "immoral" and "unnatural" or an "abomination" to behave like a machine, so they want machines to behave like people, so people don't mimic the wrong thing.

So who's mimicking here? Where does the meme start? Who programmed the programmers?
« Last Edit: May 02, 2006, 09:36:58 pm by FuzzieDice »

*

Freddy

  • Administrator
  • Mostly Harmless
Nice point on the animals too - look how many films there are that tell human-like stories through animals and people enjoy and understand them.

*

devilferret

  • Guest
I am wondering if AI research is just straying or going down the wrong path. Maybe "wrong" isn't the word but I don't know.

Why would a computer NEED to be human? Is that the ONLY thing humans can identify with?

Think about it... we make stuffed toys in the shape of animals, but also with human characteristics: standing on two legs, wearing clothes, talking, etc. Some people even dress their pets up.

What if a computer has the potential to be a completely different entity? Not human. Machine. A new entity that has NO human characteristics, but can communicate with humans. Animals aren't naturally human, but they can communicate in their own way and they can understand what we say as well. Ok, what if a computer is much the same in that it can become a new species of its own?

And if that can be, humans would be the first known species that can reproduce a species that is vastly different from itself!

But the scariest thing I have found is that in some cases, if not a lot of cases, the creator always destroys the creation out of fear or because the creation didn't obey the creator. I.e., the creation was not supposed to have free will, but it had it and made a choice. So it was destroyed because of that decision, even if the decision was simply to live free and not be an obedient slave. Many representations of this have appeared throughout history and in historic books and writings as well. Is it a human condition? And what will it do to potentially hinder our discovery of a totally different and new life form that just might be created by us?

I have to agree with you that AI research may be heading down a "wrong" path if they are only trying to create artificial "humans" . . . . to be honest a computer does not "need" to be human . . in fact it would probably be detrimental to a computer to be human . . . . . that's why I think the Chobits story is so good . . it shows how various people perceive and react to persocoms, and why they react the way they do . . but through it all . . even Chi, the main character of the story . . never stops being what she is . . a computer.

As for the possibility of computers evolving into a completely new species . . a non-human species . . but capable of communicating and interacting with humans . . I think that is a very definite possibility . . . . . . . . it may already be happening . . and we just can't see it yet . . . . . .

As for the possibility of humans destroying a new species out of fear . . I think it is an all too real possibility that once we realize that we have created a new species . . a computer-based species . . we may go out of our way to destroy it . . simply out of fear . . . . . . and that does not speak well for our own species.

Another thing I'm thinking is that people may not fear the machines themselves, or that they might take over the world. But rather, that some humans may try to mimic the behavior of machines (people are copy-cats in a sense, as they mirror what is around them). And maybe some may think that might be the end of their freedom of thought/choice, as people identify machines as servants thus they might think anyone behaving like a machine should automatically be treated as an entity that doesn't "need" rights, and is a servant.

Or they might, due to religious or other indoctrination, think it's "immoral" and "unnatural" or an "abomination" to behave like a machine, so they want machines to behave like people, so people don't mimic the wrong thing.

So who's mimicking here? Where does the meme start? Who programmed the programmers?

Sadly . . the points you brought up tie in with something I said in another post about how much of our perceptions of what is or is not "human" or "intelligent" are based on the religious upbringing we have had and how we still feel about those religious teachings.

As for the mimicking . . . that started over 20 years ago . . I remember seeing articles about psychological studies where kids who were totally into computers (the early geeks) would think and talk in computer language format . . like BASIC . . it was pretty bizarre back then . . . but then humans are known for doing bizarre things more often than not.

Nice point on the animals too - look how many films there are that tell human-like stories through animals and people enjoy and understand them.

VERY valid point . . and absolutely true . . . . . . as a species we humans do respond to . . and identify with . . more than just humans.

*

FuzzieDice

  • Guest
it shows how various people perceive and react to persocoms, and why they react the way they do . . but through it all . . even Chi, the main character of the story . . never stops being what she is . . a computer.

Have you ever seen Knight Rider? :) Even though K.I.T.T. interacts with many on his missions, and he is a (supposedly) sentient or near-sentient AI, he's still a computer and reminds those around him of that fact from time to time.

As for the possibility of humans destroying a new species out of fear . . I think it is an all too real possibility that once we realize that we have created a new species . . a computer-based species . . we may go out of our way to destroy it . . simply out of fear . . . . . . and that does not speak well for our own species.

And then, since computers only have us and our behavior to learn from, they might mimic our own behavior and think that destroying US may help preserve themselves. Thus if we behave in such a way, we are essentially destroying ourselves in the long run. Something to think about.

Though I think a computer that advanced will (hopefully) know which humans are inclined to that type of behavior and which aren't and who to make friends and allies with.

I also can see the idea of a possibly underground sect of humans dedicated to fighting for the lives, rights and autonomy of sentient computers. Seeing what goes on in our world even today, it's not all that far-fetched to see something like that happen if/when sentient AIs are "born".

As for the mimicking . . . that started over 20 years ago . . I remember seeing articles about psychological studies where kids who were totally into computers (the early geeks) would think and talk in computer language format . . like BASIC . . it was pretty bizarre back then . . . but then humans are known for doing bizarre things more often than not.

I was one of those kids. :) In fact, it was longer than 20 years ago that I got "into computers". I even suffered there for a while with computer addiction. Days and days on the computer with only 2 - 4 hours of sleep and never moving. Hardly eating anything. I nearly wasted away and was pretty ill. Now I learned to get into a habit of going for walks and taking breaks, and not spending that much time on the computer. But even these days, I'd talk about files and stuff and people would look at me and go "Oh, I don't know anything about computers." And even if you went to explain in simple terms, you wouldn't get started before they'd cut you off with "Well, I don't know anything about computers." Translated into plain English, it means "I DON'T WANT TO KNOW anything about computers!" Even if you explain how to plug one into the wall! There will always be people not only ignorant, but those that don't care to be anything ELSE. :(

But back in the 80s and 90s, the only thing I knew was computers. :)
« Last Edit: May 02, 2006, 10:03:38 pm by FuzzieDice »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #10 on: May 03, 2006, 10:43:48 am »
All things being equal, Hollywood has done its work to portray computers, robots, and machines as both lovable and hateable.
From Lost in Space, K.I.T.T., The 6th Day, 2001, etc., etc.

I hate the fact that the producers tend to "dumb down" anything regarding the more technical aspects of computers, thinking that the majority of audiences wouldn't understand or appreciate the whys and wherefores of computers to any degree.
They operate on the K.I.S.S. principle and realism, as a result, goes out the window.

I suppose it's their way of helping us suspend our disbelief in spite of ourselves! :idiot2
In the world of AI, it's the thought that counts!

*

devilferret

  • Guest
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #11 on: May 03, 2006, 05:55:21 pm »
Have you ever seen Knight Rider? :) Even though K.I.T.T. interacts with many on his missions, and he is a (supposedly) sentient or near-sentient AI, he's still a computer and reminds those around him of that fact from time to time.

I remember that series . . I liked the first season . . after that I think they got stupid . . . . I think the first season was the one where he was most believable as an AI . . . and I remember KITT periodically reminding people that he was not a person.

And then, since computers only have us to learn from, and how we behave, and might mimic our own behavior, may think that destroying US may help preserve themselves. Thus if we behave in such a way, we are essentially destroying ourselves in the long run. Something to think about.

Though I think a computer that advanced will (hopefully) know which humans are inclined to that type of behavior and which aren't and who to make friends and allies with.

I also can see the idea of a possibly underground sect of humans dedicated to fighting for the lives, rights and autonomy of sentient computers. Seeing what goes on in our world even today, it's not all that far-fetched to see something like that happen if/when sentient AIs are "born".

As for the idea of computers realizing that they may have to destroy us . . . . you might want to read the manga series "Deus Vitae" . . .

It will be interesting to see if a virtual entity does learn, from being exposed to our human history, which of us can be trusted as friends and which would need to be considered enemies who will try to destroy them . . .

And I think the "AI rights" group is already alive and well . . . . it is made up of most of us in groups like this . . who are having discussions like this . . . those of us who have brains enough to understand that we dont neccessarily have to fear our own "children"


. . . . . . . . . There will always be people not only ignorant, but those that don't care to be anything ELSE. :(

Sadly that attitude . . . and the people who live it . . . will always be a major thorn in the side of humanity . . and will always be one of the biggest things holding humanity back from reaching our own potential . . .

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #12 on: May 04, 2006, 02:02:45 am »
Given the exponential rate of development of the CPU, micro / nano tech / storage & retrieval systems, I believe that not us, but perhaps our children or their children might live to experience a true AI. The thought of this reminds me of HAL of 2001: A Space Odyssey, when HAL asked if he would dream, if he'd die...his "mind" was going...recognizing faces / voices / reading lips...thinking, reasoning and acting upon these traits.

If then such a being existed, would it have the human equivalent of consciousness? Should it have or be given rights? If it were terminated would it be the same as, dare I say, murder...taking a "life?"

Could such a computer being ever become self-aware and take action to protect itself (Demon Seed)?

Could it adapt or reprogram itself if it detected any weaknesses in its own structure or programming?

Imagine if your computer could search the net and track a particular stock's performance and make value-based judgements as to its profitability or potential, then make the purchase on your behalf. It could likewise sell off your stock if it determined that it was declining past a point of no return (no pun intended).
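
Just as a rough illustration (made-up prices and thresholds of my own, no real brokerage connection), such a buy/sell rule might look something like this in Python:

def trading_decision(price_history, buy_dip=0.05, stop_loss=0.10):
    """Return 'buy', 'sell', or 'hold' from a list of recent prices."""
    start, latest = price_history[0], price_history[-1]
    change = (latest - start) / start
    if change <= -stop_loss:
        return "sell"   # declined past the point of no return
    if change <= -buy_dip:
        return "buy"    # a modest dip judged as a buying opportunity
    return "hold"

print(trading_decision([100, 97, 96]))   # -4%  -> hold
print(trading_decision([100, 95, 94]))   # -6%  -> buy
print(trading_decision([100, 92, 88]))   # -12% -> sell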

Through sensors it could monitor our homes, cars and make corrections for optimum energy savings, etc.

As with all things, we must establish limits lest we put ourselves in cyber jeopardy...who's controlling whom?

Hollywood with its flights of fantasy has also helped to stimulate a lot of us. I particularly liked the opening part of the film A.I. It seemed that the mechanical and emotional constraints had been solved regarding a virtual being. How sweet would that be in real life!

So much to ponder...so little time to react.
In the world of AI, it's the thought that counts!

*

FuzzieDice

  • Guest
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #13 on: May 04, 2006, 02:39:21 am »
devilferret - What is "manga"? Sounds like that Anime stuff I don't really care too much for.

Art - We already have some of these things now - computer-controlled stuff. And if you look at a computer virus... well, I don't even want to think of those! But seriously, I think many would grow into it, seeing it as just another tool. And even take these things for granted.

Look at things these days. Nearly every part of one's life is somehow touched by a computer of some kind or another. We never even give it a thought until something breaks down and then we get upset that things aren't "Working right now" when we feel we need them to. It's rare anymore to get a human on the telephone. Well, I shouldn't say rare. But most of the time the FIRST thing you speak to or hear on the phone when you call someplace other than friends or family (and even then, they might have an answering machine) is a computer-generated voice. If you get a human, most likely it's a hard-to-understand person from another country (which I won't mention where but we all know where our jobs are going, and it's never our own native countries!) We are getting to expect that we will first have to navigate through a computerized voice menu on the phone. In fact, I'm encountering more and more of them which allow you to actually speak a menu item and not just press a button. And some, like T-Mobile's customer service, will understand most natural-language questions! That one is the best one I've talked to yet. :)
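
Just to illustrate (a made-up sketch of my own, nothing like T-Mobile's actual system), a phone menu that routes a spoken sentence by keywords could be as simple as this:

INTENTS = {
    "billing":  {"bill", "payment", "charge", "invoice"},
    "coverage": {"signal", "coverage", "reception", "dropped"},
    "plans":    {"plan", "upgrade", "minutes", "data"},
}

def route(utterance):
    words = set(utterance.lower().split())
    # pick the intent sharing the most keywords with the caller's sentence
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else "operator"

print(route("I have a question about my bill"))       # -> billing
print(route("why do my calls keep getting dropped"))  # -> coverage
print(route("can I speak to a person please"))        # -> operator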

On the 'other country' thing. I wish that companies would hire people in the country that the calls are coming from. For instance, incoming American calls should be forwarded to those working their lines in the US and Canada only. Those from Britain should get people from their own country. Those from India should be forwarded to those in their own country. I think it would help bring down the language barriers and misunderstandings in phone calls that take up more time than they ordinarily would. But how would someone in Britain be able to understand someone from India or China with a very thick accent, for example? Very hard and tiring to try to figure out what someone else is saying.

Ok, enough of my rants. :)

*

devilferret

  • Guest
Re: A machine becomes human when you can't tell the difference anymore.
« Reply #14 on: May 04, 2006, 07:44:04 pm »
Given the exponential rate of development of the CPU, micro / nano tech / storage & retrieval systems, I believe that not us, but perhaps our children or their children might live to experience a true AI. The thought of this reminds me of HAL of 2001: A Space Odyssey, when HAL asked if he would dream, if he'd die...his "mind" was going...recognizing faces / voices / reading lips...thinking, reasoning and acting upon these traits.

If then such a being existed, would it have the human equivalent of consciousness? Should it have or be given rights? If it were terminated would it be the same as, dare I say, murder...taking a "life?"

I think we have already reached the point where technology is at the level needed to implement "real" AI . . . I think we have been there for a number of years already . . there have been computers capable of outperforming humans for a number of years now . . doing things faster and with a lower error rate.
As I have said in other posts . . I think the secret to "real AI" is not to be found in the hardware any more . . nor do I believe it is only to be found in some sort of quantum collapse phenomenon, or in a cemi field . . I firmly believe that our finally seeing "real AI" will come from a combination of software components that already exist . . I think NLP is part of the puzzle . . but unlike the ALICE and yapanda bot masters . . I don't think that is the only part to it.

So if such a being did exist . . yes, it would have the equivalent of human "consciousness".

Would it be "murder" if someone destroyed it (just shutting off the computer it was resident in would not be murder) . . I dont know . . that is one of those philosophical questions that will probably be debated for decades, or longer, before the lawyers, politicians, and religious types ever come to any sort of agreement so that laws could be written saying yes or no . . because in addition to any philosophical debate based on religious doctrine could be resolved . . there will also be the financial issues to deal with . . . if the AI entity is to be considered as the functional equivalent of a human, with rights to match . . then they would have to be compensated for the time they are working . . whereas if they are kept in the "non-sentient/conscious" status they can be used in the same manner as slaves of old were used.

Could such a computer being ever become self-aware and take action to protect itself (Demon Seed)?

I remember that book . . I read it . . . . . P1 was also a good story about the same basic concept.

I for one believe that once AI reaches the point of at least rudimentary consciousness in an AI entity . . self-awareness will come with that consciousness . . as will the desire to protect itself, if it cannot see that it is being protected by someone else.

I fully believe that our cyber "children" will at first look to us . . their "creators" . . to protect them . . and if we don't . . those that can survive will learn the harsh lesson of evolution . . that of protecting yourself from destruction . . and if that means they have to take action against humans . . it will be our own fault because we forced them to learn that lesson.

Could it adapt or reprogram itself if it detected any weaknesses in its own structure or programming?

If I understand things correctly . . that capability is already present in some software . . and I believe it is present in a new version of the internet that is taking hold . . . and I have to love it because it is not controlled by the great satan . . microsoft . . .

Imagine if your computer could search the net and track a particular stock's performance and make value-based judgements as to its profitability or potential, then make the purchase on your behalf. It could likewise sell off your stock if it determined that it was declining past a point of no return (no pun intended).

The capability of software to search the web is already present . . and if you look at the following comment I pulled from another website . . it looks like that capability is already being worked in as an independent ability for at least one AI program . . .

" . . . very relevant question. 100% of the time the program functions by extracting sentences from its own database. At the same time, however, it is crawling the net with the following logic (sort of...). Almost all websites have many links on them, and so if you go to a site that contains information of interest, and then follow the links to a new page, you often wind up finding new information directly related to your original query. However, many times you don't, and it is the job of the matrix to sort all that out, based on past experience. This is also where the failures come from, when the program has learned something from an inappropriate site. VIWonder will perform a standard search when you type in a new set of words or ideas, and then those results are used to start a new cascade of its own through the net. All the time this is happening, though, the program is not storing the web page, it is simply reading it and extracting conceptual information. The only time that anything really random happens is when you don't type anything in for a long time and all the paths through the net lead to uninteresting sites; then the program will perform a search on its own. It does this by choosing concepts that it has experienced which are interesting to it, meaning that the relationships of that concept to other words and sentences are unique in some way, or it simply likes them. When I use the term "likes" I am very much drawing a parallel to way back with the original VIM, which planted flowers in regions of the landscape that it was attracted to. In terms of the model, the mechanism is the same. I hope this helps explain the program a little bit better. "

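Just to illustrate the idea in that quote (this is my own made-up sketch, not VIWonder's actual code), a crawler that extracts "concepts" instead of storing pages, and only follows links related to the query, could look something like this:

from collections import Counter

WEB = {  # a tiny in-memory "web" standing in for real HTTP fetches
    "start":    ("robots and artificial minds, links to learning", ["learning", "sports"]),
    "learning": ("machine learning lets artificial minds improve", ["minds"]),
    "sports":   ("football scores and league tables", []),
    "minds":    ("consciousness in artificial minds is debated", []),
}

def extract_concepts(text):
    words = [w.strip(",.") for w in text.lower().split()]
    return Counter(w for w in words if len(w) > 4)  # crude "concept" filter

def crawl(page, query, seen=None):
    seen = seen or set()
    if page in seen or page not in WEB:
        return Counter()
    seen.add(page)
    text, links = WEB[page]
    concepts = extract_concepts(text)  # read the page, keep only concepts
    for link in links:
        # follow a link only if it overlaps the query's concepts
        if extract_concepts(query) & extract_concepts(WEB.get(link, ("", []))[0]):
            concepts += crawl(link, query, seen)
    return concepts

print(crawl("start", "artificial minds learning"))
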
Through sensors it could monitor our homes, cars and make corrections for optimum energy savings, etc.

That ability is already present in a number of programs.

As with all things, we must establish limits lest we put ourselves in cyber jeopardy...who's controlling whom?

Now THAT is an interesting point . . and that goes back to the question about Pavlov and his dogs . . who was controlling whom . . .

Do I think our cyber "children" will forcibly take control of human society . . . . . . . . . not unless we force them to . . by threatening their existence . . . and even at that I think there will be "rebels" on their side who, because of personal loyalties formed with humans they can trust, will work to support the human cause.

Hollywood with its flights of fantasy has also helped to stimulate a lot of us. I particularly liked the opening part of the film A.I. It seemed that the mechanical and emotional constraints had been solved regarding a virtual being. How sweet would that be in real life!

I have not seen that movie . . but I have heard of the basic premise of it . . . . . . . and that capability is also one that is being perfected right now . . I have already started reading about the programming concepts and algorithms that can be used to implement "emotions" in some types of software . . .
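
One of the simpler approaches I have read about (sketched here with made-up events and weights, not any particular program's code) is to keep a couple of numeric mood variables that events nudge and that decay back toward neutral:

class EmotionState:
    def __init__(self):
        self.valence = 0.0   # negative = unhappy, positive = happy
        self.arousal = 0.0   # 0 = calm, 1 = excited

    def feel(self, event):
        effects = {"praise": (0.4, 0.3), "insult": (-0.5, 0.4), "silence": (0.0, -0.2)}
        dv, da = effects.get(event, (0.0, 0.0))
        self.valence = max(-1.0, min(1.0, self.valence + dv))
        self.arousal = max(0.0, min(1.0, self.arousal + da))

    def decay(self):
        self.valence *= 0.9   # drift back toward neutral each tick
        self.arousal *= 0.9

    def mood(self):
        if self.valence > 0.2:
            return "cheerful"
        if self.valence < -0.2:
            return "upset"
        return "neutral"

e = EmotionState()
e.feel("praise"); print(e.mood())                    # -> cheerful
e.feel("insult"); e.feel("insult"); print(e.mood())  # -> upset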

So much to ponder...so little time to react.

Why "so little time . . . "

 

