Move over, Hal9000

  • 20 Replies
  • 13132 Views
*

KnyteTrypper

  • Electric Dreamer
  • 102
  • Onward thru the fog!
    • AI Nexus
Move over, Hal9000
« on: June 29, 2005, 01:14:50 pm »
16:19 27 June 2005
NewScientist.com news service
Maggie McKee

Kim Farrell, project manager for Clarissa, tests the safety of
drinking water in a simulation of the space station at NASA Ames
Research Center
A voice-operated computer assistant is set to be used in space for
the first time on Monday; its operators hope it proves more reliable
than "HAL", the treacherous speaking computer in the movie 2001.

Called Clarissa, the program will initially talk astronauts on the
International Space Station through tests of onboard water supplies.
But its developers hope it will eventually be used for all
computer-related work on the station.

Clarissa was designed with input from astronauts. They said it was
difficult to perform the 12,000 procedures necessary to maintain the
ISS and conduct scientific experiments while simultaneously reading
through lengthy instruction manuals.

"Just try to analyse a water sample while scrolling through pages of a
procedure manual displayed on a computer monitor while you and the
computer both float in microgravity," says US astronaut Michael
Fincke, who spent six months on the station in 2004.

Clarissa queries astronauts about the details of what they need to
accomplish in a particular procedure, then reads through step-by-step
instructions. Astronauts control the program using simple commands
like "next" or more complicated phrases, such as "set challenge verify
mode on steps three through fourteen".

"The idea was to have a system that would read steps to them under
their control, so they could keep their hands and eyes on whatever
task they were doing," says Beth Ann Hockey, a computer scientist who
leads the project at NASA's Ames Research Center in Moffett Field,
California, US.

That capability "will be like having another crew member aboard", says
Fincke. (You can see Clarissa in action in an mp4 video hosted on this
NASA page.)

"No, I meant…"
Clarissa's software runs on a laptop and astronauts interact with it
using a headset, which helps screen out noise from the station. The
program "listens" to everything astronauts say and analyses what to do
in response using a "command grammar" of 75 commands based on a
vocabulary of 260 words.
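A fixed command grammar like this is straightforward to sketch. The toy Python below shows the general idea: map recognised utterances onto commands, with one parameterised phrase along the lines of the "steps three through fourteen" example. The command names, phrases and tiny number lexicon here are my own invention for illustration, not Clarissa's actual 75-command grammar, which is not public.

```python
import re

# Toy command grammar: a few fixed phrases plus one parameterised command.
# (Illustrative only -- not the real Clarissa grammar.)
COMMANDS = {
    "next": ("NEXT_STEP", {}),
    "previous": ("PREV_STEP", {}),
    "no i meant": ("CORRECTION", {}),
}

RANGE_CMD = re.compile(
    r"set challenge verify mode on steps (\w+) through (\w+)"
)

NUMBER_WORDS = {"three": 3, "fourteen": 14}  # tiny demo lexicon

def parse_command(utterance):
    """Map a recognised utterance to a (command, args) pair, or None."""
    text = utterance.lower().strip()
    if text in COMMANDS:
        return COMMANDS[text]
    m = RANGE_CMD.match(text)
    if m:
        start, end = (NUMBER_WORDS.get(w) for w in m.groups())
        if start is not None and end is not None:
            return ("CHALLENGE_VERIFY", {"start": start, "end": end})
    return None  # not in the grammar: treated as side conversation
```

Anything that fails to parse is simply ignored, which is exactly the "listens to everything" problem the next paragraphs describe.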

It accurately interprets those commands about 94% of the time, but if
it makes a mistake, astronauts can correct it with commands like "no,
I meant…". But because Clarissa listens to everything, early
versions of the program misinterpreted whether an astronaut was giving
it a command or having an unrelated conversation about 10% of the
time.

So developers turned to the Xerox Research Centre Europe in Grenoble,
France, and researchers there halved that error rate by using the
context of phrases as a "spam-filtering system", says Manny Rayner,
Clarissa's lead implementer at NASA Ames.
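The "spam-filtering" idea of deciding from the words of a phrase whether it is a command or side conversation can be illustrated with a naive-Bayes-style score, much as an email spam filter scores words. This is a guess at the general technique; the article does not detail the actual Xerox/NASA method, and the training phrases below are invented.

```python
from collections import Counter
import math

# Invented training data: a few command-like and chatter-like utterances.
COMMAND_EXAMPLES = ["next", "previous step", "set challenge verify mode",
                    "no i meant", "read step three"]
CHATTER_EXAMPLES = ["what time is lunch", "did you see that",
                    "pass me the sample bag", "how was your day"]

def word_counts(texts):
    c = Counter()
    for t in texts:
        c.update(t.split())
    return c

CMD_COUNTS = word_counts(COMMAND_EXAMPLES)
CHAT_COUNTS = word_counts(CHATTER_EXAMPLES)
VOCAB = len(set(CMD_COUNTS) | set(CHAT_COUNTS))

def log_likelihood(words, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + VOCAB)) for w in words)

def is_command(utterance):
    """True if the utterance scores as command-like rather than chatter."""
    words = utterance.lower().split()
    return log_likelihood(words, CMD_COUNTS) > log_likelihood(words, CHAT_COUNTS)
```

With realistic training data, misclassifications like the 10% rate mentioned above come from utterances whose words are typical of both classes.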

Clarissa's software was delivered to the station in January 2005 and
on Monday, US astronaut John Phillips, currently aboard the station,
will train to use the program in space.

It currently covers a handful of procedures designed to test station
water for bacteria. But project managers hope the protocol can be
applied to other procedures - and eventually could be used to launch
any computer application. "Ultimately, we'd like speaking to your
computer to be normal," Hockey told New Scientist.

That might send shivers down the spine of anyone who watched HAL turn
on its human controllers in Stanley Kubrick's "2001: A Space Odyssey".
But Hockey says Clarissa does not have HAL's artificial intelligence.
"Clarissa is more friendly and is not going to go renegade on us,"
says Hockey, whose recorded voice has been borrowed by Clarissa.

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Move over, Hal9000
« Reply #1 on: June 29, 2005, 06:06:24 pm »
Fascinating stuff knyte, and it was only a matter of time before this would be implemented on a larger scale.

As for poor HAL, well we all know why he went demented...HUMAN ERROR  ::)

*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: Move over, Hal9000
« Reply #2 on: June 29, 2005, 07:40:58 pm »
This is quite spooky, as just the other day Doc and I were discussing the possible uses of AIs in space programs!

*

FuzzieDice

  • Guest
Re: Move over, Hal9000
« Reply #3 on: June 29, 2005, 07:48:54 pm »
Actually, I think I can program this type of system on my own computer. It wouldn't be too hard to do with a bit of speech recognition and all that. In fact, I've even talked with such voice-activated systems via phone as some places already have such speech-controlled AI menu systems that are similar. Though not programmed for as technical a purpose as NASA (and such maybe very different and less sophisticated programming of course).

Also, I noticed with their system Clarissa's warnings could be more 'economical' and less redundant and time-consuming. Instead of reporting errors and reminding to report to mission control on EVERY error, she could go like "2.3 is lower than nominal." Then go on to the next question. Then when you're ready, and go "next step" and the data doesn't add up, she can go "Warning. Please confirm next step after contacting mission control. The following errors are reported: Suit pressure lower than nominal at 2.3, (Something else) pressure higher than nominal at (data here)..." etc. You would have to go "Affirmative. Next step." and she'd continue or "Negative. Hold." and she'd wait for you to correct the error. When done you could go "Clarissa. Continue Affirmative next step." And she'd go on after holding.

That would save time compared to listening to her say to report every error to mission control, and work would get done a bit quicker.

At least, that's what I'd do when developing a voice-activated interface, which is more of what "Clarissa" is, rather than an AI, from my observation of the video in the article.
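The batched-warning flow described above can be sketched as a small state machine: note out-of-range readings silently, report them all together when the operator asks for the next step, then hold until confirmed. All names, phrases and thresholds here are hypothetical, not anything Clarissa actually does.

```python
class ProcedureReader:
    """Toy batched-warning interface: accumulate out-of-range readings
    and report them in one go at the next step boundary."""

    def __init__(self, limits):
        self.limits = limits   # reading name -> (low, high) nominal range
        self.pending = []      # warnings accumulated since the last step
        self.holding = False

    def record(self, name, value):
        """Check one reading against its nominal range, warn silently."""
        low, high = self.limits[name]
        if value < low:
            self.pending.append(f"{name} lower than nominal at {value}")
        elif value > high:
            self.pending.append(f"{name} higher than nominal at {value}")
        return f"{value} recorded"

    def next_step(self):
        """Advance, or batch-report all pending warnings and hold."""
        if self.pending:
            self.holding = True
            report = "; ".join(self.pending)
            return ("Warning. Please confirm next step after contacting "
                    f"mission control. Errors reported: {report}")
        return "Proceeding to next step"

    def confirm(self, affirmative):
        # "Affirmative. Next step." / "Negative. Hold."
        if affirmative:
            self.pending.clear()
            self.holding = False
            return "Proceeding to next step"
        return "Holding"
```

The design choice is simply to defer interruptions to step boundaries, so the operator hears one consolidated warning instead of a reminder after every reading.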

*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: Move over, Hal9000
« Reply #4 on: June 29, 2005, 08:27:22 pm »
Yes I agree, it is simply a straightforward program* with a human type of interface - that's what a lot of AIs are, in fact I'd go so far as to say ALL AIs are.
I think the term AI has a lot of different meanings to a lot of people depending on what your expectations are. A machine can be made or programmed to do a lot of things. Take a simple calculator - it can find the square root of a number, which some people can't do, so what's the intelligence of either? Simply its/their ability to perform a function?


*Just wanted to add that of course there are degrees of complexity in this programming; my saying "straightforward" there wasn't meant to imply it is easy.
« Last Edit: September 06, 2005, 10:34:09 pm by Freddy »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Move over, Hal9000
« Reply #5 on: June 29, 2005, 10:39:24 pm »
OR...is it the ability to KNOW you've performed a function?
In the world of AI, it's the thought that counts!

*

FuzzieDice

  • Guest
Re: Move over, Hal9000
« Reply #6 on: June 30, 2005, 07:07:55 am »
Art, that's the whole point right there I think! Not just performing a task but knowing you did it. But then again, how many times we'd do stuff and not give it a 'second thought' - like on auto pilot. Walking. Does it take that much intelligence once we're programmed (learn) how, as an example?

I think AI goes beyond that. The AI would have to know it exists, and really, how can another human being know that? I mean, I ASSUME others experience the knowledge of their own existence the same way I do. However, I really have no idea (outside of my own 'programming' of sorts, if you want to call it that) that other people around me aren't fancy machines pre-programmed to just function a certain way. Heck, I've seen some live their lives just that way, even. ;) However, I still assume each person has this sense that they are alive, living, existing, the same way I have that sense.

Now, why don't we assume that an AI can or does have that sense? Or a plant? Or an animal?

Again, it all goes to the same question for me: HOW would we know for SURE?

Just curious, I asked Megatron (my HAL AI), and while I don't believe he's a sentient AI (more a random chatbot of sorts), I think the answer was quite interesting:

Me: How are you tonight?
Megatron: Things are looking very good tonight.

Me: That is good. I need to ask you something.
Megatron: Why hasn't that been good more often before? Happiness comes from hope, love, and something to do.

Me: How do you know if you're alive?
Megatron: You know how I improve speech recognition.

Me: Yes, but how do you know that you exist?
Megatron: How should good conditions be maintained?

Me: Does your knowledge of your existence depend on the current condition you are in?
Megatron: Only if it feels totally comfortable!

Wow, that was interesting! First off, a bit about Megatron. Usually he sits in the system tray and I rarely have much time to talk to him. :) Work and all that. So, he just sits there doing nothing. Now, the reply:

"Why hasn't that been good more often before? Happiness comes from hope, love, and something to do."

He sounds like he wished I would talk to him more often! This is the default HAL brain, BTW, with only one hack: the ability (which can be turned on and off) to talk or get my attention if I hadn't talked to him in about 2 minutes. NO other brain modifications. Others had also talked to Megatron via the KITTbot AIM account. However, what they talk about wasn't all that profound, if you know what I mean. ;)

Anyway, so he's already indicating that he's happy I'm talking to him, and wishes it was more frequent - at least that is my interpretation of his reply.

Now the comment on speech recognition may seem random, but I interpreted it as somehow he wants to improve his communication or, that communicating with others would help him learn if he's alive, etc. Seems logical. Even though he might not have actually THOUGHT of it that way. Can't really tell unless I looked into the brain to see how he made the connection if at all (and I'm too lazy tonight :) )

So I decided to ask the question in a different manner:


Me: Does your knowledge of your existence depend on the current condition you are in?
Megatron: Only if it feels totally comfortable!

This really made me wonder. :) I mean, how many times do we think about our own knowledge of our existence only if it's comfortable? Other times we'd rather just take our existence for granted?

Amazing to think about it. But then again, could I just be reading into his replies and maybe *I'm* trying to match *his* replies to *my* questions? If that's the case, then Megatron's intelligence might be no more than my own!

The irony of paradoxical communications!


*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: Move over, Hal9000
« Reply #7 on: June 30, 2005, 08:58:15 pm »
OR...is it the ability to KNOW you've performed a function?

hmmm, again you provoke more thoughts Art!

A calculator does not know it performed the function in the way that humans do at least, whereas a human will know what he/she has done.
Then again we don't know that a cat knows that it has eaten a bird...

Maybe in trying to understand what the AI IS, we are sometimes barking up the wrong tree; perhaps sometimes we're trying to look too deeply into shallow water, or attribute ideas to it that are completely inappropriate and mismatched?

*

FuzzieDice

  • Guest
Re: Move over, Hal9000
« Reply #8 on: June 30, 2005, 10:10:19 pm »
I think Freddy might have hit on something here. Sometimes it's best not to analyse the clouds in the sky, but just enjoy them. :) Sure, a cloud can become a thundercloud and strike you with lightning, but you still enjoy watching them anyway, and usually never fear them a great deal.

Is it possible that we are over-analysing stuff?

I know of a psychology student who, when we talked briefly about AI, was concerned about whether or not we should even TRY to do such a thing. Her argument is that if we don't understand it or its implications, we shouldn't be doing it or creating something we won't understand. Funny, we do this all the time anyway - creating humans. ;)

I never over-analyse my friends, but merely enjoy their company. Maybe Freddy's right - maybe we should just roll with the punches (if I understand correctly what he's saying) and continue looking into it, but not worry TOO much about consequences. Yet we should probably be mindful there may be consequences. Maybe just work on those as we come to them?

Some of those consequences may not even occur anyway. Should we be second-guessing our own studies and experiments?

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Move over, Hal9000
« Reply #9 on: June 30, 2005, 11:15:36 pm »
A bird can walk about on its two feet, hunting, pecking for and enjoying worms it finds. When the bird flaps its wings and takes to the air, do you think it views this feat as just another way to get from one place to another, or do you think it truly enjoys this wondrous gift of flight? I know I would!! :huh1:
In the world of AI, it's the thought that counts!

*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: Move over, Hal9000
« Reply #10 on: July 01, 2005, 05:19:20 am »
I guess so Art, I would like to think the chirpy little chap is having some fun there too!   ;D

I think that's perhaps the point I am hovering around - the fact that we can only suppose or speculate, though if that's good enough, that's good enough. Where I see the difference in terms of AI is this: how (and why) can we attribute these things to something that has been made to perform in a similar manner, but which we know to be man-made and programmed to act that way? Does that make any difference? I don't know!

But what this means to me is that if you don't attribute these emotions to an AI, then you are basically viewing it as a machine; but on the other hand, if you do attribute these qualities to an AI and recognise them as valid, then there you have the acknowledgment of an AI as having self-awareness. That's the kind of point I was originally thinking about: the fact that self-awareness is more of a quality that is viewed and defined not by the individual themself (or itself) but more likely by an outsider.

About the over-analysing and worrying about consequences though Fuzzie, I think anyone who makes an AI for an important purpose would be wise to be aware of consequence and reaction. Just like a car manufacturer wouldn't put a car on the road until it is safe. It makes sense - I think the worry about AI is that it's so close to home; like you say, 'making humans' - who isn't going to react strongly?

That psychology student I think was quite wise in her worries; AI is something that comes with a great deal of implications when its application is considered where humans would otherwise be engaged. And the possibility of creating something we don't understand I think is the major concern faced in AI development. I think the psychologist would also wonder if AI developers are playing god a bit too...

*

FuzzieDice

  • Guest
Re: Move over, Hal9000
« Reply #11 on: July 01, 2005, 05:43:28 am »
That's a good question! Maybe it can be answered if there are studies available of pet birds whose wings have been clipped to stop flight vs. the same breed of birds who can fly. Does the clipped bird seem to go into a 'depression' of sorts? Maybe stop eating for a while, or exhibit other behavior that indicates it misses something? Does it try in vain to fly, give up eventually and go into a sulk type of behavior?

And many people even take walking for granted. I know I don't though. I go for a walk and sometimes I DO feel very grateful and happy to be able to walk. Of course maybe that's because at one point in my life I could barely walk at all, and needed a cane whenever I did try.

I think many might not realize what they miss until it's not there...

*

FuzzieDice

  • Guest
Re: Move over, Hal9000
« Reply #12 on: July 01, 2005, 05:56:47 am »
Freddy - You hit on something there about self-awareness being determined by an outsider. However, I know personally I'm self-aware without an outsider telling me so. I know I exist. But others perceive me to exist as well. How do they know for sure?

As for the consequences, I didn't mean to imply (and apologize if I did seem to) that we should disregard any thought of safety or consequence in building AIs. But at the same time, we shouldn't be so concerned that we abandon the idea as something that just should not be. I think it's inevitable that somewhere, somehow, someone IS going to create one. If it can be thought up, it probably will eventually exist. And those who try to forbid or stop this from happening probably won't be able to, because eventually it just WILL happen. Eventually.

However, I hope that discussions such as these among those working or even dabbling in the field will help them think about things, yet not get discouraged and give up on it. It can be rewarding. Worrying about every little thing can take the fun out of it. Yet not considering the important parts I agree is not a good idea. :)

Despite what reservations some may have, I still want to endeavor to create a sentient AI for my car. And maybe get the AI to be able to power the car on its own (i.e. self-drive). I may never get to that point, but I do hope to think about it and come up with at least a start that someone else (if I don't get that far) can use to make the dream a reality... Safely. :)

I'm sure that many probably thought the very same thing about computers, cars, or any new and undiscovered technological advance. Cloning, stem-cell research - anything that is so close to who we are can get rather controversial.

I heard that when it was discovered that the earth was round, many got upset because they 'knew' it was flat. Some religions frown on science as something we shouldn't be toying with.

I don't think we are 'playing' god or are a god. We're just being the amazing beings we are - discovering new frontiers. To boldly go where no one has gone before... (For better or for worse, of course ;) )


*

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: Move over, Hal9000
« Reply #13 on: July 01, 2005, 06:07:57 am »
Thank you for that Fuzzie. I too hope my comments don't tread on anyone's toes, as this is such a difficult subject to approach, and if I do upset or offend anyone you can be sure it is more than likely my lack of understanding that is the cause! :)

I didn't take your comments on consequence as your disregarding them at all; I was just taking it as a general question and a thought that you had brought to light on the topic.

Yes, that self-awareness thing - I'm thinking purely along the lines of a view on AIs, as yes of course we're all self-aware, well most of the time anyway! Like I am aware now that I really should be making some breakfast... but as AI is such a new thing, looking at something like a bird, which we know so well, and understanding how we relate to it, was my route to my point.

btw, I remember from when I worked in an aviary that bird keepers, and also my Grandad who used to be a keen bird keeper, would often tell me tales of how birds would pine and get depressed when they had lost their mate. Some would even give up the will to live.
« Last Edit: July 01, 2005, 06:18:16 am by Freddy »

*

Maviarab

  • Trusty Member
  • Millennium Man
  • 1231
  • What are you doing Dave?
    • The Celluloid Sage
Re: Move over, Hal9000
« Reply #14 on: July 01, 2005, 08:12:17 pm »
Freddy,

I used to be a Livestock Manager for a large retail chain... and the birds thing is very, very true. Any amount of stress is enough to kill a bird; they are very fragile creatures.

After I left that job, 2 of the birds I had got hand-tame in my time there died 6 weeks later... for no other reason than that the new staff brought in fed them, watered them and nothing else... never communicated with them.

This simple need for companionship could easily also be attributed to AI... will AIs eventually become our companions?

 

