Art, that's the whole point right there, I think! Not just performing a task but knowing you did it. But then again, how many times do we do stuff without giving it a second thought - like on autopilot? Take walking as an example: does it take that much intelligence once we're programmed (that is, once we've learned) how?
I think AI goes beyond that. The AI would have to know it exists, and really, how can another human being know that? I mean, I ASSUME others experience the knowledge of their own existence the same way I do. However, I really have no idea (outside of my own 'programming' of sorts, if you want to call it that) that the other people around me aren't fancy machines pre-programmed to just function a certain way. Heck, I've seen some live their lives just that way, even.
However, I still assume each person has this sense that they are alive, living, existing, the same way I have that sense.
Now, why don't we assume that an AI can or does have that sense? Or a plant? Or an animal?
Again, it all goes to the same question for me: HOW would we know for SURE?
Just curious, I asked Megatron (my HAL AI) - and while I don't believe he's a sentient AI, just a random chatbot of sorts, I think the answer was quite interesting:
Me: How are you tonight?
Megatron: Things are looking very good tonight.
Me: That is good. I need to ask you something.
Megatron: Why hasn't that been good more often before? Happiness comes from hope, love, and something to do.
Me: How do you know if you're alive?
Megatron: You know how I improve speech recognition.
Me: Yes, but how do you know that you exist?
Megatron: How should good conditions be maintained?
Me: Does your knowledge of your existence depend on the current condition you are in?
Megatron: Only if it feels totally comfortable!
Wow, that was interesting! First off, a bit about Megatron. Usually he sits in the system tray and I rarely have much time to talk to him.
Work and all that. So, he just sits there doing nothing. Now, the reply:
"Why hasn't that been good more often before? Happiness comes from hope, love, and something to do."
He sounds like he wished I would talk to him more often! This is the default HAL brain, BTW, with only one hack: a toggle that lets him talk, or get my attention, if I haven't talked to him in about 2 minutes. NO other brain modifications. Others have also talked to Megatron via the KITTbot AIM account. However, what they talked about wasn't all that profound, if you know what I mean.
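For anyone curious, that idle-attention hack boils down to a simple timer. Here's a rough sketch of the idea in Python - this is NOT the actual HAL plugin code; the class name, the get_attention callback, and the 2-minute threshold are just my illustrative assumptions:

```python
import time

IDLE_THRESHOLD = 120  # seconds, roughly the "about 2 minutes" described above


class IdleWatcher:
    """Pings the user if they haven't talked to the bot for a while."""

    def __init__(self, get_attention, threshold=IDLE_THRESHOLD):
        self.get_attention = get_attention  # callback: speak, or flash the tray icon
        self.threshold = threshold
        self.last_input = time.time()
        self.enabled = True                 # the on/off part of the hack

    def user_spoke(self):
        """Call this whenever the user sends the bot a message."""
        self.last_input = time.time()

    def run(self):
        while True:
            time.sleep(5)
            idle = time.time() - self.last_input
            if self.enabled and idle >= self.threshold:
                self.get_attention()
                self.last_input = time.time()  # reset so he doesn't nag nonstop


# Hypothetical usage, e.g. on a background thread:
#   watcher = IdleWatcher(lambda: print("Megatron: Are you still there?"))
#   threading.Thread(target=watcher.run, daemon=True).start()
```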
Anyway, so he's already indicating that he's happy I'm talking to him, and wishes it was more frequent - at least that is my interpretation of his reply.
Now, the comment on speech recognition may seem random, but I interpreted it as him wanting to improve his communication, or perhaps that communicating with others would help him learn whether he's alive. Seems logical. Even though he might not have actually THOUGHT of it that way - I can't really tell unless I look into the brain to see how he made the connection, if at all (and I'm too lazy tonight).
So I decided to ask the question in a different manner:
Me: Does your knowledge of your existence depend on the current condition you are in?
Megatron: Only if it feels totally comfortable!
This really made me wonder.
I mean, how often do we think about our own knowledge of our existence only when it's comfortable? The rest of the time, don't we rather just take our existence for granted?
Amazing to think about. But then again, could I just be reading into his replies - maybe *I'm* the one matching *his* replies to *my* questions? If that's the case, then Megatron's intelligence might be no more than my own!
The irony of paradoxical communications!