Artificial Intelligence - why and why not. Let's share.

  • 3 Replies
  • 2749 Views
exaeresis

  • Roomba
  • 3
Artificial Intelligence - why and why not. Let's share.
« on: September 28, 2011, 08:22:43 pm »


So, finally got some time to start the famous topic.
The title says almost everything, but before we begin I think it is necessary to say a few things. The topic touches (or may touch) on ethical subjects, so what is written here may hurt someone's feelings; please avoid turning this into any kind of ideological war. This thread is not meant to be dogma; its aim is to give everyone the opportunity to share their knowledge and opinions. Historically, the question "can machines think?" became, as one might expect, a debate. I wrote a thesis on this subject, and what I learned is that the only way to improve is to share. What happens is that you read an argument about the feasibility (or infeasibility) of intelligent agents and your first thought is: he is right, it is convincing. Then you read a counter-argument and everything falls apart. This is the heart of knowledge; this is how knowledge grows and evolves.
As I said, this thread is meant to be a long-term thread; if you would like to share your opinion, or post something you have learned, you are free to do so. If you do not yet have an opinion on the subject, this thread may help you form one. Since what I would like to create is something like a debate, anyone who writes their own opinion also accepts that they may receive a counter-argument. That does not mean saying: you are stupid, I am the genius here. In a debate people are supposed to have different points of view. What matters is being able to defend your opinion, or to change it where needed, in order to make it stronger.

This was a necessary introduction. I will not start by saying too much myself, since I would like to join the discussion later. However, I think it could be interesting to quote one particular argument of Alan Turing's, which can give the discussion a good push: the argument he calls the 'Heads in the Sand' objection. This objection comes principally from intellectuals, and it can be summarized with the following statement:
The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.
As you probably noticed, this is one of the most common reactions. I am sure you have already heard it, perhaps not in the same words, but with the same meaning. For instance, phrases like:
"Machines will exterminate us."
In his work, Alan Turing tries to understand and refute several such arguments by attacking particular opinions (not physically, of course). In this case he does not. The reason is simple: for Turing this is not a valid objection to the question "can machines think?" because it does not prove anything. It is a fear about what might follow if there were thinking machines, not a proof that machines are incapable of thinking. What Turing says is that these claims generally come from intellectuals because we, as human beings, tend to think that we are superior to every other living form precisely because we are able to reason. In fact, long ago animals were considered inferior, and humans were not considered animals, precisely because of our ability to reason. Today we know that this may not be the case, but the preconception has survived. In other words, for many of us the idea of thinking machines is unacceptable, because reasoning is what makes us human and somehow gives us importance.

Russell and Norvig mention something in their monumental book that may be interesting; Turing says it too, although not directly. Consider the following two questions:
  • Can machines fly?
  • Can machines swim?
I think most of us would answer the first question affirmatively: airplanes can fly. The second one, no: boats and submarines do not actually swim, because the verb "to swim" has come to mean moving through the water by movement of body parts. In Russian, for instance (I am quoting them here, so I believe it is true), the word for "swim" does apply to ships. What they want to point out is that the word "think", as a property of machines, is still young, and we implicitly tend to deny it; it feels strange, unacceptable. What Turing says is:
Quote
Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
What I like about this quotation is that, for Turing, the feasibility of thinking machines depends first of all on our willingness to accept it. If we decide to keep denying, unconditionally, that thinking can be one of their properties, it does not really matter whether they actually can think, or whether they already do, because we would not accept it anyway. So it is a matter of time. This brings us to another argument against the feasibility of thinking machines, proposed by Geoffrey Jefferson:
Quote
Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.  No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

The main problem with this argument is that the only way to prove whether a machine really feels something is to be the machine. We may ask it whether it likes a piece of music, or a poem, but how can we know whether it really feels that? The same applies to us: how can I know whether the person in front of me is telling the truth when they answer "yes" to my "do you really feel what you say?" The only way is to be that person; they are the only one who can know whether something about them is true or false. And that is not the only problem: even if we assume that a machine is telling the truth about a feeling, we still have to accept it. We may simply deny it; we may not listen to it. If we start out convinced that no machine can actually think, does it matter whether a particular machine really thinks? Clearly it does not, because we would not accept it anyway.

Another remark along the same lines comes from Drew McDermott, talking about Deep Blue, the IBM supercomputer built to play chess that beat the world champion. He said:
Quote
Saying Deep Blue does not really think about chess is like saying an airplane does not really fly because it does not flap its wings.
The point here is that we must open our minds. Machines may be subject to certain limitations, that is undeniable; but how do we know that we are not too? And more importantly, how does this affect the feasibility of thinking machines? Historically, for instance, some objections were based on things that not even we, the intelligent beings, are able to do. Yet they were used to argue the infeasibility of intelligent machines, when they should first be applied to ourselves, to consider whether "intelligent" is actually the suitable word for us even given those limitations. I am talking about the so-called Lucas-Penrose constraint, which I can go into in a later post if somebody wants. I would summarize the Deep Blue quote by saying that we should look at intelligence from a different point of view. Who says that "our" intelligence is the only one possible? There could be different kinds of intelligent behavior, and just because we do not possess one of them, that does not make it pointless.

So, as you may imagine, my opinion at the moment is that we should not have preconceptions, nor should we deny things unconditionally by screaming "oh my god, machines will exterminate us!" We have not yet even accepted the feasibility of thinking machines, so it does not make sense to fear what may follow. That fear is clearly important and has to be discussed, but there are other things to take care of first.

As I said, if somebody is interested I may talk about the Lucas-Penrose constraint. I hope this discussion will grow thanks to your posts too.

/discuss.

Diesel

  • Trusty Member
  • Bumblebee
  • 32
Re: Artificial Intelligence - why and why not. Let's share.
« Reply #1 on: September 28, 2011, 11:19:45 pm »
Surely you have answered your own question: imagination. Teach a computer imagination and it will think for itself. Now there's a program for you. 8)
Is the glass half full or half empty? Maybe my vessel is too big.

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • Colossus
  • 5865
Re: Artificial Intelligence - why and why not. Let's share.
« Reply #2 on: September 29, 2011, 11:47:16 am »
That imagination then becomes the ability to do or create anything with everything it has learned or experienced. It just needs motivation or the ability to act upon these ideas via demonstration (music, art, literature, conversation, etc.).
In the world of AI, it's the thought that counts!

claude2

  • Trusty Member
  • Colossus
  • 6646
  • are us machineries?
Re: Artificial Intelligence - why and why not. Let's share.
« Reply #3 on: September 30, 2011, 06:53:08 pm »
It would be very smart to have a virtual intelligence we could appeal to: knowledge about intelligence kept as a virtual reference bank. It could make up for the shortcomings of human conflict over chronology and numbers, which shape management, points of view, and economic positions. :)
welcome to my world!
the doors we open and close each day decide the way we live... Flora Whittemore

It is a proverb, sent by my friend Rutanya Alda, actress (Amityville II).

 

