Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...

*

AndyGoode

Quote
Obeying and understanding are mostly independent things.

Yes, I agree. I was about to make that same comment myself, but I jumped ahead and decided to just consider what the poster had likely *intended*, not what the words used actually meant.

*

AndyGoode

Quote
Which rules? Everything has its limits, for example by physics; living beings by their DNA, their experiences, etc.
There is no fundamental difference between humans and a possible AI.

That's a good and interesting point. I agree with both of you, but it depends on how big a scope is being considered.

Here's an example of being unable to overcome limits: if you line up caterpillars on the rim of a flower pot, they will follow each other in an endless loop until they drop dead from exhaustion and starvation. (http://www.wolverton-mountain.com/articles/caterpillars.htm) This is partly because their brains cannot learn sufficiently or grasp higher-level concepts.
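The mechanism is easy to see in a toy model. The sketch below (a hypothetical simulation, not taken from the linked article) puts caterpillars on a discrete circular rim, each one steering only by the caterpillar immediately ahead; with no other input, the procession just cycles through the same states forever and never leaves the rim:

```python
# Toy model of a caterpillar procession on a flower-pot rim.
# Assumptions (mine, for illustration): a discrete ring of RIM positions,
# NUM evenly spaced caterpillars, and a rule where each one advances
# one position toward its leader. Since every leader also advances,
# the gaps never change and no caterpillar can ever escape the ring.

NUM = 8   # caterpillars
RIM = 40  # discrete positions around the rim

# Place the caterpillars evenly around the rim.
positions = [i * (RIM // NUM) for i in range(NUM)]

def step(positions):
    # Each caterpillar moves one position forward along the rim,
    # following the one ahead of it.
    return [(p + 1) % RIM for p in positions]

# Run until the configuration repeats, proving the loop is closed.
seen = set()
state = tuple(positions)
steps = 0
while state not in seen:
    seen.add(state)
    state = tuple(step(list(state)))
    steps += 1

print(f"Procession returned to a previous state after {steps} steps")
```

Because the follow-the-leader rule has no way to represent "leave the rim," the system is provably periodic: it revisits its starting configuration after one full circuit, which is the formal version of the caterpillars' limit.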

Here's an example of unlimited scope: suppose there exists a certain geometrical phenomenon, such as a certain pattern of alignment of corners, that appears only when the number of dimensions and the number of geometrical figures become large. Human brains may never be able to understand the pattern, because humans literally cannot visualize that many dimensions or that many objects at once, and may lack enough neurons even to recognize that such a pattern exists. By extension, there is no end to the complexity that can exist in this way, so there will always be patterns, and therefore concepts, that can never be grasped by any physical entity. I suppose that's a consequence of Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).

*

Zero

Quote
I think it is
- a technical system with a software kernel (implementing learning, etc.)
- which should do what we want it to do
- which should be intelligent, so that
  - it can understand what we want and
  - it can act on what we want/intend.

If you want it to do what you decide, it has to understand what you want. But once its understanding grows beyond yours, it will interpret what you want in ways you can't even understand, which is why it will appear to be "out of control". See what I mean now?

 

