Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...

*

AndyGoode

  • Guest
Obeying and understanding are mostly independent things:

Yes, I agree. I was about to make that same comment myself, but I jumped ahead and decided to just consider what the poster had likely *intended*, not what the words used actually meant.

*

AndyGoode

  • Guest
Which rules? Everything has its limits, for example those set by physics; living beings are limited by their DNA, their experiences, etc.
There is no fundamental difference between humans and a possible AI.

That's a good and interesting point. I agree with both of you, but it depends on how big a scope is being considered.

Here's an example of being unable to overcome limits: If you line up caterpillars on the rim of a flower pot, they will follow each other in an endless loop until they drop dead from exhaustion and starvation. (http://www.wolverton-mountain.com/articles/caterpillars.htm) This is partly because their brains cannot learn enough or grasp higher-level concepts.

Here's an example of unlimited scope: Suppose there exists a certain geometrical phenomenon, such as a certain pattern of alignment of corners, that appears only when the number of dimensions and the number of geometrical figures become large. Human brains may never be able to understand that pattern, because humans literally cannot visualize that many dimensions, or hold that many objects in mind at once, or may not even have enough neurons to recognize that such a pattern exists. By extension, there is no end to the complexity that can exist in this way, so there will always be patterns, and therefore concepts, that can never be grasped by any physical entity. I suppose that's a consequence of Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).

*

Zero

  • Trusty Member
Quote
I think it is
- a technical system with a software kernel (implementing learning, etc.)
- which should do what we want it to do
- which should be intelligent, so that
  - it can understand what we want and
  - it can act on what we want/intend.

If you want it to do what you decide, it has to understand what you want. But once its understanding grows beyond yours, it will interpret what you want in ways you can't even understand, which is why it will appear to be "out of control". See what I mean now?
Thinkbots are free, as in 'free will'.

 

