Strong AI (symbolic primarily) - approaches, ideas, philosophical foundations...

*

AndyGoode

  • Guest
Obeying and understanding are mostly independent things:

Yes, I agree. I was about to make that same comment myself, but I jumped ahead and decided to just consider what the poster had likely *intended*, not what the words used actually meant.

*

AndyGoode

  • Guest
Which rules? Everything has its limits — for example by physics, living beings by their DNA, their experiences, etc.
There is no fundamental difference between humans and possible AI.

That's a good and interesting point. I agree with both of you, but it depends on how big a scope is being considered.

Here's an example of being unable to overcome limits: if you line up caterpillars on the rim of a flower pot, they will follow each other in an endless loop until they drop dead from exhaustion and starvation. (http://www.wolverton-mountain.com/articles/caterpillars.htm) This is partly because their brains cannot learn sufficiently or grasp higher-level concepts.

Here's an example of unlimited scope: suppose there exists a certain geometrical phenomenon, such as a certain pattern of alignment of corners, that appears only when the number of dimensions and the number of geometrical figures become large. Human brains may never be able to understand the pattern because humans literally cannot visualize that number of dimensions or that many objects at once, and may not have enough neurons to recognize that such a pattern even exists. By extension, there is no end to the complexity that can exist in this way, so there will always be patterns, and therefore concepts, that can never be grasped by any physical entity. I suppose that's a consequence of Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).

*

Zero

Quote
I think it is
- a technical system with a software kernel (implementing learning etc.)
- which should do what we want it to do
- which should be intelligent, so that
  - it can understand what we want and
  - it can act on what we want/intend.

If you want it to do what you decide, it has to understand what you want. But once its understanding grows beyond yours, it will interpret what you want in ways you can't even understand, which is why it will appear to be "out of control". See what I mean now?
