Self Perpetuating Initiative


HS

Self Perpetuating Initiative
« on: August 03, 2019, 08:30:09 am »
We can create the greatest reasoning machine the world has known, but logic is the tip of the iceberg, and there is no current to make it move. Trying to get a logical processor that's untethered from outside events to churn things out on its own is like expecting a perpetual motion machine to power itself. It just runs on a different kind of energy, one level higher: something like the effects of energy. You need to connect physical reality to your processor somehow in order to make it go through a productive succession of logical steps. It needs a source for its actions; they can't spring out of thin air, no matter the level of intelligence. You need to take in more than you put out, because there are always losses. So you need a broad scoop for the events going on around you, then you need to condense and convert that information in a series of steps to turn it into an abridged, usable format for a data-processing engine with fairly limited resources, making use of the data at each step, of course. First you have the body and physical sensations, then emotion, then awareness, then logic. They form a chain of diminishing informational mass but increasing precision. The one behind has the heft to make the one in front do the dirty work for which it lacks the dexterity.
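To make that chain concrete, here is a minimal toy sketch (my own illustration; every name in it is invented, not a real API). Each stage takes a larger volume of low-precision input and hands a smaller, more precise summary to the next one, and the final action feeds back into the body:

Code:
# Toy sketch of the condensation chain: body -> emotion -> awareness -> logic.
# Each stage reduces informational mass but increases precision.

def body(raw_events):
    """Broad scoop: take in everything, keep coarse physical readings."""
    return [e for e in raw_events if e["intensity"] > 0.1]

def emotion(sensations):
    """Condense sensations into a couple of crude summary signals."""
    valence = sum(s["intensity"] * s.get("valence", 0.0) for s in sensations)
    return {"valence": valence, "arousal": len(sensations)}

def awareness(feeling):
    """Pick out the one thing that currently matters most."""
    return {"focus": "threat" if feeling["valence"] < 0 else "opportunity"}

def logic(attended):
    """Smallest, most precise stage: choose one concrete action."""
    return "withdraw" if attended["focus"] == "threat" else "approach"

raw = [{"intensity": 0.9, "valence": -1.0}, {"intensity": 0.05}]
print(logic(awareness(emotion(body(raw)))))  # "withdraw"; the action then
                                             # feeds back into the body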

Aside: This poses an interesting question. If less evolved creatures possess just physical sensation, or just physical sensation and emotion... maybe a creature more evolved than homo sapiens wouldn't merely have better logic, but rather a whole new level, a new toolkit for dealing with the world one step beyond logic. What could such a thing look like?


AndyGoode

Re: Self Perpetuating Initiative
« Reply #1 on: August 03, 2019, 01:52:02 pm »
If you had a more focused question, maybe I could answer it. For now, I can just confirm, via Marvin Minsky, that you're right that logic is insufficient for intelligence that deals with the real world; humans use real logic only sparingly:

----------

(p. 186)
      18.1 MUST MACHINES BE LOGICAL?

   What's wrong with the old arguments that lead us to believe that if machines could ever think at all, they'd have to think with perfect logic? We're told that by their nature, all machines must work according to rules. We're also told that they can only do exactly what they're told to do. Besides that, we also hear that machines can only handle quantities and therefore cannot deal with qualities or anything like analogies.
   Most such arguments are based upon a mistake that is like confusing an agent with an agency. When we design and build a machine, we know a good deal about how it works.
(p. 186)
   When do we actually use logic in real life? We use it to simplify and summarize our thoughts. We use it to explain arguments to other people and to persuade them that those arguments are right. We use it to reformulate our own ideas. But I doubt that we often use logic actually to solve problems or to "get" new ideas. Instead, we formulate our arguments and conclusions in logical terms after we have constructed or discovered them in other ways; only then do we use verbal and other kinds of formal reasoning to "clean things up," to separate the essential parts from the spaghetti-like tangles of thoughts and ideas in which they first occurred.
   To see why logic must come afterward, recall the idea of solving problems by using the generate and test method. In any such process, logic can be only a fraction of the reasoning; it can serve as a test to keep us from coming to invalid conclusions, but it cannot tell us which ideas to generate, or which processes and memories to use. Logic no more explains how we think than grammar explains how we speak; both can tell us whether our sentences are properly formed, but they cannot tell us which sentences to make. Without an intimate connection between our knowledge and our intentions, logic leads to madness, not intelligence. A logical system without a goal will merely generate an endless host of pointless truths like these:

A implies A.
P or not P.
A implies A or A or A.
If 4 is 5, then pigs can fly.

(p. 187)
   For generations, scientists and philosophers have tried to explain ordinary reasoning in terms of logical principles--with virtually no success. I suspect this enterprise failed because it was looking in the wrong direction: common sense works so well not because it is an approximation of logic; logic is only a small part of our great accumulation of different, useful ways to chain things together. Many thinkers have assumed that logical necessity lies at the heart of our reasoning. But for the purposes of psychology, we'd do better to set aside the dubious ideal of faultless deduction and try, instead, to understand how people actually deal with what is usual or typical. To do this, we often think in terms of causes, similarities, and dependencies. What do all these forms of thinking share? They all use different ways to make chains.

Minsky, Marvin. 1986. The Society of Mind. New York: Simon and Schuster.
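
Minsky's generate-and-test point is easy to see in miniature. Here is a toy sketch of my own (nothing from the book): the generator proposes candidates freely, and logic only filters out the invalid ones; it never decides what gets proposed in the first place.

Code:
import random

# Toy generate-and-test: logic acts purely as a filter on candidates.

def generate():
    """Idea generation: logic plays no part in this step."""
    return [random.randint(-10, 10) for _ in range(20)]

def logical_test(candidate):
    """Logic as a test: reject invalid candidates, nothing more."""
    return candidate > 0 and candidate % 3 == 0

valid = [c for c in generate() if logical_test(c)]
print(valid)  # which ideas got generated was never up to the logic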


HS

Re: Self Perpetuating Initiative
« Reply #2 on: August 03, 2019, 07:11:30 pm »
I'll try to clarify. I'm trying to come up with a chain of processes that would allow for a self-perpetuating machine intelligence. We all know the problem: robots don't do things of their own volition to an extent we would recognize as sentience. So I'm thinking this requires layering some processes on top of each other, in a sort of hierarchical pyramid. From the bottom up it would go: Artificial Body Intelligence (ABI) → Artificial Emotional Intelligence (AEI) → Artificial Awareness (AA) → Artificial Logical Intelligence (ALI). The body drives basic emotion, emotion motivates awareness, awareness directs logic, and logic guides the body, making it gather new inputs and generate new emotions, and the cycle continues. Now I just gotta figure out how to combine this model with the narrative intelligence model.  O0
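Here's a toy sketch of that loop (the layer names follow my abbreviations above; everything inside them is invented purely for illustration). The point is that ALI's output changes what ABI senses next, so after the first input the cycle keeps feeding itself:

Code:
import random

# Toy sketch of the self-perpetuating ABI -> AEI -> AA -> ALI cycle.

def abi(world, last_action):
    """Artificial Body Intelligence: act on the world, sense the result."""
    world["state"] += 1 if last_action == "explore" else -1
    return world["state"] + random.uniform(-0.5, 0.5)  # noisy sensation

def aei(sensation):
    """Artificial Emotional Intelligence: tag the sensation with valence."""
    return {"valence": 1.0 if sensation > 0 else -1.0, "raw": sensation}

def aa(feeling):
    """Artificial Awareness: decide whether the feeling deserves attention."""
    return {"salient": abs(feeling["raw"]) > 1.0, "feeling": feeling}

def ali(attention):
    """Artificial Logical Intelligence: pick the body's next action."""
    if attention["salient"] and attention["feeling"]["valence"] < 0:
        return "retreat"
    return "explore"

world, action = {"state": 0}, "explore"
for step in range(5):  # each pass through the loop seeds the next one
    action = ali(aa(aei(abi(world, action))))
    print(step, action, world["state"])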


AndyGoode

Re: Self Perpetuating Initiative
« Reply #3 on: August 03, 2019, 07:50:18 pm »
Quote from HS: "So I'm thinking this requires layering some processes on top of each other, in a sort of hierarchical pyramid."

OK, I think I understand now: you're looking for a hierarchical architecture.

I don't think of brains as operating strictly that way. First, whatever processor you use must deal with real-world data. That's essential, and I even define intelligence to require that. Remember how, in a recent thread, I mentioned that I believe machines/computers should be given the same IQ tests as humans, *especially* Raven's Progressive Matrices (RPMs)? Well, RPMs are visual problems, not numerical problems, not word problems, and not quite logical problems either, which is why they are such a good start: they approach real-world data. RPMs will take you only part of the way, though. Without getting into details, probably *the* key problem in AI is to figure out the representation system used by higher-level biological brains. I suspect that logic, awareness, vision, motor output, and maybe even emotions use close to the same representation, so if you get the representation right, all the pieces fall into place and can communicate with each other. So far nobody knows the representation method of the human brain, at least at a useful intermediate level of complexity.

Second, it's also well known that intelligence requires *goals*, not instructions. Biological organisms have goals, such as satisfying hunger or seeking shelter, and the ways they choose to achieve those goals are just details that are up to those organisms to decide. My definition of intelligence also specifies that an intelligent system be goal-directed. This is nothing new. For example, the (applied AI) language Prolog works exactly that way: the programmer supplies a goal, and the software tries to achieve that goal. (Prolog's details are actually more complicated, but basically that's what happens.)
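For anyone who hasn't used Prolog, here is a rough Python sketch of that goal-directed idea (a toy propositional backward-chainer of my own; real Prolog unification is far more involved). You hand the system a goal, and it works backward through the rules to discover how the goal can be satisfied:

Code:
# Toy backward chaining in the spirit of Prolog's goal direction.
facts = {"hungry", "has_money"}
rules = {                      # head: subgoals that together imply it
    "can_buy_food": ["has_money"],
    "should_eat":   ["hungry", "can_buy_food"],
}

def prove(goal):
    """Try to achieve `goal` from the facts and rules, depth-first."""
    if goal in facts:
        return True
    subgoals = rules.get(goal)
    return subgoals is not None and all(prove(g) for g in subgoals)

print(prove("should_eat"))  # True: we supplied the goal, not the steps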

Third, it sounds to me like machine learning (of an as-yet-unknown type) solves the self-perpetuating problem you mentioned. The more a system learns, the more it can generalize (i.e., perform induction), and the more it can generalize, the more it can think up specific examples within that generalization (i.e., perform deduction), and so on, in a never-ending cycle of self-improvement. If you can develop a system that does both induction and deduction, with a few goodies like associative memory thrown in, then I believe your system would have whatever logic processes are needed for any problem or goal whatsoever.
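As a very rough sketch of that cycle (entirely my own toy example, not a real learning algorithm): the system induces a rule from its examples, deduces new specific cases from the rule, and feeds those cases back in as new data.

Code:
# Toy induction/deduction cycle: generalize, then generate specifics.

def induce(examples):
    """Induction: generalize observed numbers to a candidate rule."""
    return "even" if all(n % 2 == 0 for n in examples) else "any"

def deduce(rule, limit):
    """Deduction: generate specific cases the rule predicts."""
    step = 2 if rule == "even" else 1
    return range(0, limit, step)

examples = {0, 2, 4}
for cycle in range(3):
    rule = induce(examples)            # generalize from what we know
    examples.update(deduce(rule, 10 + 4 * cycle))
    print(cycle, rule, sorted(examples))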

----------

(p. 318)
   In any case, one aftermath of the controversy with teleologists was that many scientists in other realms became so afraid of making similar mistakes that the very concept of purpose became taboo throughout science. Even today, most scientists regard it as an abomination to use "anthropomorphic" or "intentional" language in connection with anything but persons or higher animals. This burdened the science of psychology with a double-barreled handicap. On one side, it made psychologists regard many of their most important problems as outside the scope of scientific explanation. On the other side, it deprived them of many useful technical ideas--because such concept words as "want," "expect," and "recognize" are among the most effective ever formed for describing what happens in human minds. It was not until the "cybernetic revolution" of the 1940s that scientists finally realized there is nothing inherently unscientific about the concept of goal itself and that attributing goals to evolution was bad not because it was impossible, but simply because it was wrong. Human minds do indeed use goal-machinery, and there is nothing wrong with recognizing this and bringing technical theories about intentions and goals into psychology.


Minsky, Marvin. 1986. The Society of Mind. New York: Simon and Schuster.

(p. 314)
   As the 1970s drew to a close, knowledge representation was perhaps the most hotly debated topic in artificial intelligence. At the 1977 International Joint Artificial Intelligence Conference, a panel with representatives from the entire spectrum of opinions, ranging from the most formal to the most contingent, drew shouts and cheers from the nearly one thousand scientists present, acting
(p. 315)
as if they were watching a football game. As I sat among them, amused by the noise, I thought how much at odds with the stereotype of the cool, disinterested scientist this demonstration was. More important, what a marvelous and accommodating structure science has, for sooner or later the issue would be resolved on the basis of the best choice--maybe a mode of knowledge representation which hadn't even yet been dreamed up--and the partisanship would disappear, or more accurately, find its expression in the next big issue.

McCorduck, Pamela. 2004. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, Massachusetts: A K Peters.
« Last Edit: August 04, 2019, 08:48:55 pm by AndyGoode »

 

