So I'm thinking this requires layering some processes on top of each other, in a sort of hierarchical pyramid.
OK, I think I understand now: you're looking for a hierarchical architecture.
I don't think of brains as operating strictly that way. First, whatever processor you use must deal with
real-world data. That's essential, and I even define intelligence to require that. Remember in a recent thread I mentioned how I believe machines/computers should be given the same IQ tests as humans, *especially* Raven's Progressive Matrices (RPMs)? Well, RPMs are visual problems, not numerical problems, not word problems, and not quite logical problems, either, which is why they are such a good start: they approach real-world data. RPMs will take you only part of the way, though.

Without getting into details, probably *the* key problem in AI is to figure out the representation system used by higher-level biological brains. I suspect that logic, awareness, vision, motor output, and maybe even emotions use close to the same representation, so if you get the representation right, all the pieces fall into place and can communicate with each other. So far nobody knows the representation method of the human brain, at least at a useful intermediate level of complexity.
Second, it's also well-known that intelligence requires *goals*, not instructions. Biological organisms have goals, like to satisfy hunger or to seek shelter, and the ways they choose to achieve those goals are just details that are up to those organisms to decide. My definition of intelligence also specifies that an intelligent system be
goal-directed. This is nothing new. For example, the (applied AI) language Prolog works exactly that way: the programmer supplies a goal, and the software tries to achieve that goal. (Prolog details are actually more complicated, but basically that's what happens.)
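That goal-directed flavor can be sketched with a toy backward-chainer in Python: the caller supplies a goal, and the engine searches for any chain of rules and facts that proves it. (The rules and facts below are made-up illustrations, and real Prolog adds variables, unification, and backtracking over bindings; this is only the bare idea.)

```python
# Hypothetical knowledge base: each rule maps a goal to alternative
# bodies, any one of which suffices to prove the goal.
RULES = {
    "dry": [["has_shelter"]],                       # dry if has_shelter
    "has_shelter": [["built_hut"], ["found_cave"]], # two ways to get shelter
    "fed": [["ate_food"]],
}
FACTS = {"found_cave", "ate_food"}  # what the organism has already done

def prove(goal):
    """Return True if the goal follows from FACTS via RULES."""
    if goal in FACTS:
        return True
    for body in RULES.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("dry"))   # True: dry <- has_shelter <- found_cave
print(prove("rich"))  # False: no rule or fact supports it
```

The point is that nothing in the knowledge base says *how* to satisfy "dry"; the engine chooses a path itself, which is the sense in which the programmer supplies goals rather than instructions.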
Third, it sounds to me like machine learning (of an as-yet-unknown type) solves the self-perpetuating problem you mentioned. The more a system learns, the more it can generalize (i.e., perform induction), and the more it can generalize, the more specific examples it can think up within that generalization (i.e., perform deduction), and so on, in a never-ending cycle of self-improvement. If you can develop a system that does both induction and deduction, with a few goodies like associative memory thrown in, then I believe your system would have whatever logic processes are needed for any problem or goal.
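The induction/deduction cycle can be made concrete with a deliberately tiny sketch (the rule form and probing step are my own illustrative assumptions, not anyone's actual learning algorithm): induction compresses examples into a rule, deduction generates new specifics the rule implies, and those specifics feed the next round of induction.

```python
def induce(examples):
    """Induction: generalize specific numbers into a rule (an interval)."""
    return (min(examples), max(examples))

def deduce(rule):
    """Deduction: generate specifics the rule covers, plus one value just
    beyond each edge so the system probes past what it has seen."""
    lo, hi = rule
    return [lo - 1, lo, hi, hi + 1]

examples = [3, 5, 4]
for _ in range(3):              # a few turns of the cycle
    rule = induce(examples)     # first pass: (3, 5)
    examples = deduce(rule)     # first pass: [2, 3, 5, 6]
print(rule)  # (1, 7) -- the rule has widened each cycle
```

The toy "improves" only by widening an interval, but it shows the loop structure: each phase produces exactly the input the other phase needs, so the cycle never starves.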
----------
(p. 318)
In any case, one aftermath of the controversy with teleologists was that many scientists in
other realms became so afraid of making similar mistakes that the very concept of purpose
became taboo throughout science. Even today, most scientists regard it as an abomination to
use "anthropomorphic" or "intentional" language in connection with anything but persons or
higher animals. This burdened the science of psychology with a double-barreled handicap. On
one side, it made psychologists regard many of their most important problems as outside the
scope of scientific explanation. On the other side, it deprived them of many useful technical
ideas--because such concept words as "want," "expect," and "recognize" are among the most
effective ever formed for describing what happens in human minds. It was not until the
"cybernetic revolution" of the 1940s that scientists finally realized there is nothing inherently
unscientific about the concept of goal itself and that attributing goals to evolution was bad
not because it was impossible, but simply because it was wrong.
Human minds do indeed use goal-machinery, and there is nothing wrong with recognizing
this and bringing technical theories about intentions and goals into psychology.

Minsky, Marvin. 1986. The Society of Mind. New York: Simon and Schuster.
(p. 314)
As the 1970s drew to a close,
knowledge representation was
perhaps the most hotly debated topic in artificial intelligence. At
the 1977 International Joint Artificial Intelligence Conference, a
panel with representatives from the entire spectrum of opinions,
ranging from the most formal to the most contingent, drew shouts
and cheers from the nearly one thousand scientists present, acting
(p. 315)
as if they were watching a football game. As I sat among them,
amused by the noise, I thought how much at odds with the stereotype
of the cool, disinterested scientist this demonstration was. More
important, what a marvelous and accommodating structure science
has, for sooner or later the issue would be resolved on the basis
of the best choice--
maybe a mode of knowledge representation
which hadn't even yet been dreamed up--and the partisanship
would disappear, or more accurately, find its expression in the next
big issue.
McCorduck, Pamela. 2004.
Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, Massachusetts: A K Peters.