particularly with respect to computer learning and actually "discovering" new things...things that it had never been taught nor shown by humans.
I already know about computer successes in go, chess, theorem proving, and more. However, don't forget a key term I used repeatedly in my earlier posts: "real-world data." Games and mathematics are not based on "real-world data" but rather on discretized, idealized worlds, usually mimicked with matrices. Yes, via sheer brute-force computing power, computers do find patterns in those domains, and even in some real-world domains, that humans haven't found, but that doesn't do the computers any good outside of those domains. For example, suppose a machine learning system learned from chess that it is wise to take up positions in the center of the board, and you then placed that program in a real-world situation where it was fighting people in sword fights. It would fail to use its chess knowledge to position itself in the center of the room for an advantage, and in fact would probably not even recognize where its human opponent was, so it would be incapacitated before it could even start. In short, computers currently inhabit a world that is alien and unreal to humans (and other animals), and vice versa. Since humans would naturally define "intelligence" as intelligence in the human world (i.e., in the "real world"), machines must either be able to function in the world of humans or not be regarded as intelligent, even by definition.
----------
Preface
Thomas V. Papathomas
pp. ix-xi
(p. ix)
"What computer program do you think is more
difficult to design: One that analyzes images and recog-
nizes faces and objects, or one that can play chess at a
world-class level?"
There is a strong preference to answer
that the chess program is a lot harder to design, and some
people are astounded to hear that today's computer
programs play chess at a formidable mid- to strong-
grandmaster level, whereas our progress with machines
that perform generic vision tasks has been relatively slow.Early workers in artificial intelligence were overoptimistic
for progress in vision. One anecdotal story is that a stu-
dent was assigned to "solve vision" as a summer project
decades ago! It is perhaps because people take vi-
sion for granted that some simple concepts and discov-
eries had to wait until recently. Thus it took as
late as the seventeenth century for the blind spot to be
discovered, whereas the stereoscope's invention had to
wait until the nineteenth century.
Thomas V. Papathomas, ed. 1995.
Early Vision and Beyond. Cambridge, Massachusetts: The MIT Press.
(p. 83)
The goal of the SOAR project is to provide an architecture capable of general intelligence. There's no claim by its designers that it yet does so. They mention several necessary aspects of general intelligence that are missing:

1. SOAR has no deliberate planning facility; it's always on-line, reacting to its current situation. It can't consider the long-term consequences of an action without taking that action.

2. SOAR has no automatic task acquisition. You have to hand-code the task you want to give it. It does not create any new representations of its own. And its designers would like it to. For Newell and company, not creating representations leaves an important gap in any architecture for general intelligence.

3. Though SOAR is capable of learning, several important learning techniques--such as analysis, instruction, and examples--are not yet incorporated into it.

4. SOAR's single learning mechanism is monotonic. That is, once learned, never unlearned; it can't recover from learning errors.

5. Finally, a generally intelligent agent should be able to interact with the real world in real time, that is, in the time required to achieve its goal or to prevent some dire consequence. SOAR can't yet do this, but Robo-SOAR is on the way (Laird and Rosenbloom 1990; Laird et al. in press).
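Point 4 above can be made concrete with a toy sketch. This is a hypothetical illustration, not SOAR's actual chunking mechanism: a monotonic learner only ever adds rules ("chunks") to its store, so a rule learned from one misleading episode can never be retracted, even when better information arrives later.

```python
# Toy sketch of a monotonic rule store (hypothetical; not SOAR code).
# Monotonic learning: rules are only ever added, never removed or revised,
# so an error learned early persists forever.

class MonotonicChunkStore:
    def __init__(self):
        self._chunks = {}  # condition -> action

    def learn(self, condition, action):
        # setdefault only adds when the condition is new:
        # an existing chunk is never overwritten (no unlearning).
        self._chunks.setdefault(condition, action)

    def act(self, condition):
        return self._chunks.get(condition)

store = MonotonicChunkStore()
store.learn("opponent-ahead", "retreat")   # learned from one misleading episode
store.learn("opponent-ahead", "advance")   # later, correct knowledge: silently ignored
print(store.act("opponent-ahead"))         # prints "retreat" -- the error cannot be unlearned
```

A non-monotonic learner would need some extra machinery here, such as overwriting the entry or tracking confidence per rule, which is exactly what Franklin notes SOAR's single mechanism lacks.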
Franklin, Stan. 1995.
Artificial Minds. Cambridge, Massachusetts: The MIT Press.