Search-Space Optimization


AndyGoode

Re: Search-Space Optimization
« Reply #15 on: May 26, 2019, 03:48:21 am »
I think you only need a moderate amount of good quality knowledge/memory to deduce most things.

I agree with this, and so does Jeff Hawkins. Most of what the cortex does is pattern matching (especially associative) and recall, not massive parallel processing that tackles each new problem from scratch. I find myself quoting this book in answer to so many questions on this forum that I think people here should read it as good foundation knowledge:

(pp. 67-68)
So how can a brain perform difficult tasks in one hundred steps that the largest parallel computer imaginable can't solve in a million or a billion steps? The answer is the brain doesn't "compute" the answers to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough to do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn't a computer at all.

(p. 68)
Let me show, through an example, the difference between computing a solution to a problem and using memory to solve the same problem. Consider the task of catching a ball. Someone throws a ball to you, you see it traveling toward you, and in less than a second you snatch it out of the air. This doesn't seem too difficult--until you try to program a robot arm to do the same. As many a graduate student has found out the hard way, it seems nearly impossible. When engineers or computer scientists tackle this problem, they first try to calculate the flight of the ball to determine where it will be when it reaches the arm. This calculation requires solving a set of equations of the type you learn in high school physics. Next, all the joints of a robotic arm have to be adjusted in concert to move the hand into the proper position. This involves solving another set of mathematical equations more difficult than the first. Finally, this whole operation has to be repeated multiple times, for as the ball approaches, the robot gets better information about the ball's location and trajectory. If the robot waits to start moving until it knows exactly where the ball will arrive it will be too late to catch it. It has to start moving to catch the ball when it has only a poor sense of location and it continually adjusts as the ball gets closer. A computer requires millions of steps to solve the numerous mathematical equations to catch the ball. And although a computer might be programmed to successfully solve this problem, the one-hundred-step rule tells us that a brain solves it in a different way. It uses memory.

(p. 69)
How do you catch the ball using memory? Your brain has a stored memory of the muscle commands required to catch a ball (along with many other learned behaviors). When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball's actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons.

You might be thinking, "Wait a minute. Each catch is slightly different. You just said the recalled memory gets continually adjusted to accommodate the variations of where the ball is on any particular throw . . . Doesn't that require solving the same equations we were trying to avoid?" It may seem so, but nature solved the problem of variation in a different and very clever way. As we'll see later in this chapter, the cortex creates what are called invariant representations, which handle variations in the world automatically. A helpful analogy might be to imagine what happens when you sit down on a water bed: the pillows and any other people on the bed are all spontaneously pushed into a new configuration. The bed doesn't compute how high each object should be elevated; the physical properties of the water and the mattress's plastic skin take care of the adjustment automatically. As we'll see in the next chapter, the design of the six-layered cortex does something similar, loosely speaking, with the information that flows through it.

Hawkins, Jeff. 2004. On Intelligence. New York: Times Books.
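Hawkins's contrast can be caricatured in a few lines of code. This is only my own illustrative sketch, not his model: `catch_by_computing` solves the physics directly, while `catch_by_memory` retrieves the stored motor sequence nearest to the current situation; the practiced throws and muscle-command names are made up for the example.

```python
import math

def catch_by_computing(v0, angle_deg, g=9.81):
    """Solve the projectile equations to predict where the ball lands."""
    angle = math.radians(angle_deg)
    # High-school physics range formula: R = v0^2 * sin(2*theta) / g
    return v0 ** 2 * math.sin(2 * angle) / g

# A memory-based catcher instead stores (situation -> muscle sequence)
# pairs learned from practice, and retrieves the nearest one.
PRACTICED_CATCHES = {
    (10, 30): ["step_forward", "raise_arm", "close_hand"],
    (10, 60): ["hold_position", "raise_arm_high", "close_hand"],
    (15, 45): ["step_back", "extend_arm", "close_hand"],
}

def catch_by_memory(v0, angle_deg):
    """Recall the stored motor sequence for the most similar past throw."""
    key = min(PRACTICED_CATCHES,
              key=lambda k: (k[0] - v0) ** 2 + (k[1] - angle_deg) ** 2)
    return PRACTICED_CATCHES[key]
```

The second function never solves an equation: a new throw such as `catch_by_memory(11, 50)` just recalls the sequence practiced for the closest known throw, which is the "retrieved and adjusted" flavor of the passage above.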

*

LOCKSUIT

Re: Search-Space Optimization
« Reply #16 on: May 26, 2019, 04:32:00 am »
I always said RNNs would fade out. Recurrence is not right: yes, the data is a sequence, but processing doesn't have to loop over it like that. Transformers were born, and they are highly parallel. You can view the slides below if you don't believe me:
https://www.slideshare.net/xavigiro/attention-is-all-you-need-upc-reading-group-2018-by-santi-pascual
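The parallelism point can be shown with a toy sketch (my own illustration, with made-up toy update rules, not taken from the slides): an RNN's state update depends on the previous state, forcing sequential computation, while each self-attention output depends only on the whole input set and so every position can be computed independently.

```python
import math

def rnn_states(xs):
    """Recurrent processing: each state depends on the previous one,
    so the T steps must run one after another."""
    h, states = 0.0, []
    for x in xs:                       # inherently sequential loop
        h = 0.5 * h + x                # toy cell: new state needs old state
        states.append(h)
    return states

def attention_outputs(xs):
    """Self-attention (1-D toy): every output is a softmax-weighted
    average over ALL inputs, so each position is independent of the
    others and could be computed in parallel."""
    outs = []
    for q in xs:                       # each iteration is independent
        scores = [q * k for k in xs]   # dot-product scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]      # softmax
        outs.append(sum(w * v for w, v in zip(weights, xs)))
    return outs
```

In `rnn_states` the loop order matters; in `attention_outputs` the outer loop could be replaced by parallel workers with identical results, which is what lets transformers use the hardware.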

*

goaty

Re: Search-Space Optimization
« Reply #17 on: May 26, 2019, 04:43:51 am »
Saying that a memory system is all you need works until you have to do something for the first time; then you've got nothing to remember. :)

*

HS

Re: Search-Space Optimization
« Reply #18 on: May 26, 2019, 05:28:28 am »
But a few seconds later you do!  ;D

*

LOCKSUIT

Re: Search-Space Optimization
« Reply #19 on: May 26, 2019, 05:37:30 am »
Don't worry, don't worry, as soon as I understand GPT-2, I'm gonna combine it with my AGI brain design, and it will be able to recall etc.

*

goaty

Re: Search-Space Optimization
« Reply #20 on: May 26, 2019, 05:44:03 am »
But a few seconds later you do!  ;D

Like after it's walked off a cliff? :)

*

HS

Re: Search-Space Optimization
« Reply #21 on: May 26, 2019, 05:54:08 am »
If it's a very tall cliff, lol.

 

