A more realistic measure of "strength" of artificial intelligence

  • 23 Replies
  • 2818 Views
*

goaty

  • Trusty Member
  • ********
  • Replicant
  • *
  • 552
So the Turing Test, people don't like so much these days, because of developments or whatever. My personal reason is that just because you can tell your robot isn't a human doesn't mean it's not a success!

So I bring forth the metric of "Independence": the more independent the robot is of outside help in getting the job done, the stronger the AI. That's a more realistic way to judge it.

So say I wanted a robot to get from A to B.

Judgement of extra independence would be:

* does it move its legs and arms itself?
* can it solve rat mazes on the way?
* can it open doors, and solve simple puzzles?

The more it does itself, the more useful it is, but the horror story that if you gave your bot too much independence it might turn on you is ever present for the robotician in charge. :)
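One way to make the "Independence" metric concrete is to score a run by how much of it the robot did without operator help. This is a made-up scoring scheme, purely illustrative of the idea, not anything from the thread:

```python
# Hypothetical "independence score": each subtask gets a difficulty
# weight, and the score is the weighted fraction the robot finished
# without any outside help.

def independence_score(results):
    """results: list of (difficulty_weight, done_without_help) pairs."""
    total = sum(w for w, _ in results)
    solo = sum(w for w, ok in results if ok)
    return solo / total if total else 0.0

# An A-to-B run: moved its own limbs (easy), solved the maze (medium),
# but needed help with the door (hard).
score = independence_score([(1, True), (2, True), (3, False)])
```

Under this sketch the robot above scores 0.5: it earned the easy and medium points on its own but lost the hard ones.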



*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: A more realistic measure of "strength" of artificial intelligence
« Reply #1 on: July 31, 2019, 12:26:56 am »
To measure intelligence, we would see the things we want done get done. We all want stuff. But describing the plans is similar to acting them out (assistant = robot), and an assistant goes a step farther than any physical step you can take, because imagination is more powerful.

Of course, if getting from location to location involves hidden mazes/puzzles, you can't talk about them unless you know a lot more info, if you expect an assistant to explain how to get to location B.

While your thread title says "more realistic" methodology, it is a more primitive methodology!
Emergent          https://openai.com/blog/

*

goaty

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #2 on: July 31, 2019, 01:05:50 am »
I like the "describing a plan is similar to acting it out".

If you see a robot (or even a person!) successfully handle a task, the plan for it had to have been in its head.

And of course it's primitive! It's unrealistic if the AI isn't primitive, because then it's people on pipe dreams instead of getting with the reality of the situation!

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #3 on: July 31, 2019, 01:17:43 am »
Quote
If you see a robot (or even a person!) successfully handle a task, the plan for it had to have been in its head.
There are plans that come not from planning. Walking can be learnt with basically no 'thinking'/'planning'.
I'd like to see a crawler learn to build and fly a rocket! (not gonna happen)


Quote
And of course it's primitive! It's unrealistic if the AI isn't primitive, because then it's people on pipe dreams instead of getting with the reality of the situation!
The no-think, just-try approach is "primitive learning".

You say just dreaming is bad behavior. True. But it is a powerful tool. Even on its own it can still inform humans, so that they can then act it out in real life.

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #4 on: July 31, 2019, 01:25:09 am »
And the neocortex is the newer part of the brain; it dreams simulations.

The Turing Test should be think before act.

You can still see all the world, just as memories, very similar to real life!!

*

AndyGoode

  • Guest
Re: A more realistic measure of "strength" of artificial intelligence
« Reply #5 on: July 31, 2019, 01:25:49 am »
The Turing test was always ridiculous. I regard anybody still talking about Turing tests as a newbie who hasn't thought much about anything in the field of AI yet.

I strongly advocate giving computers the same IQ tests that humans are given. There have been only about five attempts to do this so far (the reason I know is that I wrote a proposal for such a project around 2015; it was rejected, but at least it forced me to do background research on all such attempts mentioned online). One such attempt is described at the following link, though it didn't involve pictorial questions as the other tests did...

https://observer.com/2015/06/artificially-intelligent-computer-outperforms-humans-on-iq-test/

I have other ideas for more advanced intelligence tests for computers, but until we can get a computer to handle static, idealized, 2D images in black and white, there is little chance it's going to be able to understand the full complexity of the multicolored, fuzzy, 3D, dynamic world. One team's attempt to give IQ tests to computers used only Raven's progressive matrices, which is a specific type of IQ test question...

https://iqtestprep.com/ravens-progressive-matrices/

That's a good place to start, I believe.
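A Raven's-style question asks the solver to infer the rule governing a 3x3 matrix and pick the entry that completes it. Here is a deliberately miniature sketch of that setup, with cells reduced to counts and the rule reduced to a constant increment; the function and puzzle are invented for illustration, not any team's actual test harness:

```python
# Toy Raven's-matrix solver: cells are shape counts, and we assume the
# rule is a constant left-to-right increment within each row.

def solve_matrix(grid, candidates):
    """grid is 3x3 with the bottom-right cell unknown (None)."""
    step = grid[0][1] - grid[0][0]   # infer the increment from the top row
    answer = grid[2][1] + step       # apply it to complete the bottom row
    return answer if answer in candidates else None

# Rows count up by one; the missing cell should be 5.
choice = solve_matrix([[1, 2, 3],
                       [2, 3, 4],
                       [3, 4, None]],
                      candidates=[2, 4, 5, 6])  # -> 5
```

Real Raven's items are pictorial and mix several rules at once, which is exactly why they make a harder target than this numeric toy.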

----------

(p. 78)
The Turing Test Cannot Prove Artificial Intelligence

Mark Halpern

Mark Halpern is a computer software expert who has worked at IBM. In the following viewpoint, he argues that the Turing Test is fundamentally flawed. As evidence, he points to Turing Tests conducted in 1991. During the Turing Tests, Halpern says, the judging was flagrantly inadequate; computers that were generating random nonsense were judged human, while some humans were judged to be computers. Halpern concludes that even if a computer were to pass the Turing Test, it would not show that that computer had achieved artificial intelligence.

(p. 79)
Perhaps the absurdity of trying to make computers that can "think" is best demonstrated by reviewing a series of attempts to do just that--by aiming explicitly to pass Turing's test. In 1991, a New Jersey businessman named Hugh Loebner founded and subsidized an annual competition, the Loebner Prize Competition in Artificial Intelligence, to identify and reward the computer program that best approximates artificial intelligence [AI] as Turing defined it. The first few Competitions were held in Boston under the auspices of the Cambridge Center for Behavioral Studies; since then they have been held in a variety of academic and semi-academic locations. But only the first, held in 1991, was well documented and widely reported on in the press, making that inaugural event our best case study.

Practical Problems

The officials presiding over the competition had to settle a number of details ignored in Turing's paper, such as how often the judges must guess that a computer is human before we accept their results as significant, and how long a judge may interact with a hidden entity before he has to decide. For the original competition, the host center settled such questions with arbitrary decisions--including the number of judges, the method of selecting them, and the instructions they were given.

Beyond these practical concerns, there are deeper questions about how to interpret the range of possible outcomes: What conclusions are we justified in reaching if the judges are generally successful in identifying humans as humans and
(p. 80)
computers as computers? Is there some point at which we may conclude that Turing was wrong, or do we simply keep trying until the results support his thesis? And what if judges mistake humans for computers--the very opposite of what Turing expected? (This last possibility is not merely hypothetical; three competition judges made this mistake, as discussed below.)

Berlatsky, Noah, ed. 2011. Artificial Intelligence. Farmington Hills, MI: Greenhaven Press.

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #6 on: July 31, 2019, 01:31:09 am »
By Turing Test, I meant something similar, but not their test 8)

I agree. I'm hoping for a machine to dream up answers with no body. I bet on it. I figure if I can do it, it can. I do a lot without moving...

*

goaty

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #7 on: July 31, 2019, 01:32:25 am »
Yes, IQ tests are solvable!
If the robot has the relations in its head, it can solve for those relations.
Implanting them by hand is the easier way, but the robot developing them itself is the bigger mystery.


And Locky, you can't remember when the plan/relations were forming in your head, planning out your first struggles to control your body weight and muscles, to get yourself elevated and moving along the floor.

Also, A->B can be anything; it could even be welding sheet metal to make a rocket's fuel tank. Doing it "crawler" style can do anything, it just needs an accurate model to search in.
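The "accurate model to search in" point can be sketched with plain graph search: given a model of which states lead to which, blind breadth-first search finds a route from A to B with no hand-written plan. The state graph below is entirely made up for illustration:

```python
from collections import deque

# Toy model of the world: each state maps to the states reachable from
# it. An accurate model is all the "crawler" needs to search A -> B.
MODEL = {
    "A": ["hallway"],
    "hallway": ["A", "door"],
    "door": ["hallway", "B"],   # the "simple puzzle": open the door
    "B": [],
}

def search(model, start, goal):
    """Breadth-first search; returns the shortest path or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        for nxt in model[path[-1]]:
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None

route = search(MODEL, "A", "B")  # -> ["A", "hallway", "door", "B"]
```

The catch, of course, is the clause "it just needs an accurate model": for welding a fuel tank, building that model is the hard part, not the search.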

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #8 on: July 31, 2019, 01:47:04 am »
If you mean visualizing how to walk as a baby, then that is simulating. Yes, you can think up any plan from A to B.

But what I meant was primitive RL. The 'answers' it learns come from Test>Update. My suggestion was Think>Update. With the primitive approach, say you get a cue when you touch the floor or a wall; you do the best actions next in sequence, and it updates the weights when it crawls faster. Maybe it will learn to snatch food from an animal's mouth. But it has to try everything for real, a very slow and weak process.
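The Test>Update loop described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual crawler: the "physical trial" is a toy function, and the agent simply perturbs its action weights, tries them, and keeps the change only when it crawls faster.

```python
import random

def crawl_speed(weights):
    # Toy stand-in for a physical trial: speed peaks when the two
    # "joint" weights coordinate (both near 1.0).
    return -((weights[0] - 1.0) ** 2 + (weights[1] - 1.0) ** 2)

def test_update(steps=2000, seed=0):
    """Primitive RL: no model, no planning. Try a variation for real,
    keep it only if the measured crawl speed improves."""
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    best = crawl_speed(weights)
    for _ in range(steps):
        trial = [w + rng.uniform(-0.1, 0.1) for w in weights]  # Test
        speed = crawl_speed(trial)
        if speed > best:                                       # Update on improvement only
            weights, best = trial, speed
    return weights

w = test_update()
```

Note the cost being described: every one of those 2000 trials is a real physical attempt, which is exactly why this is "a very slow and weak process" compared with Think>Update, where the trials happen in memory.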

*

goaty

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #9 on: July 31, 2019, 01:55:08 am »
But Lock, you're forgetting about the simulation. It happens in a virtual model, not the actual model... :P

"Strong" AI. What if it's doing something primitive better than the more "advanced" one? It depends on how sweet the whole thing is running, too, I guess...

The most primitive thing is hand-to-hand combat. That would put a spin on "STRONG" AI, wouldn't it, and it's not solving IQ tests, is it...

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #10 on: July 31, 2019, 02:03:57 am »
Ah, haha. Real-world TEST and sim-world TEST both use physics. But THINK has no physics, only memories. Well, simulated bot teaching is faster and more powerful, but still very much weaker. Imagine the plans "to build a motor I must blahblah, then I can use it to do blahblah, etc." being realized even in a computer-evolution 'test grounds' environment. They'll never learn to build it.

*

goaty

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #11 on: July 31, 2019, 02:08:12 am »
Well, what they can't learn you can put in by hand, and still get the job done anyway.

Getting a robot to develop a recipe on its own is troubled by the fact that you can't make it *want it*, which makes it hard to think about.

*

LOCKSUIT

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #12 on: July 31, 2019, 02:11:05 am »
I'm still set on the Knowledge Generator :)
I want it to "talk" about plans and show me what's on its mind.

I suppose motor cortex is knowledge too, with the only difference being how it's learnt.
I don't believe I could learn what I have by running around my city with no mental thought.

*

goaty

Re: A more realistic measure of "strength" of artificial intelligence
« Reply #13 on: July 31, 2019, 05:23:01 am »
It's just no different to me, Lock: walky same as talky same as thinky. It's all based upon assignment sets. Even though when it comes to operating in 3-space it's hard to think of what the symbols would be... but over? under? grab? It's all symbols, even motor.

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
Re: A more realistic measure of "strength" of artificial intelligence
« Reply #14 on: July 31, 2019, 05:47:27 am »
I think the test for the lack of strong AI is the necessity of developing a test in order to test it. If it's truly good enough, then you wouldn't need to think about your interactions.

 

