Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: goaty on July 31, 2019, 12:14:04 am

Title: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 12:14:04 am
So the Turing Test, people don't like it so much these days, because of developments or whatever. My personal reason is that just because you can tell your robot isn't a human doesn't mean it's not a success!

So I bring forth the metric of "Independence": the more the robot is independent of help around it to get the job done, the more realistic the way to judge the strength of an a.i.

So say I wanted a robot to get from a to b.

Judgement of extra independence would be ->

* Does it move its legs and arms itself?
* Can it solve rat mazes on the way?
* Can it open doors and solve simple puzzles?

The more it does itself, the more useful it is, but the horror story of "if you gave your bot too much independence it might turn on you" is always present for the roboticist in charge. :)
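
For example, a crude way to score it might look something like this (the checklist, weights and intervention penalty below are just invented for illustration, not any standard benchmark):

Code:
# Rough sketch of an "independence" score: count how much of a task the
# bot completes without outside help. Checklist and weights are made up.

CHECKS = {
    "moves_own_limbs":  1.0,
    "solves_rat_maze":  2.0,
    "opens_doors":      2.0,
    "solves_puzzles":   3.0,
}

def independence_score(results, human_interventions):
    """results: dict of check name -> bool; human_interventions: int."""
    earned = sum(w for name, w in CHECKS.items() if results.get(name))
    total = sum(CHECKS.values())
    # every time a human has to step in, knock the score down
    penalty = 0.5 * human_interventions
    return max(0.0, earned - penalty) / total

print(independence_score(
    {"moves_own_limbs": True, "solves_rat_maze": True},
    human_interventions=1))
# -> (1 + 2 - 0.5) / 8 = 0.3125

The more boxes it ticks without anyone stepping in, the higher the score.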

.

Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 12:26:56 am
To measure intelligence, we would see whether the things we want done get done. We all want stuff. But describing the plans is similar to acting them out (assistant = robot), though an assistant goes a step farther than any physical step you can take, because imagination is more powerful.

Of course, if getting from location to location involves hidden mazes/puzzles, you can't talk about them unless you know a lot more info, if you expect an assistant to explain how to get to location B.

While your thread title says a "more realistic" methodology, it is a more primitive methodology!
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 01:05:50 am
I like "describing a plan is similar to acting it out".

If you see a robot (or even a person!) successfully handle a task, the plan for it had to have been in its head.

And of course it's primitive! It's unrealistic if the a.i. isn't primitive, because that's people chasing pipe dreams instead of getting with the reality of the situation!
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 01:17:43 am
Quote
If you see a robot (or even a person!) successfully handle a task, the plan for it had to have been in its head.
There are plans that don't come from planning. Walking can be learnt with basically no 'thinking'/'planning'.
I'd like to see a crawler learn to build and fly a rocket! (not gonna happen)


Quote
And of course it's primitive! It's unrealistic if the a.i. isn't primitive, because that's people chasing pipe dreams instead of getting with the reality of the situation!
The no-think, just-try approach is "primitive learning".

You say just dreaming is bad behavior. True. But it is a powerful tool. Even on its own it can still inform humans so that they can then act it out in real life.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 01:25:09 am
And the neocortex is the newer part of the brain; it dreams simulations.

The Turing test should be think before act.

You can still see all the world, it's just memories, very similar to real life!!
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: AndyGoode on July 31, 2019, 01:25:49 am
The Turing test was always ridiculous. I regard anybody still talking about Turing tests as a newbie who hasn't thought much about anything in the field of AI yet.

I strongly advocate giving computers the same IQ tests that humans are given. There have been only about five attempts to do this so far (the reason I know is that I wrote a proposal for such a project around 2015; it was rejected, but at least it forced me to do background research on all such attempts mentioned online). One such attempt is described at the following link, though it didn't involve pictorial questions as the other tests did...

https://observer.com/2015/06/artificially-intelligent-computer-outperforms-humans-on-iq-test/

I have other ideas for intelligence tests for computers that are more advanced, but until we can get a computer to handle static, idealized, 2D black-and-white images, there is little chance it's going to be able to understand the full complexity of the multicolored, fuzzy, 3D, dynamic world. One team's attempt to give IQ tests to computers used only Raven's progressive matrices, a specific type of IQ test question...

https://iqtestprep.com/ravens-progressive-matrices/

That's a good place to start, I believe.
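
To give a flavour of the structure in those matrix items, here is a toy stand-in (not an actual Raven's item; the shapes and the rule are made up). A naive solver just finds which candidate answer satisfies the rule that the complete rows obey:

Code:
# Toy Raven's-style item: each cell is a set of shape attributes, and the
# rule is "the third cell in a row is the symmetric difference of the
# first two". The shapes, rule, and candidates are invented for illustration.

row1 = [{"circle"}, {"square"}, {"circle", "square"}]
row2 = [{"dot"}, {"dot", "bar"}, {"bar"}]
row3_partial = [{"cross"}, {"cross", "ring"}]          # missing third cell

candidates = [{"cross"}, {"ring"}, {"cross", "ring"}, set()]

def fits_rule(a, b, c):
    return (a ^ b) == c            # symmetric-difference rule

# confirm the rule on the complete rows, then pick the candidate that fits
assert fits_rule(*row1) and fits_rule(*row2)
answer = next(c for c in candidates if fits_rule(*row3_partial, c))
print(answer)                      # -> {'ring'}

Real items present this as pictures, which is exactly where the 2D image-handling problem comes in.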

----------

The Turing Test Cannot Prove Artificial Intelligence

Mark Halpern

Mark Halpern is a computer software expert who has worked at IBM. In the following viewpoint, he argues that the Turing Test is fundamentally flawed. As evidence, he points to Turing Tests conducted in 1991. During the Turing Tests, Halpern says, the judging was flagrantly inadequate; computers that were generating random nonsense were judged human, while some humans were judged to be computers. Halpern concludes that even if a computer were to pass the Turing Test, it would not show that that computer had achieved artificial intelligence.

Perhaps the absurdity of trying to make computers that can "think" is best demonstrated by reviewing a series of attempts to do just that--by aiming explicitly to pass Turing's test. In 1991, a New Jersey businessman named Hugh Loebner founded and subsidized an annual competition, the Loebner Prize Competition in Artificial Intelligence, to identify and reward the computer program that best approximates artificial intelligence [AI] as Turing defined it. The first few Competitions were held in Boston under the auspices of the Cambridge Center for Behavioral Studies; since then they have been held in a variety of academic and semi-academic locations. But only the first, held in 1991, was well documented and widely reported on in the press, making that inaugural event our best case study.

Practical Problems

The officials presiding over the competition had to settle a number of details ignored in Turing's paper, such as how often the judges must guess that a computer is human before we accept their results as significant, and how long a judge may interact with a hidden entity before he has to decide. For the original competition, the host center settled such questions with arbitrary decisions--including the number of judges, the method of selecting them, and the instructions they were given.

Beyond these practical concerns, there are deeper questions about how to interpret the range of possible outcomes: What conclusions are we justified in reaching if the judges are generally successful in identifying humans as humans and computers as computers? Is there some point at which we may conclude that Turing was wrong, or do we simply keep trying until the results support his thesis? And what if judges mistake humans for computers--the very opposite of what Turing expected? (This last possibility is not merely hypothetical; three competition judges made this mistake, as discussed below.)

Berlatsky, Noah, ed. 2011. Artificial Intelligence. Farmington Hills, MI: Greenhaven Press, pp. 78-80.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 01:31:09 am
By Turing Test, I meant something similar but not their test 8)

I agree, I'm hoping for a machine to dream answers with no body. I bet on it. I figure if I can do it, it can. I do a lot without moving...
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 01:32:25 am
Yes, IQ tests are solvable!
If the robot has the relations in its head, it can solve for those relations.
Implanting them by hand is the easier way, but the robot developing them itself is the bigger mystery.


And Locky, you can't remember when the plan/relations were forming in your head as you planned out your first struggles to control your body weight and muscles, to get yourself elevated and moving along the floor.

Also, A->B can be anything; it could even be welding sheet metal to make a rocket's fuel tank. Doing it "crawler" style can do anything, it just needs an accurate model to search in.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 01:47:04 am
If you mean visualizing how to walk as a baby, then that is simulating. Yes, you can think up any plan of A>B.

But what I meant was primitive RL. The 'answers' it learns come from Test>Update. My suggestion was Think>Update. With the primitive kind, say you get a cue when you touch the floor or a wall: you do the best actions next in sequence, and it updates the weights when it crawls faster. Maybe it will learn to snatch food from an animal's mouth. But it has to try it for real, a very slow and weak process.
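
Roughly, that Test>Update loop looks something like this (a minimal sketch; the cue, the action set and the crawl_speed reward are all invented for illustration):

Code:
import random

# Minimal Test>Update sketch: the agent tries actions for real and only
# updates after measuring the outcome. Actions and reward are made up.

ACTIONS = ["push_left", "push_right", "drag_both"]
value = {a: 0.0 for a in ACTIONS}   # estimated crawl speed per action
counts = {a: 0 for a in ACTIONS}

def crawl_speed(action):
    """Stand-in for a real physical trial; returns a noisy speed."""
    base = {"push_left": 0.3, "push_right": 0.35, "drag_both": 0.6}[action]
    return base + random.gauss(0, 0.05)

for trial in range(200):
    # mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])

    reward = crawl_speed(action)          # the slow "for real" step
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean

print(max(ACTIONS, key=lambda a: value[a]))  # best crawl found by trial alone

That's the whole trick: no physics model, no imagination, just try-measure-update, which is why it's so slow and weak.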
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 01:55:08 am
But Lock, you're forgetting about the simulation. It happens in a virtual model, not the actual one... :P

"Strong" AI.     what if its doing something primitive better than the more "advanced" one,  it depends on how sweet the whole thing is running too I guess....
.
The most primitive thing, is hand to hand combat,  that would put a spin on "STRONG" ai, wouldn't it, and its not solving IQ tests is it...
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 02:03:57 am
Ah, haha. Real-world TEST and sim-world TEST both use physics. But THINK has no physics, only memories. Well, simulated bot teaching is faster and more powerful than real-world trials, but still very much weaker than thinking. Imagine the plans "to build a motor I must blahblah. Then, I can use it to do blahblah. Etc." being realized in a computer-evolution 'test grounds' environment. They'll never learn to build it.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 02:08:12 am
Well, what they can't learn, you can put in by hand and still get the job done anyway.

Getting a robot to develop a recipe on its own is troubled by the fact that you can't make it *want* it, so it makes it hard to think about.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: LOCKSUIT on July 31, 2019, 02:11:05 am
I'm still set on the Knowledge Generator :)
I want it to "talk" about plans and show me what's on its mind.

I suppose motor cortex is knowledge though, with the only difference being how it's learnt.
I don't believe I could learn what I have by running around my city with no mental thought.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 05:23:01 am
It's just no different to me, Lock: walky same as talky same as thinky. It's all based upon assignment sets - even though when it comes to operating in 3-space it's hard to think of what symbols it would be. But over? Under? Grab? It's all symbols, even motor.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: HS on July 31, 2019, 05:47:27 am
I think the test for the lack of strong AI is the necessity of developing a test in order to test it. If it's truly good enough, then you wouldn't need to think about your interactions.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 06:24:36 am
 :D
.
How's your creecha coming along? Have you got the actuator down pat that you're going to use for your legs?
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: HS on July 31, 2019, 06:59:20 am
Dude! I thought summer was going to include more free time. hehe
Nope...
So not much progress. I got a nice solar panel for the top of it. :) Let's see... what else... I got a nice box with compartments for batteries, motors, and neurons, and got some motors, but I hope my neural net can figure out how to run brushless (cause I'd have to buy a bigger box for all the paraphernalia required for three-phase). At this rate it's gonna end up looking (and probably acting) like the Luggage from Discworld...

(https://i.ibb.co/x73Wrsd/My-Post79.jpg) (https://ibb.co/JsCVSdg)


Following that train of thought on testing AI, you could know the strength of an AI by the constraints on the tests which provide useful feedback from said AI. But you'd have to compare different tests, the more the better. They will end up specialized to the various types of AI that people have to deal with, in order to extract the best data; not one size fits all. So, I guess we should sit back, let everyone try to figure it out, and then develop a test for testing tests! Then we'll have a true comparison on a single metric.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 07:11:58 am
Love the picture, there were plenty of little monsters in those novels, fun-fun.

I'll send a picture back over ->
(https://scontent.fper6-1.fna.fbcdn.net/v/t1.0-9/67410893_961982487483142_8366133179169374208_n.jpg?_nc_cat=107&_nc_oc=AQn-wRAYz1fOEWPgIJ942VJ79vSA4L5zsOnnPm-2Mw70hJUp7_ZeecRHW-FOu0hFaMU&_nc_ht=scontent.fper6-1.fna&oh=8407f1c47331f12db654cbd9ee81fbaf&oe=5DA717BD)

Did you do a bit of carpentry for this "compartmentalized" box?

I've got my system to 4 main compartments now: power, actuators, eye and brain. =)

Run a BIG solar panel. If it can lift it, then it'll be a stronger robot the bigger its panel is; otherwise you're playing catch-up with a sleeping cycle, and it ends up asleep more than it's awake, more often than not.
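
To put rough numbers on the catch-up problem (every figure here is a made-up assumption, not a measurement from either of our builds):

Code:
# Back-of-envelope duty-cycle check. All numbers are assumed for
# illustration, not measured from any real panel or robot.

panel_watts = 10.0        # assumed flexible 12 V panel output in full sun
sun_hours = 5.0           # assumed usable sun per day
harvest_wh = panel_watts * sun_hours          # energy gathered per day

awake_draw_watts = 6.0    # assumed draw with motors + brain running
awake_hours = harvest_wh / awake_draw_watts   # hours of activity that buys

print(f"~{awake_hours:.1f} h awake per day, asleep the other "
      f"{24 - awake_hours:.1f} h")   # ~8.3 h awake, ~15.7 h asleep

So with those guesses it's awake about a third of the day; double the panel and the awake fraction roughly doubles, as long as the motors can still haul the extra weight.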

Running brushless - maybe if you train a net on where it thinks its eye is supposed to be after so much electricity goes into its motors. If you managed that, it's more adaptive than absolute positioning. But I still like absolute positioning; I think I'm going to have it in mine, all my prior thought went into having it. But it would be weird if it's simple to adapt around it with a.i. instead, and it's just as good (and more adaptive to error in the system).

And keeping things simple - I KNOW it only takes a few metrics to create very lively and varied activity. ;D

.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: HS on July 31, 2019, 08:14:08 am
Lolz that diagram! It's like someone trying to calm down Lovecraft.

Quote
Did you do a bit of carpentry for this "compartmentalized" box?
I rummaged around in some old boxes and found the perfect thing. Don't think my attempt at carpentry would produce anything… liftable. I'll just attach a thin piece of plywood to the base and it should be strong enough.

Quote
Run a BIG solar panel. If it can lift it, then it'll be a stronger robot the bigger its panel is; otherwise you're playing catch-up with a sleeping cycle, and it ends up asleep more than it's awake, more often than not.
The solar panel is thin and flexible, about the size of a medium laptop monitor. 12 volts. Perfect to charge my 9-volt lithium batteries (7 in parallel). I don't see a way around a sleep cycle though, especially at night. I mean, it could hang out under streetlights, theoretically… But the police might try to break into it to find out what it was selling. :stoner: → :toast:

Quote
Running brushless - maybe if you train a net on where it thinks its eye is supposed to be after so much electricity goes into its motors. If you managed that, it's more adaptive than absolute positioning. But I still like absolute positioning; I think I'm going to have it in mine, all my prior thought went into having it. But it would be weird if it's simple to adapt around it with a.i. instead, and it's just as good (and more adaptive to error in the system).
Yeah, I like the brushless motor idea; it would be a useful and simple starter skill to work on with the neural net.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on July 31, 2019, 08:28:45 am
Selling items, a portable automatic middleman, sounds like a nifty idea.
If you could make it purchase its own supply, then you could just sit back and watch the money roll in without getting out of your armchair, but people would have to want to deal with it for your idea to work... and some people don't like robos…


My bots - I'm not planning on involving them with people, because I think it can actually get a bit negative; bots pretending to have a personality doesn't cut it for me, so I guess others might feel similar... They are just going to do jobs for me in secret, like hunt for rabbits, fish like bears for me, clean out my toilet and house, maybe even prepare the meat, and definitely tend to a garden and maybe some kind of revenue. But I probably won't need money, depends on how good I get it.

Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: yotamarker on August 05, 2019, 07:00:12 pm
cope, they need to surpass humans, so they must have human skills.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: goaty on August 05, 2019, 07:16:43 pm
Surpassing humans involves what? Using human skills? No. It has to have something that humans HAVEN'T GOT.

It's hard to think about what the metric of success is, so we don't even know WHAT EVEN IS MORE SUCCESSFUL to EVEN MAKE IT DO IT!!!

Inventing more technology is not it... we are already good enough at making cheesy crap ourselves.

If there is something that surpasses us, I can't think of it, and just because it can beat you in a fight and take over the world's armies and take the world hostage DOES NOT MEAN IT SURPASSES US. :)

.
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: Korrelan on August 05, 2019, 08:45:58 pm

Quote
It has to have something that humans HAVEN'T GOT.

Or more of it… take a moment to consider all the different species and brain structures in the animal kingdom that we know to exhibit our definition of intelligence.

Intelligence/sentience/consciousness are derived from a scalable law of nature… humans have the most… but by no means the maximum.

 :)
Title: Re: A more realistic measure of "strength" of artificial intelligence
Post by: HS on August 05, 2019, 09:10:51 pm
Quote
Intelligence/sentience/consciousness are derived from a scalable law of nature.
But even simple things get emergent properties as you zoom out. I think intelligence is more like a series of thresholds, like wave forms on a string. With enough activation energy you can snap into another level, and get a new reaction going.