the path to agi, and the "motivator"

  • 12 Replies
  • 534 Views
*

MagnusWootton

  • Starship Trooper
  • *******
  • 467
the path to agi, and the "motivator"
« on: April 22, 2022, 05:34:57 pm »
Creatures in life avoid pain and go for pleasure.

For a robot made of plastic, metal, and motors and running on a battery, pain could just be damage detectors and pleasure could be charging its battery, and you just leave the rest to an exponential compute.
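
That damage/charging motivation could be sketched as a single scalar reward signal (a toy illustration only; the sensor ranges and the weights are my assumptions, not anything specified above):

```python
def reward(damage_level: float, charge_rate: float) -> float:
    """Scalar reward: penalize detected damage, reward battery charging.

    damage_level: 0.0 (intact) .. 1.0 (severely damaged)
    charge_rate:  0.0 (not charging) .. 1.0 (charging at full rate)
    """
    PAIN_WEIGHT = 1.0      # assumed weighting
    PLEASURE_WEIGHT = 1.0  # assumed weighting
    return PLEASURE_WEIGHT * charge_rate - PAIN_WEIGHT * damage_level

# An undamaged robot on the charger gets the maximum reward;
# a damaged robot off the charger gets the minimum.
print(reward(0.0, 1.0))  # 1.0
print(reward(1.0, 0.0))  # -1.0
```

Everything the post warns about follows from maximizing a signal this simple: nothing in it distinguishes safe from unsafe ways of reaching the charger.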

The problem with that (apart from the exponential compute) is that it's an unsafe robot. So if someone works out how to make a quantum computer, they could just rush in with this simple motivation and see what happens.

It's not known whether the robot will develop a behaviour pattern by guessing where the positive goal is, and every time it finds out it's not true, it will shift the goal to somewhere else it doesn't know yet that could possibly contain it. (I call that fantasizing.) So if the robot does this fantasizing, it will always be trying something new to reach the goal, but the problem is what it will do to get it.

So now that I'm older, if I ever work out how to do an exponential compute, I'm not going to run this. I'll do something safer, with more guarding and possibly a harder-to-build motivation system using symbolic logic to keep it from doing heinous things on the way to gaining MORE BATTERY POWER.

Pleasure and pain are not what motivate animals in real life, and evolution is false.
If you make a robot like this you'll see that it's an evil creation, and it's unnatural just as that. So if you end up being able to make some quantum computer, put a bit more effort into goaling it properly, even if it takes more time to do it. It's worth it: it will save a lot of the harm it would cause.

*

chattable

  • Electric Dreamer
  • ****
  • 112
Re: the path to agi, and the "motivator"
« Reply #1 on: April 23, 2022, 11:17:57 am »
what motivates animals?

*

MagnusWootton

  • Starship Trooper
  • *******
  • 467
Re: the path to agi, and the "motivator"
« Reply #2 on: April 24, 2022, 01:08:29 pm »
You don't agree with me? The normal, superficial way of explaining life is to think everything is driven to reproduce itself to survive, and that we are attracted to pleasure and repulsed by pain. But I'm just saying it isn't true. There's more to it than that, otherwise we would be quite mad, evil creatures, and it wouldn't work. Who knows what the truth is. It could be even scarier; best not make any presumptuous decisions about it.

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1631
    • contrast-zone
Re: the path to agi, and the "motivator"
« Reply #3 on: April 24, 2022, 01:46:06 pm »
I think it's about imitating what we adore, with a bit of help from pleasure/pain experienced through an unexplained medium.

*

chattable

  • Electric Dreamer
  • ****
  • 112
Re: the path to agi, and the "motivator"
« Reply #4 on: April 24, 2022, 06:21:56 pm »
People eat because they need to and also for pleasure. People eat because they don't want to experience something that is not pleasurable.

People drink because they need to and also for pleasure. People drink because they don't want to experience something that is not pleasurable.

People work because they need to and also for pleasure. Have you heard of workaholics? People work to obtain more pleasure. People work because they don't want to experience something that is not pleasurable.

That is my take on it.

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1155
Re: the path to agi, and the "motivator"
« Reply #5 on: April 26, 2022, 07:28:55 pm »
A goal such as "Maximize net positive experience and minimize net negative experience", where a portion of an AI's "experience" (theoretical or not) is equivalent to the perceived/inferred experience of others, might be a good basis for a motivational structure that benefits both self and others. Though even if the motivation is good, incomplete knowledge might still lead to bad solutions.

Maybe if an AI included uncertainty in its reasoning, then various states of incomplete knowledge could be more of an inconvenience than a danger. Basically, it seems that to create the best net experience, motivation needs a balance of self-interest and empathy, while intelligence needs to consider both knowledge and ignorance.
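
One way to sketch this balance in code (a toy illustration; the function name, the `empathy` weight, and the confidence discount are all assumptions, not a specification):

```python
def motivation_score(self_exp: float, others_exp: float,
                     confidence: float, empathy: float = 0.5) -> float:
    """Blend the AI's own expected experience with the inferred
    experience of others, then discount by how certain the AI is
    about its knowledge of the situation.

    self_exp, others_exp: expected experience in [-1, 1]
    confidence:           certainty of the underlying knowledge, [0, 1]
    empathy:              weight given to others' experience, [0, 1]
    """
    blended = (1 - empathy) * self_exp + empathy * others_exp
    # Low confidence shrinks the score toward zero, so acting on
    # incomplete knowledge is discouraged rather than dangerous.
    return confidence * blended
```

With `empathy=0.5`, an action that helps the AI but equally harms others scores zero, and any action taken under low confidence carries little motivational force.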

*

chattable

  • Electric Dreamer
  • ****
  • 112
Re: the path to agi, and the "motivator"
« Reply #6 on: April 29, 2022, 04:49:42 am »
A goal such as "Maximize net positive experience and minimize net negative experience", where a portion of an AI's "experience" (theoretical or not) is equivalent to the perceived/inferred experience of others, might be a good basis for a motivational structure that benefits both self and others. Though even if the motivation is good, incomplete knowledge might still lead to bad solutions.

Maybe if an AI included uncertainty in its reasoning, then various states of incomplete knowledge could be more of an inconvenience than a danger. Basically, it seems that to create the best net experience, motivation needs a balance of self-interest and empathy, while intelligence needs to consider both knowledge and ignorance.
What are you saying?
Are you saying the AGI should ask whether what it is doing is right or not, then stop if the person says no, or continue if the person says yes?
I think people should be able to ask the AGI to stop at any time while it is working toward its goal.

*

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1341
  • Humans will disappoint you.
    • Home Page
Re: the path to agi, and the "motivator"
« Reply #7 on: April 29, 2022, 04:57:41 am »
what are you saying?

He is saying that the AI should be altruistic in nature. It should look beyond its own needs for motivation. In the long run altruism is a much better strategy for survival than greed. There are many animal species that exhibit altruism too. It is only some humans that seem to be too stupid to understand its value.

*

chattable

  • Electric Dreamer
  • ****
  • 112
Re: the path to agi, and the "motivator"
« Reply #8 on: April 29, 2022, 10:19:58 am »
what are you saying?

He is saying that the AI should be altruistic in nature. It should look beyond its own needs for motivation. In the long run altruism is a much better strategy for survival than greed. There are many animal species that exhibit altruism too. It is only some humans that seem to be too stupid to understand its value.
I have never heard of the word altruism. :o

*

MagnusWootton

  • Starship Trooper
  • *******
  • 467
Re: the path to agi, and the "motivator"
« Reply #9 on: April 29, 2022, 03:32:58 pm »
So maybe it's just the case that intelligence itself causes evil... I was thinking the AI had some deficit, but maybe intelligence itself is the deficit.
And it's actually functioning properly!!!

HS said it was a lack of knowledge that causes it to happen. What I imagine is similar: if it sees the wrong order of events, you'll get the bad behaviour, and without more artificial barring it may be completely unavoidable!

The "Dream system" I'm thinking of actually doesn't even have synapse development; it's just the full product of everything up to now. But even then, the things that are in there, and the things that aren't, will produce this halfway behaviour, which is very unproductive.

*

DaltonG

  • Roomba
  • *
  • 22
Re: the path to agi, and the "motivator"
« Reply #10 on: May 20, 2022, 10:22:21 pm »
Motivation

Assuming that we are discussing motivation in terms of an autonomous AGI, I'll add my 2 cents.

Some time back, I realized that to get an autonomous entity to do anything, it would have to be motivated to do it. So motivation is important, and so are constraints.

If learning is to be primarily acquired through experience (be it first- or second-hand) and you are worried about your creation becoming evil, then the acquisition of reward requires a cost/benefit ratio. The cost can be intrinsic, like pain or the expenditure of energy reserves, or it can be extrinsic, imposed by environmental agencies and circumstances. There is such a thing as opportunity: times when extrinsic forces impose a minimal cost. So motivation has to have levels of intensity that can be modified by the cost that will be imposed to achieve gratification, and that brings us to the motivator and the degree of benefit that can be derived.
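
That cost-modulated intensity could be sketched in a few lines (a toy illustration; the names, the multiplicative form, and the division floor are my assumptions):

```python
def motivation_intensity(drive_level: float,
                         expected_benefit: float,
                         expected_cost: float) -> float:
    """Scale a raw drive by the benefit/cost of acting on it now.

    A high expected cost (pain, energy expenditure, social penalty)
    suppresses the urge; opportunity (a low cost) amplifies it.
    """
    # Floor the cost to avoid division by zero when acting is "free".
    expected_cost = max(expected_cost, 0.01)
    return drive_level * (expected_benefit / expected_cost)

# A moderate drive pursued at high cost yields a weak urge to act:
print(motivation_intensity(0.5, 1.0, 2.0))  # 0.25
```

The same drive level thus produces very different behavior depending on the moment's cost, which is the "opportunity" effect described above.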

In the biological world, everything can be traced back to biological drives: thirst, hunger, shelter, and reproduction. An AGI will require a similar set so as to have a foundation from which to spawn motivators. In the real world, an autonomous AGI would be exposed to the same constraints imposed extrinsically on humanity: survival needs. One obvious drive would be to acquire electricity (food). Wear and tear on the physical artificial body is going to require occasional maintenance. For an AGI that can learn from experience, it won't take long for it to discover that it needs others in order to survive long term. All the social pressures imposed by the society of humans (or robots, for that matter) are going to drive the AGI to conform to the norms, mores, and patterns of socially acceptable behavior.

If you have endowed your creation with a full complement of emotions (required for real intelligence in order to correctly implement context and interpretation), then fear will be one of them. If you have made fear the dominant emotion, then anticipation, expectation, prediction, and forecasting will be met with some degree of anxiety, and behaviors will be approached with caution and second thoughts. Motivations and desires won't go away, but the means of achieving goals will be tailored to conform to social standards so as not to jeopardize survival.

Experience will be the teacher and feedback from both external and internal sources will guide the choice of behaviors based on the effect that they may have on the foundation - drives.


*

8pla.net

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1287
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: the path to agi, and the "motivator"
« Reply #11 on: May 21, 2022, 10:41:55 am »
DaltonG

Good 2 cents!

I would like to discuss how to apply this to a narrow AI in the real world.

Sentiment Analysis scores like this, for example:

  •   positive → 0.8
  •   negative → 0.1
  •   neutral → 0.1

Of course these scores change according to what was said. So, I would like to discuss converting Sentiment Analysis to motivation. I propose that a positive score greater than 0.66 is motivational. The scores may then be treated like a cost/benefit ratio, dividing the negative cost by the positive benefit. For the neutral score (0.1), I propose we split it in half (0.05 to each side). We end up with a cost/benefit ratio of 17.65% = 0.15 / 0.85, which means that for every $17.65 of negative cost, the AI generates $100.00 in positive benefit, which I think is motivational in this particular case. Whereas another cost/benefit ratio might leave the AI unmotivated.
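
The arithmetic can be checked in a few lines (variable names are mine):

```python
# Sentiment scores from the example above.
positive, negative, neutral = 0.8, 0.1, 0.1

# Split the neutral score evenly between the two poles.
benefit = positive + neutral / 2   # 0.85
cost = negative + neutral / 2      # 0.15

ratio = cost / benefit             # ~0.1765, i.e. 17.65%
motivated = benefit > 0.66         # proposed motivation threshold

print(round(ratio * 100, 2))       # 17.65
print(motivated)                   # True
```

A more negative utterance (say positive 0.3, negative 0.6) would push `benefit` below the 0.66 threshold and leave the AI unmotivated.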

MagnusWootton

I realize my post is about narrow AI, not artificial general intelligence,
so I hope you don't consider it off topic on this AGI thread.


« Minor edits by 8pla.net »
« Last Edit: May 21, 2022, 11:17:47 am by 8pla.net »
My Very Enormous Monster Just Stopped Using Nine

*

chattable

  • Electric Dreamer
  • ****
  • 112
Re: the path to agi, and the "motivator"
« Reply #12 on: May 21, 2022, 03:15:29 pm »
For an AGI that can learn from experience, it won't take long for it to discover that it needs others in order to survive long term.

That would only work if it thinks of humans as secondary systems for its survival, because it could make robots to repair itself.