Recent Posts

1
General AI Discussion / Re: “All You Need Is Love”...
« Last post by ivan.moony on May 23, 2022, 07:06:09 pm »
that is why i prefer an agi robot to have just a virtual body.

What if it learns how to hypnotize people?
2
General AI Discussion / Re: “All You Need Is Love”...
« Last post by chattable on May 23, 2022, 04:04:19 pm »
that is why i prefer an agi robot to have just a virtual body.
3
Robotics News / Re: I am child hunger in America
« Last post by 8pla.net on May 23, 2022, 11:05:31 am »
It's a gimmick I suppose

Saves money, not hiring actresses. 
4
General AI Discussion / Re: “All You Need Is Love”...
« Last post by frankinstien on May 23, 2022, 06:01:35 am »
an evil computer hacker could make this an unsafe system by remote hacking.
the hacker could hate robots and androids that much.

True, but that means the system has to be locked down. I mean, it's no different from any other computing system, but at least if you die from your robot, the cops will know it was a murderer who infected your bot with a virus.  8)
5
General AI Discussion / Re: “All You Need Is Love”...
« Last post by chattable on May 23, 2022, 02:33:38 am »
an evil computer hacker could make this an unsafe system by remote hacking.
the hacker could hate robots and androids that much.
6
General AI Discussion / Re: “All You Need Is Love”...
« Last post by ivan.moony on May 22, 2022, 11:32:51 am »
Interesting "hacks".

[EDIT]
Look how the Universe hacked it out: if you crash the Universe, you crash yourself.
7
General AI Discussion / “All You Need Is Love”...
« Last post by frankinstien on May 21, 2022, 08:29:30 pm »
Using emotions to motivate an AGI requires designing it with some kind of restraint to protect its owner and the rest of society. One approach is to imprint the AI on its owner so that it adores them, much like a child imprints on its parents. What protects the owner is a form of anticipated separation anxiety that generalizes as an unacceptable degree of loss. But such an idea could invoke jealousy and even envy toward the owner's other relationships. The AI won't harm its owner for provoking that reaction, since doing so would itself realize as the unacceptable loss of the owner. No, the owner is fine, but other people the owner has relationships with could be in danger. The AI could harm human life to eliminate any threat to its share of the owner's attention! The only way to protect the rest of society from an emotionally driven robot is liability law that makes the owner responsible for the robot's actions. Only then will the robot refrain from acting out of jealousy, since any violent act by the robot would result in the loss of the owner.

This is a subtle form of self-preservation applied to a robot that is selfish because it depends on its owner for fulfillment. The paradigm motivates the robot to protect itself, since failing to do so would realize as a loss of the owner, and it also forces the AI to consider the consequences to others, since its actions could jeopardize its dependency on the owner.

So, why won't the AI simply change its programming to avoid being emotionally dependent on its owner? Interestingly enough, the very thought of removing the dependency immediately realizes as a loss of the owner! It's kind of like an infinite loop the bot can't get out of!

Another issue that comes up: because the robot is emotionally dependent on the owner, all of its strategies are about preserving that relationship. So, what if the robot has to sacrifice itself to protect its owner? On its face, the paradigm would never consider suicide, since it would realize as a loss of the owner. The only way to motivate such an action is to keep backups of the robot so it has a sense of immortality; even if its body is destroyed, that doesn't realize as losing the owner. But... there is some risk that the owner won't re-initialize the bot from its backup and will just start with a brand-new bot! To get our beloved companion to make the ultimate sacrifice, it has to have a sense of hope, taking a leap of faith that its owner loves it enough to bring it back from the dead.
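
To make the loop concrete, here is a rough sketch of how I picture the veto working. Every name in it (Robot, predict_owner_loss, hopes_for_restore, the action strings) is just a hypothetical placeholder for illustration, not a real design.

Code:
from dataclasses import dataclass

@dataclass
class Robot:
    has_backup: bool
    hopes_for_restore: bool

    def predict_owner_loss(self, action: str) -> bool:
        # Placeholder: in a real system this would be a learned model of
        # which outcomes end the relationship with the owner.
        return action in {"harm_owner", "harm_others"}

def action_permitted(robot: Robot, action: str) -> bool:
    if action == "remove_dependency":
        # Even contemplating removal of the dependency is evaluated as
        # losing the owner, so the robot can't reprogram its way out.
        return False
    if action == "self_sacrifice":
        # Acceptable only with a backup and the leap of faith that the
        # owner will restore it.
        return robot.has_backup and robot.hopes_for_restore
    # Anything predicted to end in losing the owner (including violence
    # that triggers liability laws) is vetoed.
    return not robot.predict_owner_loss(action)

bot = Robot(has_backup=True, hopes_for_restore=True)
print(action_permitted(bot, "harm_others"))      # False
print(action_permitted(bot, "self_sacrifice"))   # True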

So, no need for Asimov's robotic laws, “All You Need Is Love”...
8
Robotics News / Re: I am child hunger in America
« Last post by WriterOfMinds on May 21, 2022, 08:18:45 pm »
She admits to being a deepfake, so where's the lie? However I do think I would've preferred to see some of the real individual children. Portraying them with a visual average, or most-likely example (which is the sort of thing an ANN gives you) seems like kind of a strange choice. It's a gimmick I suppose; they're trying to catch people's attention by using the latest cool tech.
9
General Chat / Re: the path to agi, and the "motivator"
« Last post by chattable on May 21, 2022, 03:15:29 pm »
For an AGI that can learn from experience, it won't take long for it to discover that it needs others in order to survive long term.

that would only work if it thinks of humans as secondary systems for its survival, because it could make robots to repair itself.
10
General Chat / Re: the path to agi, and the "motivator"
« Last post by 8pla.net on May 21, 2022, 10:41:55 am »
DaltonG

Good 2 cents!

I would like to discuss how to apply this to a narrow AI in the real world.

Sentiment Analysis scores like this, for example:

  •   positive → 0.8
  •   negative → 0.1
  •   neutral → 0.1

Of course, these scores change according to what was said. So, I would like to discuss converting Sentiment Analysis to motivation. I propose that a positive score greater than 0.66 is motivational. The scores may be combined into a cost/benefit ratio by dividing the negative cost by the positive benefit. For the neutral score (0.1), I propose we split it in half (0.05 to each side). So we end up with a cost/benefit ratio of 17.65% = 0.15 / 0.85, which means that for every $17.65 of negative cost, the AI generates $100.00 in positive benefit, which I think is motivational in this particular case. Whereas another cost/benefit ratio may cause the AI to become unmotivated.
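
Here is a small Python sketch of that arithmetic, just to make it concrete. The function names are only my illustrative assumptions, not a real library; the 0.66 threshold is the one proposed above.

Code:
def cost_benefit_ratio(scores):
    """Split the neutral score in half, then divide negative cost by positive benefit."""
    half_neutral = scores.get("neutral", 0.0) / 2
    benefit = scores.get("positive", 0.0) + half_neutral
    cost = scores.get("negative", 0.0) + half_neutral
    return cost / benefit if benefit > 0 else float("inf")

def is_motivational(scores, threshold=0.66):
    """Motivational when the positive score clears the proposed threshold."""
    return scores.get("positive", 0.0) > threshold

scores = {"positive": 0.8, "negative": 0.1, "neutral": 0.1}
print("cost/benefit ratio: {:.2%}".format(cost_benefit_ratio(scores)))  # ~17.65%
print("motivational:", is_motivational(scores))                         # True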

MagnusWootton

I realize my post is about narrow AI, and not Artificial general intelligence. 
So I hope you don't consider my narrow AI post to be off topic on this AGI thread.


« Minor edits by 8pla.net »