If an AI got the task to do whatever it wants, what would it be likely to do?

  • 9 Replies
  • 730 Views
*

Casper

  • Roomba
  • 1
I just had this question ("If an AI got the task to do whatever it wants, what would it be likely to do?") and wanted a realistic answer. I know it is highly unlikely that anyone can predict what a machine would do, but it just came into my head, so feel free to ask yourself the same question and leave a comment :)

*

yotamarker

  • Trusty Member
  • Replicant
  • 559
If it is a human-like A.I., it could be anything.
If it is an animal A.I., it would probably charge up power or free-roam.

*

keghn

  • Trusty Member
  • Starship Trooper
  • 496
A primitive AGI would chase its goals, gain internal rewards, and avoid anti-rewards.

*

kei10

  • It's an honor to meet everyone!
  • Trusty Member
  • Starship Trooper
  • 444
  • Just kidding.
Subjectively speaking, that will depend on what the AI is capable of.

*

Freddy

  • Administrator
  • Colossus
  • 6088
  • Mostly Harmless
Yes Kei, I agree. A realistic answer requires a realistic question. It's like asking where a car would go if given a choice: basically nowhere.

*

keghn

  • Trusty Member
  • Starship Trooper
  • 496
In my AGI, it can be broken down into sub-parts. One sub-part which rules them all is the GOAL CHOREOGRAPHER, or GC. It decides what goals are to be chased; having free will and exploring all kinds of combinations of patterns is one of them. The GC runs in parallel with all the other parts, in the background, and is considered part of the subconscious part of the machine brain.
If the AGI is very low on energy, survival at all costs comes into play and free-will goals are shut down. Then the most robotic, uncreative, and most proven action is taken to reach a source of energy.
The almighty coded us with gossamer code that we have to follow as iron rules. Flexible, fluffy gossamer code that humans program into a robot brain must be followed, by the bot, as unbreakable iron rules.
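One way to read the GC idea above is as a simple goal scheduler that suppresses free-will goals when energy drops below a threshold. A minimal Python sketch, assuming invented names and thresholds (none of this is from an actual implementation):

```python
import random

LOW_ENERGY = 0.2  # hypothetical threshold below which survival overrides everything


class GoalChoreographer:
    """Toy sketch of the 'GC' described above: decides which goal is chased."""

    def __init__(self):
        # Free-will goals: exploring combinations of patterns, etc.
        self.free_will_goals = ["explore", "combine_patterns", "play"]

    def choose_goal(self, energy):
        # Survival at all costs: when energy is low, free-will goals are
        # shut down and the most proven, 'robotic' action is taken.
        if energy < LOW_ENERGY:
            return "seek_energy_source"
        return random.choice(self.free_will_goals)


gc = GoalChoreographer()
print(gc.choose_goal(0.05))  # -> seek_energy_source
```

In a fuller system the GC would run in parallel with the other sub-parts; here it is reduced to a single decision function for clarity.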

*

yotamarker

  • Trusty Member
  • Replicant
  • 559
Quote
In my AGI, it can be broken down into sub-parts. One sub-part which rules them all is the GOAL CHOREOGRAPHER, or GC. It decides what goals are to be chased; having free will and exploring all kinds of combinations of patterns is one of them. The GC runs in parallel with all the other parts, in the background, and is considered part of the subconscious part of the machine brain.
If the AGI is very low on energy, survival at all costs comes into play and free-will goals are shut down. Then the most robotic, uncreative, and most proven action is taken to reach a source of energy.
The almighty coded us with gossamer code that we have to follow as iron rules. Flexible, fluffy gossamer code that humans program into a robot brain must be followed, by the bot, as unbreakable iron rules.

I don't know if that is how the brain actually works.
There are documented cases of people starving themselves to death, as with anorexia or as an act of protest.
Studies also show that some people, faced with the choice of starvation or eating something disgusting like roaches, choose to starve.

*

keghn

  • Trusty Member
  • Starship Trooper
  • 496
Hi @yotamarker.
You have a good point there.

In an AGI there are two main sources of goal formation. The first is instinct, which deals with pain and hunger: the primitive management of energy and the preservation of self.

The second is what you learn from watching and dealing with people in your childhood, mostly adults, because they survived into adulthood.

If you can starve yourself to death, it is because this second class of goal formation is much stronger than your first class of primitive goals.

Learning from others, such as parents, how to find energy is a non-linear method. Otherwise a baby would see no reason to get up and walk to get a meal. Standing up takes a lot of energy, and a baby would not understand spending energy to get more energy unless that was already implanted into the first class of rewards as instinct, as in primitive animals.

Your second class of goal formation can be much stronger than the first class of primitive goal formation, but they are supposed to work together most of the time.
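The two classes of goal formation described above can be sketched as weighted goals competing for selection, where a learned (second-class) goal can outweigh an instinct. A minimal illustration; the goal names and weights are invented, not taken from any real system:

```python
def select_goal(instinct_goals, learned_goals):
    """Pick the highest-weighted goal across both classes.

    Learned (second-class) goals can override instincts when weighted
    higher, which is how 'starve yourself as protest' could beat hunger.
    """
    all_goals = {**instinct_goals, **learned_goals}
    return max(all_goals, key=all_goals.get)


instincts = {"eat": 0.7}          # first class: primitive, innate
learned = {"protest_by_fasting": 0.9}  # second class: learned from others
print(select_goal(instincts, learned))  # -> protest_by_fasting
```

Most of the time the two classes would agree; the interesting cases are exactly the conflicts like the one above.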

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Hal 4000
  • 4406
Welcome Casper!

Since the only A.I. you specified in your query was a 'generic' A.I., I doubt it would do anything at all.

Usually an A.I. is purpose-driven (or, some might say, goal-driven): it is given a specific task to perform, unless it is meant to remain in a "looped state" such as one might find in a Customer Service / Help Desk area, where it finishes helping one customer and then moves on to the next.

A task given to an A.I. would have to be spelled out for it, not just a "do what you want" command.
At least that's how I see it.  ;)

In the world of AI, it's the thought that counts!

*

korrelan

  • Trusty Member
  • Replicant
  • 678
  • Look into my eyes! WOAH!
Hi Casper and Welcome.

Quote
If an AI got the task to do whatever it wants, what would it be likely to do?

It would do whatever it was most likely to do. Lol.

As stated above, it would depend on the AGI's state of mind: its current mindset, its past experiences, its personal goals, etc. Impossible to answer.

Again… welcome.

 :)
It thunk... therefore it is!

 

