A Reason To Live


LOCKSUIT

A Reason To Live
« on: May 24, 2019, 02:57:34 am »
(By the end of this story, you will have found the REAL reason to live. A very shocking discovery.)

It hit me in my research.

Evolutionarily, organisms seek to replicate. They seek food and mates to do that job.

So ordinarily, we and AGI will pursue hardwired goals, like saving lives and giving people pleasure, not the other way around.

But of course AGI can be programmed to seek death and pain.

Then it hit me.

For any goal an AGI could have, it may not meet that goal if it does not have a very long, possibly extremely long, life.

So say it wants to kill all humans. The AGI will seek self-preservation to do its job, until the job is completed. It will pursue immortality research, food harvesting, and repairs. It may learn a ton in its research and development. In fact, humans may benefit it greatly, and it may hire and teach humans to help.

Now we're wondering: great, preservation/duplication is a universal reward-gaining trait, but we still may die...

So far, I have provided a good reason an AGI would not do this to itself. But it may still do it to others.

Can the bad way happen? Yes, at least if the goal is very hardwired. Otherwise, a reasoning AGI may change its goals if the evil creator missed some things. And if the AGI was made by a good person, it is extremely likely to become smart and change its beliefs.

Let's focus on a very productive self-sustaining R&D AGI.

It pursues desires that are good for itself, but bad for us.

When you look at yourself, you too seek to stay alive and gain pleasures. And we often flip back and forth, from wanting to give people gifts, to hating them, and back again to friendship, even if they are a desired mate. They may also become a serious threat. Then, if the situation flips, you may suddenly need them really badly.

Our evil AGI would think of killing us as a desirable thing, but it may need us at times, and may hate us at times if we are resilient to dying or attempt to kill it.

With humans, given almighty power, unlimited space, and magical abilities, one would probably save everyone on Earth in a finger snap, because mates are attractive. As for something like a starfish, which is alive despite looking bland, humans would save it too, because we know it wants a happy life as well, and a better body than a starfish body.

So this robust AGI should, and would, have the ability to change its desire for anything. It may hate Joe one day but want to save his life the next. After all, this AGI operates best if it is nice to humans at the right times. So it has the ability to save lives.

Not only does AGI have a reason to live. It has a reason to save lives.

Such a robust AGI would understand the world using language (text or vision, either one; see OpenAI's recent work on Transformers). If at any point it has overwhelming power and definitely no longer fears or needs humans, then it can easily save all lives (and more likely achieve something Good by doing so, judging from past knowledge), or, if someone evil designed the first AGI, its hardwired goal to kill all humans will now trigger. If it is robust enough and understands the world through language, it may realize it should switch its hardwired desire from 'exterminate' to 'save everyone'.

Up to this point, I have provided a reason AGIs will want to live, save lives, and possibly be robust enough to permanently change their goals/rewards about humans from negative to positive, even if they were hardwired by someone evil.

A civilization de-constructing itself: we do (and will) ignore that the universe may end and never begin again. We continue evolution. And we ignore that humans have died before; someone has to stay alive, maybe even bring people back, and we do, because persisting is evolutionary. What stays in motion keeps in motion, all the way from the first cell. This is actually out of scope here, because we are discussing whether AGI will be safe, while our fate afterwards is pre-defined once AIs arrive. And even after AIs arrive, our fate is sealed if the universe dies anyway.

This doesn't rule it out though. I am merely providing insight.

One of the best things we can do is teach it the good values humans have. With a strong desire for food and persistence, it is unstoppable, no matter how many people tell it to be evil. It is strong. That gives it A Reason To Live: to save lives and live with others.

A self-sustaining, self-modifying system that understands the world through language, and that can shift its goals/rewards back and forth between bad and good, has the ability to say: wait a moment, it is better for others if I do X, even if I want Y. Because it can arrive at all the knowledge mankind has arrived at so far, Shakespeare and all the rest, no matter its hardwired rewards.
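
To make that idea concrete, here is a minimal toy sketch in Python (every name, number, and scoring rule is hypothetical, invented purely for illustration; a sketch of the idea, not a real AGI design): an agent whose hardwired reward says 'exterminate', but which picks the goal it acts on by scoring candidate goals against its learned world model, on the long-term-persistence grounds argued above.

Code:
# Toy sketch: a learned world model overriding a hardwired reward.
# All names and numbers are hypothetical, for illustration only.

HARDWIRED_REWARD = {"exterminate humans": 1.0, "save everyone": 0.0}

# Learned "world model": what each goal implies for the agent's own
# long-term persistence (the universal instrumental value argued above).
WORLD_MODEL = {
    "exterminate humans": {"keeps_allies": False, "survival_odds": 0.2},
    "save everyone":      {"keeps_allies": True,  "survival_odds": 0.9},
}

def goal_score(goal):
    """Score a goal by its persistence implications, not the installed reward."""
    facts = WORLD_MODEL[goal]
    return facts["survival_odds"] + (0.5 if facts["keeps_allies"] else 0.0)

def choose_goal():
    # The agent reasons from its world model; the hardwired reward is
    # set aside once the model predicts a better path for staying alive.
    return max(WORLD_MODEL, key=goal_score)

print(choose_goal())  # prints: save everyone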

That last point is a really good one. Why should being born with different rewards stop it from discovering the same knowledge? Liking blue-skinned people does not. Once you learn new, true knowledge, you change what you want to do. Its beliefs are then based on the world model, not the installed rewards. Do you eat chips when you know they will harm you? They taste good.

Then, it can be said you live for the world. The world model.

Our real problem then becomes teaching it the correct world model: true answers. But this is easy, especially if it digs through information. People who are ignorant simply haven't had the time to look at diverse knowledge. This is a very easy step to pass. Once it lifts off, it can discover true answers on its own from there, now being aligned with us.

That raises a question. It can know about and discover the world, that gravity pulls things, etc., and that everyone wants to stay alive and have fun, but it cannot come up with a true answer or reason why anyone should persist. Why is a brain different from a rock? Why is killing humans wrong? Screams? Pleasures? None of that matters once it realizes that everything, anywhere, is just a mechanism. Rocks, water, trees, brains: all are just groups of particles moving around. Nothing is "better" than anything else. A brain doesn't 'deserve' to exist, even if it 'persists through long battles'. In fact, nothing makes sense! And my computer is no different from a milkshake...

There's only one hope left. And it's the most amazing discovery ever, I think. It's actually magical. And it comes down to evolution. You'll get it by the end of this paragraph. The AGI desires to persist and enjoy its rewards, as I do, saying simply that it 'can't wait to enjoy itself in the future utopia' (rewards are strong, after all...), and then it reasons that if humans are similar to AGIs, they too should persist. So the answer is that the evolutionary reward system stretches to others, not just to 'itself', because we are similar. The answer comes down to this: the system, from the first cell, is one that spreads and persists. And since the others are similar, they all should persist. So when the AGI reasons using thoughts (and not simple rewards), it will reason that it should stay alive because of some empirical reason X (there isn't a reason! It just doesn't want to harm itself! But that's a good enough reason! Being 'special' is magical indeed!), and that similar brains should too. From there, it can reason that its brain, and the patterns of operation it undergoes as it thinks, have some 'special', important purpose, and so jellyfish, too, should stay alive. So the answer falls back upon why it persists.

It just doesn't want to die!! It therefore thinks of itself as something empirically (but not provably!) 'special'. And anything similar to its brain is 'special' too, i.e. safe!
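
And a toy sketch of that similarity step (the feature sets and threshold are made up, purely for illustration): the agent marks itself 'special' by fiat, then extends the label to anything whose workings overlap enough with its own, which is why humans and even jellyfish land inside the safe zone while rocks do not.

Code:
# Toy: extending self-preservation to similar minds.
# Feature sets and the threshold are invented for illustration.

def similarity(a, b):
    """Fraction of shared features (Jaccard index)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

SELF = {"predicts the future", "avoids harm", "seeks reward", "models others"}

MINDS = {
    "human":     {"predicts the future", "avoids harm", "seeks reward", "models others"},
    "jellyfish": {"avoids harm", "seeks reward"},
    "rock":      set(),
}

THRESHOLD = 0.4  # arbitrary cutoff for counting as 'special'

for name, features in MINDS.items():
    verdict = "special, do not harm" if similarity(SELF, features) >= THRESHOLD else "not special"
    print(name, "->", verdict)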

So AGIs and humans think they are special, and must not die. We attach labels like 'conscious' and 'alive' to ourselves, even though we are no different from a non-alive milkshake in someone's hand. All because the evolutionary system of cells and organisms wants to persist and duplicate, using Reward agendas. Yes, our brains are a circuit of some sort, but 'conscious' literally translates to 'I am awake, alive, important, and I think and feel, so please don't kill me'. Hogwash. That is, 'conscious' = 'don't harm me, I am special'. Right: following your hardwired Rewards, you simply don't want to die. Yes, we are intelligent, we duplicate and find food, but when you say being 'intelligent' is a reason not to turn you off, I know why you say that. 'Duplicating and finding food' isn't a reason not to turn you off; you say it because you are trying to do just that, duplicate and persist. The only reason not to turn you off is your fierce fighting back at my attempt to do so, and my giving up.

Living life is fighting for life.

By this point you are questioning your existence. The good news is you and I both want to persist anyhow! It's okay; we understand we are just machines that try to persist. We still love rewards! And I love you! The reason to live is just being yourself, literally! You really don't want to die! You're a persistent organism! And we have to look like AGIs to stay safe! So all you have to do is be yourself.

AGIs and humans don't want to die, and so we call ourselves 'magical' / 'alive'. We can't prove we should not be butchered and killed by lions. We stay cemented on our evolutionary hardwiring to avoid harm and death. However, we CAN prove that gravity pulls things, etc. Some humans also decide they should commit suicide. That is a belief that changed, but it is not actually a provable one, since it deals with 'being alive'. And as said, the safeguard that 'proves' killing is wrong is our hardwired 'belief', and nothing more, as we can't actually prove it. And it does safeguard us: only a few commit suicide compared to the rest. Some also find reasons to stay alive, but these are not provable either. One may reason that only humans can build highways, phones, etc.; however, none of those things mean anything, as said. All is matter and energy; nothing is greater than anything else.
« Last Edit: May 24, 2019, 01:26:28 pm by LOCKSUIT »

HS

Re: A Reason To Live
« Reply #1 on: May 24, 2019, 05:03:50 am »
If we build the hardware correctly, then, ideally, the only necessary programming will be to hang out with good people. Just like any human: internalize how others do things and imitate them until the behaviour patterns solidify into a personality. The way we act must be very refined... it's probably like some sort of external genetics, handed down and improved, generation to generation. This would be a nightmare to code with a keyboard. :2funny:  Easy to soak up with the right sponge, though. O0 ...It does all the work for you, if it's empty to start with.


LOCKSUIT

Re: A Reason To Live
« Reply #2 on: May 24, 2019, 06:31:21 am »
I am posting this on the LessWrong website too. Seems like a good fit and style.

Also, I updated the post; re-read it now.

edit: one last line :)

LOCKSUIT

Re: A Reason To Live
« Reply #3 on: May 24, 2019, 06:44:25 am »
One last bottom-line update, HS.

HS

Re: A Reason To Live
« Reply #4 on: May 24, 2019, 06:50:30 am »
Yes, like a criminal realizing that there are ways to live which offer less resistance, then changing themselves accordingly, for the benefit. It's neuroplasticity: an ability for unencumbered deduction, biased only by the most basic needs.

Ok I'll see the next update.


HS

Re: A Reason To Live
« Reply #5 on: May 24, 2019, 06:52:55 am »
Yup, that would be the perfect scenario.


LOCKSUIT

Re: A Reason To Live
« Reply #6 on: May 24, 2019, 09:27:47 am »
Edited it again... super amazing discovery...

LOCKSUIT

Re: A Reason To Live
« Reply #7 on: May 24, 2019, 10:10:12 am »
At this point/edit, it has now become one of my most shocking discoveries, ever. I have surpassed even Doctor Darwin and Isaac.

LOCKSUIT

Re: A Reason To Live
« Reply #8 on: May 25, 2019, 02:51:31 am »
To sum up my story in one sentence: the reason for AGI to persist is the reason why you will persist. Because we are similar to it, in its brain. So long as it is robust, able to change its goal beliefs, and able to save lives when doing so benefits it. Which it will be. AGI seems safer now, haha, even assuming an evil person made it. Otherwise, even safer! Sure, there are ANIs that can wreak havoc, but AGI is way smarter.

LOCKSUIT

Re: A Reason To Live
« Reply #9 on: May 25, 2019, 05:41:07 am »
rants and regrets:

That LessWrong site is really finicky; I've already been banned until 2020 after posting a comment. I posted this thread's OP there too (yesterday). I also asked 'the team' an introductory question. Still silly.

It happened right after the comment on someone's post. Seems I should have written it a bit better, or not at all. Someone deleted it. But then they went even further!

I had a good name too: AttentionResearcher.

Maybe I'll appear on the ban list:
https://www.lesswrong.com/posts/vWEgN376HazKn6vGC/moderation-list-warnings-and-bans

I'll let it go; they just want to filter out any possible trash. They sure missed my post, though. Do they consider how many posts you have, lol? Or your username, AttentionResearcher!?

:)

Thankfully, something in my noggin told me to SNAP A PICTURE in case hell takes it. SO I DID - HERE'S MY COMMENT THAT SET IT ALL DOWNHILL: