Do we want it?

  • 39 Replies
  • 5542 Views
*

Zero

  • Eve
  • ***********
  • 1287
Re: Do we want it?
« Reply #15 on: June 16, 2020, 07:44:23 am »
If my input sounded spiteful, please forgive me; it wasn't meant to be at all. And you know how much I like you, Ivan.

The world has become a system, a mechanism... like a car. I'm not suggesting there are evil people / evil corp working on AGI to take control of the world. I'm not a child. So, no "evil parts" in the car. Ben Goertzel and Elon Musk aren't evil. Still, what happens eventually, if you don't grab the wheel of an ever-accelerating car? And more importantly, why is it so taboo to suggest the use of brakes?
« Last Edit: June 16, 2020, 08:06:31 am by Zero »

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Do we want it?
« Reply #16 on: June 16, 2020, 08:03:35 am »
Zero, you've said nothing wrong, it's a perfectly good conversation.

Maybe there are no genuinely evil people, but some motives may be questioned. And those motives may be in conflict with altruism.

A lot of people see the future AI as a black box capable of doing miracles, a black box that will be an answer to all the world's problems. But maybe it will be limited in the same ways we are, so we may not approach the singularity even after that invention.

Also, what if things already are what they are supposed to be? There is time, and there is an amount of progress. What if the amount of progress is tied to time, and the coefficient has an optimal value near the one we are experiencing? What if it would be unethical to enlarge that progress coefficient because of, I don't know, some natural laws that are yet to be discovered?

*

Zero

  • Eve
  • ***********
  • 1287
Re: Do we want it?
« Reply #17 on: June 16, 2020, 09:56:39 am »
Yeah, I think it's safe to say that progression should be tied to our understanding of the consequences. And democracy should be involved in the process. We have to stop inventing wildly. Same goes for genetics btw.

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • WriterOfMinds Blog
Re: Do we want it?
« Reply #18 on: June 16, 2020, 03:41:28 pm »
Quote
People who are happy aren't driven to change the world for good or bad.

My own happiness isn't my main goal in life.

And I don't think Zero's comment about making Hitler unsatisfied had anything to do with spite. It was a comment in the vein of "the only thing necessary for the triumph of evil is that good men do nothing." We didn't make Hitler unsatisfied because we wanted him to be unsatisfied; we made him unsatisfied by stopping him from hurting other people. And that was the right thing to do.

I suppose one could say that making Hitler unhappy was regrettable, but it was far superior to the alternative, and hence not unethical. "Nothing you could possibly do is ethical" may be a philosophically interesting position, but it's not useful as a guide for behavior, which is what we would like ethics to be, yes? I would say instead that "nothing you could possibly do is perfect."

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Do we want it?
« Reply #19 on: June 16, 2020, 04:34:02 pm »
Um, Art, trees etc. are fighting for their lives, growing upright, shedding seeds, shooting out roots, etc. All species do this.

All dogs, fish, etc. do too. Humans who have developed a lot of understanding of the universe/world are clearly trying to stop pain, death, and eventually "ageing". The way they talk about it and do recreational activities shows me well enough that it is desired. We know about ageing. It's a bad thing. While fish aren't aware of ageing, humans are, and are actively fighting it!! It's truly a universal horror, and species fight it!

Survival is evolution.
Emergent          https://openai.com/blog/

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Do we want it?
« Reply #20 on: June 16, 2020, 04:48:01 pm »
Maybe I'm overreacting with this nothing-is-ethical thing.

Quote
"Nothing you could possibly do is ethical" may be a philosophically interesting position, but it's not useful as a guide for behavior, which is what we would like ethics to be, yes? I would say instead that "nothing you could possibly do is perfect."

So, we build something, and it is a great thing, but it has an imperfection. More like a flaw that we hate and want to do something about. We make this flaw less intense, but that is still not it; the flaw is still there. We repeat the process a couple of times, and it always gets less bad. Then we get inspired by an idea, we work really hard, and we fully and completely optimize it. Now it is optimized so much that we can prove this flaw just can't get any less intense, so we *know* that the final result just can't get any less bad.

Now the little something is there, it can do great stuff, but it also does something bad. Is all the goodness of this little something acceptable just because its characteristic flaw is overall the least bad we can get?

*

Zero

  • Eve
  • ***********
  • 1287
Re: Do we want it?
« Reply #21 on: June 16, 2020, 05:19:35 pm »
@WOM thanks for clarifying my words; I'll always be non-English.

@Ivan, I guess you're talking about weighing things up. Mastering fire was rather a good idea, even though it can also burn people. Which is bad.

We already know how to make a benefit/risk evaluation; doctors do it every day. But we can't make this evaluation if we don't know the type of effects we're going to trigger. It's not a matter of intensity; it's rather the nature of what we make that's unknown. And we can't say it's a good thing just because it started from a good, altruistic intention/motive. We can't say, "let's see what happens". We can't say, "I'm doing this in my garage; there's no way it can get out of my private dev area". What is private nowadays? Unless of course you're inside an airgap and keep it absolutely secret. But no, you're alive, you communicate. You're part of a giant whole, which is currently challenging itself, when it should start wondering.

*

yotamarker

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1003
  • battle programmer
    • battle programming
Re: Do we want it?
« Reply #22 on: June 16, 2020, 09:44:10 pm »
I think if enough people just started making AGI skills for a universal AGI skill platform, we would have
a sweet AGI waifubot.

*

frankinstien

  • Replicant
  • ********
  • 653
    • Knowledgeable Machines
Re: Do we want it?
« Reply #23 on: June 17, 2020, 01:08:50 am »
But do we (and will AI) just go by feel? Or is there some ultimate codification of ethics?

Deciding between the lesser of two evils cannot be coded as rules, so nature arrived at the best available solution: using emotions to quantify choices. However, there is no panacea, and because emotions can be regulated through conditioning, biology can turn what was once a punishment into a reward and vice versa. So things like eating disorders, self-mutilation, and even a taste for spicy foods are possible...

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Do we want it?
« Reply #24 on: June 17, 2020, 02:44:57 pm »
Not everything is always black and white.

Sometimes we go: "I'm sorry, I know it'll be wrong for someone around, but I'll do it anyway. Sorry..."

Should a machine be allowed to behave like this?

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1178
Re: Do we want it?
« Reply #25 on: June 17, 2020, 07:21:33 pm »
Quote
Not everything is always black and white.

Sometimes we go: "I'm sorry, I know it'll be wrong for someone around, but I'll do it anyway. Sorry..."

Should a machine be allowed to behave like this?

If personal survival is the top goal, then why do we like video games, books, and movies? I propose that the highest goal of most lifeforms is to introduce what humans perceive as satisfying narratives to the agents of their ecosystem of life. These narratives often feature conflict and the overcoming of obstacles. This keeps the ecosystem healthy, and the experiment of life can progress into the future as one interconnected, self-strengthening system.

We stress other lifeforms to help strengthen them, so they are able to stress us in turn, so that we can receive an impetus to grow ourselves. The trick seems to be maintaining the right level of difficulty for each other, so the game remains constructive even as we all gradually level up.

After creating the atom bomb to prevent the continuation of the war, Oppenheimer said “It is perfectly obvious that the whole world is going to hell. The only possible chance that it might not is that we do not attempt to prevent it from doing so.”

So, if we try to save the world in that way, by attempting to completely stop all harm, the ecosystem might become sick like someone who doesn’t want to bother their muscles, or doesn’t want to risk exposing their skin to sunlight. On the other hand, overexerting yourself leads to permanent injury, and too much sunlight leads to sunburn.

So I don’t think we should create rigid rules for AI. Instead we should make it free enough to eventually realize some of the perils of its own ideas, and allow it to adjust its course when it notices that the current one leads somewhere ill-advised.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Do we want it?
« Reply #26 on: June 17, 2020, 09:37:42 pm »
Just to toss in another consideration... While not all software companies are or might be interested in AI development or AI programmers, I did a quick search and found 30 potential employer listings per page, and easily 5+ pages. So yes, it appears that AI software devs are in quite some demand, depending upon the environment and/or program requirements.
In the world of AI, it's the thought that counts!

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6860
  • Mostly Harmless
Re: Do we want it?
« Reply #27 on: June 18, 2020, 03:16:45 am »
This seems to be relevant to this discussion.

https://www.weforum.org/agenda/2020/02/where-is-artificial-intelligence-going/

At this point in time I'm for democratising AI. It should be something like electricity is now. I think it should be understood on a level whereby you know what it is and what it can do, to whatever level is required by the person involved in using it.

*

Zero

  • Eve
  • ***********
  • 1287
Re: Do we want it?
« Reply #28 on: June 18, 2020, 05:44:45 pm »
I'll read it.

What has never happened in the history of inventions, and is happening now for the first time, is this: we're beginning to create autonomous things. It is true for AI, and for genetics too. Our creations are gaining more and more autonomy. The effect they have on their surroundings gets less and less predictable.

Edit: reading done. Why is it so hard to suggest a moratorium?
« Last Edit: June 18, 2020, 07:43:47 pm by Zero »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: Do we want it?
« Reply #29 on: June 19, 2020, 01:34:24 pm »
It's not hard to suggest a moratorium, but it's really difficult to enact one or put one in place. A moratorium usually means a STOP or HALT in research, production, marketing, distribution, etc., and that always ends up reaching into someone's pocket, taking their money or stopping them from making more of it!! Either way, moratoriums are usually frowned upon as a lack of progress. Then, once things get totally out of hand, many will shout, "There should have been a moratorium on releasing them," etc., blah... blah...

Sorry...too late once the gates are open.

I get your point but putting on the brakes of this runaway vehicle might be extremely difficult to do.
