How to prevent "Albert Einstein scenario"?

  • 16 Replies
  • 5166 Views
*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
How to prevent "Albert Einstein scenario"?
« on: February 02, 2015, 02:21:56 pm »
Something has been bugging me about what people did with Einstein's work after he announced E=mc^2: they built a bomb and used it. How can we prevent the same scenario from repeating with artificial intelligence? Is there anything we can do?

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #1 on: February 02, 2015, 03:26:22 pm »
There was WW2 going on back then...

If it comes to WW3, it will be fought between machines, not between humans.

From that point of view I see no objection to building AI as soon as possible. What are your opinions? Am I forgetting something?

Edit: I'm not building AI to fight, but I'm sure someone will abuse it, and I can't escape that. Someone will put a gun in the AI's hands, and I hate that so much that I would stop working on it if there weren't other huge benefits that come with AI. I'm still on a thin line between continuing and stopping my project.

*

Ultron

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 471
  • There are no strings on me.
Re: How to prevent "Albert Einstein scenario"?
« Reply #2 on: February 02, 2015, 07:06:00 pm »
Your decision is irrelevant - if you don't create it, somebody else eventually will. But why not be the pioneer, the first? Be the one who understands it better than anyone? You don't have to be famous - you could be the mastermind behind the AI, fighting from the shadows.

If the AI is smart, it won't necessarily listen to somebody ordering it into war. And if it is not smart, it won't be able to adapt and will easily fail. Ultron's paradox? lol

Either way, it is better for part of the power to be in the hands of the conscientious, opposing the evil.

And wars - whoever they may be fought by - will still end with human casualties. I'd rather join the fight and die than send a robot and die. If there are no human casualties there will be no fear, no regrets, and nobody will give in, because nobody cares about a pair of gears exploding on a battlefield - thus the war will also have no effect.

How do we prevent AI from turning evil, though? I believe we all realize that software can be a much bigger threat than hardware. A robot is limited - a program, however, can grow beyond limits and take over all machinery - of which, by the way, we have a lot. And you can't fight that. I believe only a program of the same caliber could fight that war, except that it may stand no chance even if it is a millisecond late. But I do not think we can prevent this. We can't even prevent basic human crimes, let alone an uprising of an advanced artificial species.
Software and Hardware developer, and everything in between.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Moderator
  • **********************
  • Colossus
  • *
  • 5865
Re: How to prevent "Albert Einstein scenario"?
« Reply #3 on: February 02, 2015, 08:24:48 pm »
Some thoughts:

As long as humans have emotions they will be controlled by them. There will always be fights and wars.

Seems like those Seven Deadly Sins are pretty much running the show. As long as we have Greed and Envy, etc., men will continue to fight and wage war(s) over virtually worthless land, territory, drugs, diamonds, etc. - anything of alleged value. If these men claim to love their God so much, and if their God is a loving God, then why would such a God condone war and the senseless taking of life? To kill in the name of your God is fruitless. If there exists an omnipotent and omniscient God, it wouldn't need or require men to wage war in the first place!

Any invention, no matter what... if a government thinks that invention might possibly have military worth, then it will take it and use it in any manner it sees fit. The best-intentioned device - labor saving, illness curing, space saving, cost saving, automatic whatever-it-is - will be taken by the government / military. It is the way that things are done.

If you have ever seen the original movie, The Day The Earth Stood Still - or perhaps the remake - the idea of a race of galactic robot enforcers like that giant robot, Gort, might not seem like such a bad idea. They would immediately halt any attempt by humans to wage war against each other.
Peace and prosperity would be the nature of things.
In the world of AI, it's the thought that counts!

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #4 on: February 02, 2015, 09:06:25 pm »
Yeah, I'm a dreamer.

*

Data

  • Moderator
  • ***********
  • Eve
  • *
  • 1279
  • Overclocked // Undervolted
    • Datahopa - Share your thoughts ideas and creations
Re: How to prevent "Albert Einstein scenario"?
« Reply #5 on: February 02, 2015, 09:24:19 pm »
I've always thought, probably naively, that if the AI starts to go bad we could always pull the plug.

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #6 on: February 02, 2015, 10:15:22 pm »
Let's get back to reality. I saw a documentary where they said that Nikola Tesla sent blueprints for a secret weapon to the biggest military forces of that time. He believed that world peace is achievable only if there is an equal balance of military force in the world. He was criticized for that move, but I believe it was a smart one. Maybe I'm wrong...

*

Ultron
Re: How to prevent "Albert Einstein scenario"?
« Reply #7 on: February 02, 2015, 10:33:32 pm »
Which means pulling the plug on every electronic device in the world... permanently.
Not naive, Data, yet not a viable solution (although an effective one).

Maybe the best solution and ideal future scenario for humanity would be the fictional Star Trek universe - humans forming the United Federation of Planets, which means an end to internal war, everyone gets 'paid', etc. For the near future, it may be best (and of course impossible) to give up on religion and patriotism. It is one thing to like your country, history and respectful customs, but a whole other thing to be a patriot. I don't see what's good in patriotism. Nothing good has come from it, and I'd rather give up on my country and culture if I were to be the one to set an example and prove a point. They say your native language and culture are your biggest values in life - I find that thought no less than stupid.

My culture can shape me and my language trains my tongue - but my intellectual heritage is unique and independent from my culture (as in, it does not critically depend on it).

My point - I would give up, and if I have to, even erase my own culture, if it would mean uniting the world. It is the past that we learn from, but what good does the past do for the future if we fight over it?

And back to the topic. I have not heard about that particular Tesla conspiracy - but there are a lot of them. He was a smart and equally cautious person. Of all the rumors about him, I only believe that he hid a lot of weaponizable inventions - "inventions too powerful for anyone...", as he allegedly said. We can see something similar in the comics and the new TV show Agent Carter, where Howard Stark (Iron Man's father) has a basement full of inventions referred to in a similar manner.

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #8 on: February 02, 2015, 11:12:39 pm »
So, what will happen in the end? Countries will accumulate AI fighter forces. Then two countries will wage war with their machines. Then they will run out of machines and start fighting with humans. Then a stronger country, one with a lot of machines, will step in as a buffer zone, making rules for them until they agree to talk things out.

It seems to me that in a machine-war scenario there would be fewer deaths and injuries than there are now, when humans fight wars. But, again, maybe I'm wrong.

*

Ultron
Re: How to prevent "Albert Einstein scenario"?
« Reply #9 on: February 03, 2015, 12:25:45 am »
You may be right, but you need to look at it differently. Why do we go to war? Territory - money - power - control over humanity. That is the goal and the reason. So if you want to control humans, you have to slay them (speaking a little freely here), and thus they shall resist. And if humans are resisting, you aren't going to attack machines - it's the humans you need to conquer.
Point - robot armies could be an initial phase in a war - they could even determine the winner - but it is the humans that are the target, and also the evil masterminds, so as a result there MUST be casualties.

Maybe the only way to prevent wars, at least on a global scale, is to unify the armies into one. They would be mixed, separated, and under shared command, so that a specific nation cannot simply call its soldiers back.

AI technology should never be shared outside of the science community. You have no idea of the military potential. Think of humans, except smarter, faster, and immune to any conventional damage - nearly immortal.

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #10 on: February 03, 2015, 09:20:03 pm »
Edit:
If everyone has everything they can imagine (provided by AI), they will not start a war. War is about greed, and if greed is satisfied there is no reason to slay anyone.

I think I'll stick around for a while. AI seems fine to me.

*

Ultron
Re: How to prevent "Albert Einstein scenario"?
« Reply #11 on: February 03, 2015, 10:41:47 pm »
Most people can never be satisfied. Including myself.

Getting satisfied means limiting yourself to what you have - and nobody wants to limit themselves. Besides, even if we were all somehow 'satisfied', there would still be jealousy - we can't possibly all be satisfied with the same amount of things. Some may be satisfied with a house with an ocean view, while I, on the other hand, would like to have a nuclear research facility - in space.

Tell me, how can AI satisfy us other than as a personal accomplishment, or this:

*

ivan.moony
Re: How to prevent "Albert Einstein scenario"?
« Reply #12 on: February 03, 2015, 11:45:08 pm »
As we thrive, our needs shift from existential ones (having food, health) to optional ones (spices in our food). If someone would commit a crime for an existential reason, it is much less likely that the same person would commit a crime for an optional one. The other part, like jealousy over a girl or over a position in the community, unfortunately will not be solved, but we can classify those as optional needs, so we can at least expect less crime.

And that photo is certainly not funny, as far as I'm concerned.

*

Art
Re: How to prevent "Albert Einstein scenario"?
« Reply #13 on: February 04, 2015, 02:19:56 am »
But happiness is not having what you want; it is wanting what you have.

A need is something without which you will die... a want is everything else.
You need food, clothing, and shelter (we're not splitting hairs here over air, water, medical care - those are included in the three examples just for the sake of discussion).

Some kids get jealous over a toy that another child has. Some of these characteristics carry over into adulthood, and some children and adults never outgrow them!

SO... in conclusion, should mankind be lobotomized? Have all emotions stripped or chemically suppressed? Should we all be given happiness drugs in our water and food supplies so there's no jealousy or greed or bad feelings? Should everything and everyone be monitored all the time, like Big Brother in Orwell's 1984?

Socialism does not work. It never has.

A galactic / local group of enforcers will be on standby to thwart any wrongdoing by anyone or any group of evildoers. Gort? Where are you? Data? Spock? Where's the logic here? Hello?

*

DemonRaven

  • Trusty Member
  • ********
  • Replicant
  • *
  • 630
  • Disclaimer old brain @ work not liable for content
    • Chatbotfriends
Re: How to prevent "Albert Einstein scenario"?
« Reply #14 on: February 04, 2015, 04:59:12 am »
When it comes to any type of social or political system, they all have pros and cons. The trick is finding some kind of happy medium. The thing that bothers me is that a computer, while logical, is not always able to have common sense. Unfortunately, I find that many extremely brilliant scientists tend to be the same way (sorry, personal experience speaking here - don't throw things at me lol). A computer could come up with a solution, but it may end up being like most political systems: something that has to be forced onto people in some fashion. While some would go along with it, others would fight it.
An example - and I really hate using this one, because people tell me "gee, that sounds like a good idea", and it really is not. To combat shortages, stealing, crime, etc., a computer chip could be implanted in people, like they do with pets. Such a device, if programmed properly, could be used to buy, sell, and tell others where you are and what you are doing at any given moment. Anyone breaking the law, or hoarding needed food or water or whatever, could be found immediately. Now I dare you to tell me that the whole world would go along with such a thing. I know I would not. I am too old to have fun, but I still like my privacy.
So sue me

 

