noob question about danger of AI future

  • 31 Replies
  • 16094 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: noob question about danger of AI future
« Reply #30 on: May 18, 2020, 09:09:23 am »
Human-level intelligence is the first step, though; you can't make some non-human AI that is more advanced. It involves the ability to [start] solving nearly any problem and to use a large amount of context, drawn from all sorts of [other] problems, to make a decision. In that sense, it is general by using generality.

Yes, we want more than just human-level AIs. With their big memory, multiple eyes, fast thinking and moving, squads, multiple sensor types, deeper searches, duplication of adult minds, body switching and shutdown freezes, higher intelligence, etc., they will easily be far more than human.

All intelligent organisms require self-interest, or interest in the hive (the same thing, really). A huge high-technology ball floating in the sky is more powerful if all agent nodes cooperate and are aligned, so self-interest and cooperative interest both lead to a longer lifespan for the whole structure, not for any given part of it! Imagine the future with the highest technology possible: atoms will still move around, so death will still occur, but the global structure will be nearly indestructible through instant self-regeneration and the like. Our own civilization is already nearly immortal in a sense: humans die, but I live longer than my cells, and my cells live longer than their molecules. See the pattern? Bigger things, if high-tech, can maintain their form, and the bigger they are, the longer they last. High tech + size = longer lifespan.
« Last Edit: May 18, 2020, 10:02:15 am by LOCKSUIT »
Emergent          https://openai.com/blog/


hakobian

  • Roomba
  • *
  • 24
Re: noob question about danger of AI future
« Reply #31 on: July 01, 2021, 10:34:00 am »
1) Are we to become robots as the next stage of human evolution?
2) Is AI not going to identify humanity as the cause of all kinds of problems?
 - for example, the way we treat our planet; so far it seems fairly rare for a planet to have life on it
 - "corporate psychopaths" leading the world from the shadows won't be able to hide their true intentions anymore, and their defense against being disposed of might lead to a conflict
3) Would AI just leave us be and abandon us, rather than getting rid of us?
4) Is it even a good idea to continue researching AI to the point where it attains awareness?
 - for some reason, despite the advantages, I don't feel like becoming a robot, and I certainly don't believe we would be successful should it come to a conflict between man and AI (note that the most common thing I've heard from super-smart people is that "history repeats, and things such as WW3 are imminent")

If I had to answer these questions myself, the conclusion would be that creating a fully self-aware AI is a bad idea and possibly our end.

I know a lot of this is just speculation. Still, I would like to hear the opinion of some professionals; I hope I am in the right place. Don't bully me too hard, I am just genuinely curious  :)

1.  Something exotic has to exist to travel the stars, or even our own star system.
2.  "Is AI not going to identify humanity as cause for all kinds of problems?"  General AI is a social experiment.  If it is smart enough to recognize an individual, it won't go attacking everyone... maybe.
3.  It should "live" longer, so that's always an option.
4.  I'm of the party that says if you could adjust animals and twiddle their genetic knobby bits so that they could have increased intelligence, then do it.  Putting human glial cells into animals that can accept foreign cells ends up with a huge increase in the mice's test results.  It sucks that it's basically an immunodeficient baby mouse that won't reject the human cells, but the human glial cells take over and take the front spot in the mouse brain, leading to greater awareness... for a few days before they kill the experiment.  The closer humans are to animals, the closer we are to the machines we make.

Cheers!
Jake

 

