noob question about danger of AI future

Re: noob question about danger of AI future
« Reply #30 on: May 18, 2020, 09:09:23 am »
Human-level intelligence is the first step, though; you can't make some non-human AI that is more advanced without it. It involves the ability to [start] solving nearly any problem and to use a large amount of context, drawn from all sorts of [other] problems, to make a decision. In that sense, it is general by being general-purpose.

Yes, we want more than just human-level AIs. With their big memory, multiple eyes, fast thinking and moving, squads, multiple sensor types, deeper searches, duplication of adult minds, body switching, shutdown freezes, higher intelligence, etc., it will already be easy for them to be far more than human.

All intelligent organisms require self-interest, or interest for the hive (same thing, really). A huge high-technology ball floating in the sky is more powerful if all agent nodes cooperate and are aligned, so self-interest and cooperative interest together lead to a longer lifespan for the whole structure, not just for a given part of it! Imagine the future with the highest technology possible: atoms will still move around, so death will still occur, but the global structure will be nearly indestructible through instant self-regeneration, etc.

Our own civilization is already somewhat immortal: humans die, but I live longer than my cells, and my cells live longer than their molecules. See the pattern? Bigger things, if high-tech, can maintain their form, and the bigger they are, the longer they last. High tech + size = longer lifespan.
« Last Edit: May 18, 2020, 10:02:15 am by LOCKSUIT »