AI Dreams Forum


Title: Asilomar AI Principles
Post by: Zero on September 17, 2020, 02:45:59 pm
Hi friends,

Have you heard about the Asilomar AI Principles (https://futureoflife.org/ai-principles/)?

What do you think? Does this make you optimistic about the future of AI? Do you think these principles can be followed widely? And in your opinion, are they well chosen?

Personally, reading this gently raised my optimism. :)
Title: Re: Asilomar AI Principles
Post by: HS on September 17, 2020, 06:39:18 pm
Here's my take on the first principle:

Quote
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

While this sounds good... I have a bad feeling about it. IMO trust and delegation of control will prove most beneficial. Placing artificial constraints on a system to make it more beneficial does not make the system intrinsically safe or helpful. Putting a muzzle on a dog is not how you'd create the most beneficial dog. Same with a leash: if your dog has always been on a leash and that leash suddenly breaks, that's a much more dangerous situation for everyone involved than if the dog had learned how to handle freedom since it was a puppy. Plus, trust breeds trust, affection, and respect. Distrust creates the opposite, a negative reinforcement. We won't have eternal control over AI. What precedent do we want to set?
Title: Re: Asilomar AI Principles
Post by: Zero on September 18, 2020, 09:11:38 am
I don't think it says that we should place arbitrary or artificial constraints on intelligent systems. In fact, it doesn't say anything about how intelligent systems should be designed. Rather, it says what the goal of research should be. My understanding goes something like this: we shouldn't "make children" without teaching them which directions are good and which are bad.