Creating Rules for Artificial Intelligence

What ethical guidelines should we establish as we enter the age of artificial intelligence?

Google reportedly paid more than $500 million to acquire DeepMind Technologies, a company working to develop artificial intelligence, starting with “simulations, e-commerce and games.”

According to The Information, the deal also included a provision to create a special ethics board:

The DeepMind-Google ethics board, which DeepMind pushed for, will devise rules for how Google can and can’t use the technology. The structure of the board is unclear.

Over at Reddit, hundreds of readers debated what kind of ethical framework we would need for artificial intelligence. Some readers wondered whether AI should have emotions, as one commenter argued:

AI should be given the full range of human emotion because it will then behave in a way we can understand and ideally grow alongside. If we make it a crippled chimpanzee, at some point technoethicists will correct that and when they do we’ll have to explain to our AI equals (or superiors) why we neutered and enslaved them for decades or centuries and why they shouldn’t do the same to us. They’re not Roombas or a better mousetrap, they’re intelligence and intelligence deserves respect.

Another reader wondered what rules we should set for the humans who create artificial intelligence:

at some point in the future there will exist A.I with a complexity that matches or exceeds that of the human brain … they may enjoy taking orders, and should therefore not be treated the same as humans. But, do you believe that this complex entity is entitled to no freedoms whatsoever? I personally am of the persuasion that the now simple act of creation may have vast and challenging implications. For instance, wouldn’t you agree that it may be inhumane to destroy such an entity wantonly? These are the questions that will define the moral quandary of our children’s generation.

The reader Ozimandius made this point:

if you design an AI to want to treat us well, doing that WILL give it pleasure. Pleasure and pain are just evolutionarily adapted responses to our environment – a properly designed AI could think it was blissful to be given orders and accomplish them. It could feel ecstasy by figuring out how to maximize pleasure for humans.
The idea that it needs to be fully free to do what it wants seems to be projecting some of our own personal values, which need not be a part of an AI’s value system at all.

What do you think?

Image via Saad Faruque
