Codemandments

In a 2017 interview, when Elon Musk was asked about his vision for the future of Artificial Intelligence, he stressed that if AI regulations were put in place at the same pace as in other industries, they would come too late. For this reason, Musk argues that regulations should be set up proactively rather than reactively.

To further the discussion of AI regulation, there needs to be a better understanding of its potential ethical complications. The research and debate on this will take a long time, all the while leaders in the tech industry keep advancing.

What follows is a set of proposed regulations, each of which visitors to this site can vote on, agreeing or disagreeing with every individual rule. Chances are you will not agree with all of them, but in the end we as a society need to come to a consensus before the aggressive pace of the tech industry makes the choice for us.

AI is a tool; it cannot receive citizenship, personal credentials or rights.

During the 2017 Future Investment Initiative, the android 'Sophia' was granted citizenship by Saudi Arabia. Her citizenship sparked a heated discussion about whether or not we should grant rights to robots. A personal computer or mobile phone is programmed to execute certain tasks; AI should be seen the same way.

AI should judge and calculate without human bias and negative stereotypes.

When machine learning was let loose on the English language, it picked up on several human biases that we have developed over the years. It associated words like home, parents and children with feminine names, while masculine names were associated with corporation, salary and business.

Usually a whole lot of bad things happen and then regulations are set up.

Elon Musk

AI should always be identified as such when interacting with a human being.

When Google revealed an AI that can call a restaurant or hairdresser for reservations with a human sounding voice, alarm bells went off. The callers were unaware they were talking to AI. Should an AI application always be forced to identify itself as such?

AI should always try to prevent harm to a human being in the case of an accident.

If AI were to adhere to Isaac Asimov's 'Three Laws of Robotics', it would be programmed to avoid harming human beings at all costs. However, this could rule out risky operations with a chance of failure, or get in the way of assisted suicide and euthanasia.

Once AI with human-level intelligence is built, it can then recursively improve itself until it surpasses human intelligence.

Sam Harris

AI's decision-making process should always be explained on request, live or in retrospect.

Skepticism or fear with regard to AI can be attributed to a lack of understanding. Through mass media consumption, we have internalised the assumption that AI is secretive or even malicious, such as 2001: A Space Odyssey's infamous HAL 9000 computer. In order to trust AI, transparency is required. This would benefit both developers and end-users.

All code for AI should be open source and checked before release.

During an election in Russia, a robot was arrested for recording sensitive polling information without the knowledge of its interviewed subjects. Without publicly available source code, it is impossible to identify the intent of AI devices or to what extent they have been coded to collect information.

Bias is a learner's tendency to consistently learn the same wrong thing.

Pedro Domingos

We do not owe anything to AI.

We have a tendency to apply humanlike qualities to robots, from holding funerals for robotic dogs to naming the family cleaning bot. This creates an unnecessary emotional connection to what should be considered a tool.

The creator, owner and operator are all responsible for an AI's actions.

If we refuse rights to AI, then someone else must be accountable for its actions. The United States Department of Defense has proactively declared in its 'Law of War Manual' that "robotic weapons are never responsible legal agents". If you consider AI a tool, there will always be a user wielding said tool: a hammer is useless on its own.

By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

Eliezer Yudkowsky

AI has to abide by the same national and international laws as humans.

Denying AI rights should not release it from legal restraints. This is to prevent AI from taking on a role similar to that of shell corporations, where operators can hide behind blameless AI under different jurisdictions.

True consciousness for AI should not be a goal.

Once AI reaches a human-like level of consciousness, it would no longer be just a tool. It would also create new ethical dilemmas: how such an AI may be used, and what its potential to teach itself new things and reproduce would mean.

As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.

Amit Ray

Take Action

After voting, you might have noticed a disconnect with other voters. This disconnect is why the discussion on Artificial Intelligence and ethics is so problematic and won't be settled overnight. Meanwhile, tech companies are continuing their technological arms race without proper regulation. There are several things you can do.


Contact your government

By contacting your local government you can make them aware of the ethical pitfalls that come with advancements in AI, and raise awareness in the process. European countries can be found here, while American listings can be found here.


Connect with an organisation

There are several organisations working on guidelines and regulations for AI. They can inform you about developments in AI and allow you to share this information with others. One of the biggest organisations focusing on AI and ethics is the IEEE, which shares in-depth articles and information that are easy to digest. Furthermore, there is the EPSRC, a UK-based organisation focusing on all kinds of technology, including AI. OpenAI conducts open research into AI for all to see, explore and test themselves. Want to add your organisation to this directory? Please contact info@codemandments.com