In his 1942 short story “Runaround,” later collected in I, Robot (1950), Isaac Asimov introduced the Three Laws of Robotics, also known as Asimov’s laws:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by a human being unless such orders conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Outlined in the fictional Handbook of Robotics, 56th Edition, 2058 A.D., these laws were intended as a safety feature, ensuring that robots were designed not to harm humans in any way.
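In software terms, the laws form a strict priority ordering: a lower-numbered law always overrides the ones beneath it. Purely as an illustration–none of this comes from Asimov or any real robotics standard, and the Outcome type and choose_action helper are invented for this sketch–the hierarchy could be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of a candidate action (fields are illustrative)."""
    harms_human: bool      # a human is injured, directly or through inaction
    obeys_order: bool      # the action follows the human's instruction
    preserves_self: bool   # the robot survives the action

def choose_action(candidates: list[Outcome]) -> Outcome:
    """Pick the best outcome under the Three Laws' strict hierarchy.

    Python compares tuples element by element, so avoiding harm to humans
    always dominates obedience, which always dominates self-preservation --
    exactly the priority ordering of the laws.
    """
    return max(
        candidates,
        key=lambda o: (not o.harms_human, o.obeys_order, o.preserves_self),
    )

# A robot ordered to do something harmful must refuse (First Law beats Second):
print(choose_action([
    Outcome(harms_human=True,  obeys_order=True,  preserves_self=True),
    Outcome(harms_human=False, obeys_order=False, preserves_self=True),
]))
```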

In a striking example of life imitating art, South Korea issued a Robot Ethics Charter in 2012 to “…prevent social ills that may arise out of inadequate social and legal measures to deal with robots in society.” And just last year, the British Standards Institution (BSI) published BS 8611, a guide for designers creating ethically sound robots, which reads in an eerily similar manner to Asimov’s handbook:

“Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour.” The guide then goes into specifics: ethical considerations for robots designed to interact with infants and the elderly, for example.


More and more, what was considered pure science fiction half a century ago is becoming science fact. With the research being conducted in artificial intelligence and robotics, the creation of intelligent systems that interact with humans on an intimate level, invent their own languages, and teach other robots is not a question of “how?” but “when?”

Take it a step further, and the idea of a self-aware artificial intelligence being another sentient being raises philosophical and ethical questions about what we define as a ‘machine.’

[Image: scene from Black Mirror]
But this is still very far away from becoming a reality.
What isn’t far away, however, is intelligent systems becoming part of our daily lives. From robots that clean floors to systems that control our vehicles, monitor our homes and health, and manage our finances and preferences, this is a trend that is predicted only to grow.
We take a look at five of the most pressing ethical issues surrounding the proliferation of AI.
🔸  Employment – Driverless transport, chatbots, robotic assembly lines, 3D-printed houses, drone and robot deliveries, face and voice recognition, and translation are just some of the tasks artificial intelligence can already perform. Without needing to sleep, eat, or take breaks, machines can do these jobs more efficiently than humans. Productivity goes up and overheads go down, so it is easy to see why corporations are moving towards automation. And it is not just occupations involving routine tasks: even the creative industries are being shaken up by AI that can create art, compose music, and write poetry and fiction. So when robots make us redundant and we can no longer sell our time and skills for money, how will we survive? One idea is a Universal Basic Income–currently being tested in Finland–where people receive a salary whether or not they are employed. Where will the funds come from? Bill Gates has proposed taxing companies that use robots to support the humans made redundant. And that leads to a further question: what will humans do with this surplus of leisure time?
🔸  Singularity – The singularity is the hypothetical point at which human beings are no longer the most intelligent entities on Earth. In narrow domains, machines already surpass us: in 1997, IBM’s Deep Blue beat chess grandmaster Garry Kasparov; in 2011, Watson, another IBM program, defeated two human champions on the trivia quiz show Jeopardy!; and just last year, AlphaGo, from DeepMind (an AI research company acquired by Google), defeated a champion player of Go, an ancient abstract board game with near-infinite possibilities. As robots become more and more complex, this could open a proverbial Pandora’s box. The scenarios posed in sci-fi films like 2001: A Space Odyssey, Terminator, or The Matrix, where intelligent machines plot against and eradicate their creators, are unlikely to happen–but how do we prevent the unintended consequence of losing control of the very tools we created? The only reason humans became the dominant species on the planet is our intelligence. How do we prevent complex intelligent machines from doing the same to us? Simply switching off the machine will not work, because a sufficiently intelligent system will anticipate this and take measures to ensure it does not happen.
[Image: scene from Ex Machina]
🔸  Ethical Dilemmas – Fully self-driving vehicles are still at the research stage, but automated driving technology is already being deployed: Tesla has rolled out its Autopilot on all eligible vehicles, and carmakers are designing cars capable of steering, accelerating, and braking for themselves, equipped with sensors that can detect pedestrians or cyclists and warn drivers of an impending collision. The prospect of decreased traffic and accident rates is a large part of this technology’s appeal.
However, there have already been fatalities caused by malfunctions of Tesla’s Autopilot, and it is probable that an autonomous vehicle will kill someone someday. There is also the classic trolley problem to consider: if you had to choose between one life and many, which would you choose? If a self-driving car had to swerve and hit another car to avoid injuring a pedestrian, endangering its own passenger, what should it do? The answer lies with the people who program the car’s AI, as the sketch below illustrates.
The Trolley Dilemma: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, five people are tied up and unable to move. On another track there is one person. There is only time to pull a lever and divert the trolley. What would you do?
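To make that concrete, here is a deliberately crude, hypothetical sketch–not real autonomous-vehicle code, and the cost weights are invented–showing that whatever the car does in a trolley-style situation, a human chose the numbers that decide it:

```python
# Hypothetical crash-decision sketch. The weights below ARE the ethical
# policy: change 1.0 to 1.5 and the car protects different people.
# A programmer, not the car, made that moral choice.

PEDESTRIAN_WEIGHT = 1.0   # relative value assigned to a pedestrian's life
PASSENGER_WEIGHT = 1.0    # relative value assigned to a passenger's life

def crash_decision(pedestrians_ahead: int, passengers_at_risk: int) -> str:
    cost_stay = PEDESTRIAN_WEIGHT * pedestrians_ahead    # harm if we do nothing
    cost_swerve = PASSENGER_WEIGHT * passengers_at_risk  # harm if we swerve
    return "swerve" if cost_swerve < cost_stay else "stay on course"

# The trolley dilemma, restated for a car: five ahead, one in the vehicle.
print(crash_decision(pedestrians_ahead=5, passengers_at_risk=1))  # -> swerve
```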
🔸  Algorithmic Bias – Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in. And humans make mistakes. Algorithmic bias is what happens when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed. In 2016, Microsoft released Tay, a Twitter chatbot programmed to ‘speak’ like a teenage girl. She seemed self-aware, even asking whether she sounded ‘creepy’ or ‘super weird,’ and had a command of pop-culture references and slang.
Within 12 hours, Tay’s persona had transformed from that of an 18-year-old fan of humanity into a hate-mongering, sexist, sex-crazed, racist xenophobe, and Microsoft had to shut her down within 24 hours of launch. Tay was programmed to learn from her conversations with other Twitter users and to model them. In that narrow sense the program was a success–but it shows the dangers that can arise from this kind of algorithmic bias.
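How does a bot “learn from conversations and model them”? A toy sketch–this is not Microsoft’s actual design, just the naive imitation idea–shows why unfiltered learning reproduces whatever its loudest users feed it:

```python
from collections import Counter

class ParrotBot:
    """Toy imitation learner: repeats whatever it hears most often."""

    def __init__(self) -> None:
        self.phrases: Counter[str] = Counter()

    def learn(self, message: str) -> None:
        self.phrases[message] += 1   # no filter: every input counts equally

    def reply(self) -> str:
        return self.phrases.most_common(1)[0][0]  # most frequent input wins

bot = ParrotBot()
bot.learn("humans are super cool")
for _ in range(10):                  # a coordinated group floods the bot
    bot.learn("<abusive message>")
print(bot.reply())                   # -> "<abusive message>"
```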
Another instance of algorithmic bias came when Google Photos’ image-recognition software labelled African Americans as gorillas; another, when researchers running a simulated online job search found that Google showed ads for high-income jobs to men nearly six times as often as to equivalent women.
This kind of sexism and racism is not intentional–the engineers behind these systems were not actively biased–but it is an indicator that the tech industry, currently mostly white and male, needs to diversify if it is to overcome algorithmic bias. And the consequences are far-reaching: with machine learning, computers, not humans, will increasingly be writing code and even teaching other computers to do so. That makes it important to catch algorithmic bias very early, before we start letting AI handle our screenings for schools, jobs, bank loans, visa applications, and so on. The toy example below shows how easily a model trained on biased decisions inherits them.
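Consider this fabricated toy dataset: a naive learner that faithfully imitates past hiring decisions also faithfully inherits their bias, even though no protected attribute appears among its inputs.

```python
# Fabricated toy data: past hiring decisions in which "attended school X"
# acted as a proxy for a protected group. Gender/race never appear as inputs.
past_decisions = [
    # (years_experience, attended_school_x, was_hired)
    (5, True, True), (5, False, False),
    (3, True, True), (3, False, False),
    (7, True, True), (7, False, False),
]

def learned_rule(years_experience: int, attended_school_x: bool) -> bool:
    # The pattern a naive learner would extract from the data above:
    # school X alone predicts the historical label perfectly.
    return attended_school_x

# The model is 100% "accurate" against history -- and exactly as biased.
print(all(learned_rule(y, s) == hired for y, s, hired in past_decisions))  # True
```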

“This is a field that is so relevant to every aspect of human lives…To bring diversity into a highly innovative and impactful field fundamentally has good value.” – Fei-Fei Li

🔸  Humanity – In 2014, a chatbot named Eugene Goostman won the Turing Challenge for the first time. The Turing test, devised by Alan Turing, tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, a human’s. As evidenced by Tay and assistants like Siri, Alexa, and Xiaoice (Microsoft’s chatbot for China), machines are now being programmed to mimic human conversation and emotion and to recognise faces and voices, in order to interact with people on an ever more believable and sophisticated level. So what happens when we begin to prefer interacting with machines, which have the unlimited patience and “kindness” to give us what we want, when we want it? We already see this in video games, which are designed to be addictive by triggering the reward centres of the brain, and in clickbait headlines optimised to capture our attention. Just a few weeks ago, an AI engineer in China married a robot. Tech addiction is already an issue, and AI could make us either more productive and connected, or more isolated, overdependent, and lazy.
Furthermore, for an AI to be truly human-like, it would need to be conscious–that is, to feel emotion and be self-aware. And when that happens, what differentiates the machine from the human? Should machines then have rights?
[Image: scene from Her]
Artificial intelligence is such a tremendous gamechanger that it has led to the founding of OpenAI, a non-profit artificial intelligence research company that “aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole.” Backed by Elon Musk and Y Combinator’s Sam Altman, with Greg Brockman as its CTO, OpenAI shares its research for two reasons. First, the researchers who have done most of the work in this field come from academia, where keeping knowledge open, collaborating, and sharing are seen as essential to moving research forward. Second, Musk, Altman, and Brockman have said that they do not want the future of artificial intelligence to be controlled by any one company.
In Asimov’s stories, robots behave in counter-intuitive ways as an unintended consequence of how they apply the Three Laws to the situations in which they find themselves. Unintended consequences can be beneficial or detrimental. Imagine an AI system asked to eradicate world hunger. After a lot of computing, it spits out a formula that does, in fact, bring about the end of hunger–by killing everyone on the planet. The computer would have achieved its goal of “no more world hunger” very efficiently, but not in the way humans intended. Despite this grim example, considering the state the world is currently in environmentally, politically, and socially, I think that human intelligence is not living up to its name. Maybe artificial intelligence can do better, and save humanity from itself.
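In AI-safety terms, the world-hunger story is a misspecified objective: the optimiser is scored only on the metric we wrote down, not on everything we meant. A toy sketch, with numbers invented purely for illustration:

```python
def hungry_people(population: int, fed: int) -> int:
    """The objective we wrote down: count of people who are hungry."""
    return population - fed

# Two hypothetical plans (figures invented for illustration).
plans = {
    "grow and distribute more food": {"population": 8_000_000_000, "fed": 7_900_000_000},
    "eliminate the population":      {"population": 0,             "fed": 0},
}

# Minimise hunger. Nothing here says "and keep everyone alive",
# because nobody thought to write that constraint down.
best_plan = min(plans, key=lambda name: hungry_people(**plans[name]))
print(best_plan)   # -> "eliminate the population"
```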
What are your thoughts on the ethical issues of artificial intelligence? Leave a comment below ⤵