AI and Ethics | Part Two

What is the future for Artificial Intelligence and what does it mean for our ethical and legal approach to it? [Part 2] 

This was an essay for the Media Technologies module at SHU. Read part one of this essay here.

As you realise just how close we are to incredible Artificial Intelligence inventions, and how far their abilities might develop, as discussed in my previous blog post, many ethical issues come to mind.

When talking about ethics with regard to AI, there are two main focuses: first, the way that ethics are built into the creation of such intelligence from the beginning, and second, the ethical approach that we should, or will, take towards these machines in the future. As this is such a large subject to cover, for the moment we will focus on the latter, by thinking about the possibilities of what is referred to as robot rights.

[Image: a white robot looking at the camera]

We might avoid or fear this conversation, especially if we still believe that we are a while away from the singularity and from highly capable, generally intelligent machines. However, the fact that this has become such an important subject lately doesn't necessarily mean that machines are reaching a certain status in our society. Dormehl (2017) believes that we should instead see it as a reflection of the ‘complex reality of the roles that they play’.

The main debate is that machines could become conscious beings, which means that carefully applied and analysed ethics must eventually be put in place, for legal and cultural reasons. It has been argued that we owe them consideration as social equals if we are to make them think, suffer or even ‘feel’ the way a living being would (Dvorsky, 2017). However, an important concern is that, with Artificial Intelligence predicted to rise above humans in intellectual as well as physical abilities, we want to avoid a future that, in some possible scenarios, won't take people into account (Ghose, 2013). Since we want to protect ourselves, but must also consider giving machines protection from us (Bidshahri, 2016), ‘robot rights’ becomes a very important topic for AI's progress and future.


Currently, our legal system seems to lag behind the newest smart technology. Legally, all actions caused by technology place responsibility on either the person using the machine or the person behind its creation (Dormehl, 2017). For example, when it comes to guns, the person using the technology is responsible for the harm done, just as when computers are used for criminal activity. On the other hand, if a piece of technology performs an action that wasn't instructed by the person using it, such as an accidental explosion or a failure in operation, the responsibility very often falls back on the maker of that technology.

Although this makes sense for older technology, the law seems to need updating when it comes to actual thinking machines, which are made to work and make decisions independently of any instructions we give them, such as a self-driving car or a human-like AI robot. It's also important to remember that it isn't just people who have been granted legal entity and personhood: corporations can hold ownership as well as legal responsibility, and we can sue a company or brand rather than the person behind it. This matters not only because it makes it seem quite logical to do the same for AI, but also because Lopucki (2017) mentions a loophole that has been found: since an AI can be put in charge of a company, it can already be given a certain legal personhood. The law, therefore, already seems open to providing certain rights or responsibilities to thinking machines.

[Embedded Instagram post: having a robotic girlfriend or boyfriend may not be so far-fetched. In 2016, a group of Chinese scientists created Jia Jia, an incredibly lifelike robot girlfriend and companion with human-like facial expressions, and Lilly, a French woman who identifies as robosexual, fell in love with InMoovator, a robot she built herself and plans to program to respond to her affections.]

Another reason in support of placing legal responsibility on high-level Artificial Intelligence is the potential struggle over who to blame for wrongdoings, if not the machine itself. David Vladeck has commented that it would be hard to identify the person responsible, as so many companies and individuals participate in the creation of these AIs; Dormehl (2017) challenges this by raising the potential issue of removing responsibility from creators, who might then be less careful when building them. Worse still, creators could deliberately take advantage of this when designing the algorithms for their machines.

[Image: a person and a robot]

However, this subject doesn't just concern the legal responsibility of AI towards society, but also the legal responsibility of society towards the machines. This refers not to a legal status but to a moral status for a machine, a concept that is not only still difficult for the contemporary world to accept, but also difficult to grasp, as it has never been specified or defined exactly what a being must possess to be granted a moral status (Bostrom & Yudkowsky, 2011). We know that Artificial Intelligence is already rising above animal intelligence and mental capabilities; we even have artificial neural networks that already contain more neurones than a honey bee or a cockroach possesses (Dormehl, 2017). As we grant animals certain rights, in terms of ethical treatment and consideration, we could argue that, as AI surpasses their intelligence and possibly their consciousness, machines deserve consideration for the same level of rights and treatment (Bossmann, 2016). However, if we base their rights on their level of intelligence and consciousness, then once they reach the singularity and pass our level of intelligence, just as they are now passing that of animals, would we give them the same rights as humans? Following that rule, they would eventually be granted more rights than humans. As this is exactly what we're trying to avoid, since their purpose of creation is the benefit of humans, this would be a failed system.

This is especially true given that one of the main issues with granting robots any rights at all, even in protection of their dignity or existence, is the risk of the machines turning against us. Prominent figures such as Stephen Hawking, Elon Musk and Bill Gates all regard AI reaching human level as a threat to the existence of our species (Lopucki, 2017). It makes sense that, if a being is considered more dangerous than a human, it is crucial to put measures in place to ensure it always serves humans and remains under control. If we treated machines as moral patients, as Dormehl (2017) suggests, protecting their integrity and existence as well as human attachments to them, we would create many safety concerns, as the law might grant them protection that we would want to withhold in case of danger to society.


Overall, if we give AI consciousness and the ability to exist independently, it is clear that every possible legal or ethical benefit machines might deserve comes not just with opposition, but with a threat. It's not enough to settle on an ethical approach towards Artificial Intelligence based on where it stands now; as it is constantly developing, our approach needs to progress alongside it. This is why considering the future, timescale and speed of AI's progress is so crucial: any rights and ethics put in place now need to take the future into account. That means thinking not only about what is possible for AI, but also about what we cannot yet imagine or comprehend, as technology can progress faster than expected and, without that preparation, we might face serious consequences even before the singularity is reached.


Bibliography:

Anderson, M., & Anderson, S. L. (2011). Machine Ethics. Cambridge University Press.

Bidshahri, R. (2016). If machines can think, do they deserve civil rights? Singularity Hub. [Online Article] Retrieved from https://singularityhub.com/2016/09/09/if-machines-can-think-do-they-deserve-civil-rights/#sm.0001u3nvkil9id6atzl1x5c1vmdj9

Blade Runner (1982). Michael Deeley (Producer). Ridley Scott (Director). Warner Brothers. [DVD]

Bossmann, J. (2016). Top 10 ethical issues in artificial intelligence. World Economic Forum. [Online Article] Retrieved from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. [Draft for the Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith Frankish (Cambridge University Press, 2011)] Retrieved from https://nickbostrom.com/ethics/artificial-intelligence.pdf

Brynjolfsson, E., & McAfee, A. (2017). The business of artificial intelligence. Harvard Business Review. [Online Article] Retrieved from https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence

Dormehl, L. (2017). I, Alexa: Should we give artificial intelligence human rights? Digital Trends. [Online Article] Retrieved from https://www.digitaltrends.com/cool-tech/ai-personhood-ethics-questions/

Ghose, T. (2013). Intelligent robots will overtake humans by 2100, experts say. LiveScience. [Online Article] Retrieved from https://www.livescience.com/29379-intelligent-robots-will-overtake-humans.html

Hunt, D. G. (2016). The future of artificial intelligence and ethics on the road to superintelligence. [Online Article] Retrieved from http://www.whyfuture.com/single-post/2016/07/01/The-future-of-Artificial-Intelligence-Ethics-on-the-Road-to-Superintelligence

IEEE Spectrum (2017). Human-level AI is right around the corner – or hundreds of years away. [Online Article] Retrieved from https://spectrum.ieee.org/computing/software/humanlevel-ai-is-right-around-the-corner-or-hundreds-of-years-away

Lacoma, T. (2017). Demystifying artificial intelligence: Here's everything you need to know about AI. Digital Trends. [Online Article] Retrieved from https://www.digitaltrends.com/cool-tech/what-is-artificial-intelligence-ai/

Lu, C. (2016). Why we are still light years away from full artificial intelligence. TechCrunch. [Blog Post] Retrieved from https://techcrunch.com/2016/12/14/why-we-are-still-light-years-away-from-full-artificial-intelligence/

Sharma, K. (2017). We're all getting played by Sophia the robot. Fortune. [Online Article] Retrieved from http://fortune.com/2017/10/27/sophia-the-robot-artificial-intelligence/

Solomonoff, R. J. (1985). The time scale of artificial intelligence. Human Systems Management, 5.

WhyFuture (2017). The future of artificial intelligence and ethics on the road to superintelligence. [Video file] Retrieved from https://www.youtube.com/watch?v=8Ja_7Fx2MmU

 
