War-fair

Kate Fitzgerald, Account Executive

When you remove humans from decisions that have been made by humans since the dawn of time, things are going to get a little messy.  

Last month, some of the world’s leading robotics and artificial intelligence pioneers called on the UN to ban the development and use of killer robots. Elon Musk led the cavalry of 116 global specialists who are now pleading with the UN to ban autonomous weapons. In their letter to the United Nations, they described this arms race as the ‘third revolution in warfare’, following gunpowder and nuclear arms.


Yes, it’s terrifying. No matter how brainwashed or ‘highly trained’ (call it what you want) a human soldier may be, I can’t help but think that, faced with a person holding a baby, they would reconsider shooting, even if only for a split second. The humanity appeals to the human. If you remove that humanity, what happens?

To discuss this any further, we need to establish what this ‘humanity’ consists of.

We could list the many things we believe are specific to the human experience: feeling love, creating art, and so on. But what does it essentially boil down to? Most sources say that the answer is ‘Moral Status’. But how is that determined?

18th-century philosophers explained it in terms of Sentience and Sapience. Sentience is the capacity for phenomenal experience, or qualia: the ability to feel pain and to suffer. Sapience is a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent; it is often called personhood. Even if a human brain could eventually be scanned in minute detail and thus impeccably replicated, the question of whether that mind would be sentient is borderline unanswerable.

But perhaps the ‘un-sentience’ of these forthcoming technologies is not where our gaze should be focused. Much of the online literature on this subject concentrates on their ‘non-humanity’ and less on the development of their processes of reasoning. It is paramount that we acknowledge our fear of these autonomous scenarios while also understanding that the deployment of these machines is an ethical decision made by humans. In other words, we can’t blame ‘death robots’ when they haven’t done anything yet. When we consider the idea of automating war, we are in fact considering the nature of ourselves - not our machines.

In a previous article, I explored the changing paradigm of ‘trust’ in relation to self-driving vehicles. I deduced that “part of the reason we find humans easier to trust with our lives despite the statistics of human error in comparison to technology is down to forgiveness. We can take comfort in the fact of having an individual to blame and forgive, but when entrusting technology with our lives, we can blame no one but ourselves for taking the risk.” This principle applies here too. If a self-driving car loses control and ploughs into a crowd of people because its sensors failed to recognize them, how is that any more unethical than a traditional vehicle that malfunctions at the expense of its driver? The problem is with the hardware: it is broken or badly built. Without doubt, both are tragic and there is responsibility to be taken, but in both cases the fault lies with the manufacturers and designers.

I suppose what I am trying to say is that we are dealing with two or three very different issues that the mainstream media increasingly rolls into one problem, when it is just not that simple.

Firstly, there is the overall ethics of using AI in warfare (obviously). But the arguments need to be less along the lines of ‘They have no morality and could obliterate the human race!’ and more along the lines of ‘Why do humans feel the need to create these in the first place?’. Those who see the sense in stopping this should do what they can to achieve it, but in the (quite possible) case that the use of these machines becomes inevitable, it is the processes that need to be questioned.

[Image: the trolley problem]

These machines will only respond to a situation in the manner they were taught or programmed to. The problem is that, as humans, our ethical and moral instincts are sometimes skewed by circumstance. Our determination of right and wrong often becomes more complex once emotion is mixed in. The difficulty of doing the right thing does not arise from us not knowing what it is; it comes from us being unwilling to pay the price that the ‘right’ action often demands. This is demonstrated very clearly in the widely cited ‘Trolley Problem’ (for those of you not familiar with it, here is a video for the sake of brevity), which is helpful in boiling down the main issues that may arise from autonomous weapons and robot soldiers. Yes, you can program AI to deduce the best possible option and thus outcome, but as mentioned earlier, when that decision is devoid of any external emotional factors, the possibility (however small) of sparing a life is completely removed.
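
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers, not any real weapons- or vehicle-control system) of what such a programmed decision rule looks like once every emotional factor has been stripped away:

```python
# A minimal sketch of a purely utilitarian decision rule.
# Everything here is a hypothetical illustration: the names, the
# numbers, and the idea that casualties are the only input.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: float  # the only factor this rule can "see"

def choose(options: list[Option]) -> Option:
    # Pick whichever option minimises expected casualties. There is no
    # slot here for hesitation, mercy, or a split-second reconsideration:
    # anything not encoded as a number cannot influence the outcome.
    return min(options, key=lambda o: o.expected_casualties)

trolley = [
    Option("do nothing (five people on the main track)", 5.0),
    Option("pull the lever (one person on the side track)", 1.0),
]
print(choose(trolley).name)  # -> pull the lever (one person on the side track)
```

Whatever was not reduced to a number before deployment simply does not exist for this function - which is exactly the concern.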


This, however, is not about punishment. Whether or not we are able to punish the device is actually irrelevant. It is a matter of how we respond to unethical behavior, not how we assess it. The question of whether a machine has actually done something wrong is entirely different from the issue of what we plan to do about it.

The processes need to be transparent and lend themselves to inspection. We will need the robots to be able to explain themselves at all levels of their reasoning. Scarily enough, we may find that they can do this better than we can.
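
As a sketch of what ‘lending themselves to inspection’ could mean in practice, the same kind of rule can be made to record every factor it weighed, producing an audit trail a human reviewer can examine after the fact. Again, this is a hypothetical illustration, not an existing standard:

```python
# A hypothetical sketch of an inspectable decision process: the rule
# logs every factor it weighed, so its reasoning can be audited later.

def choose_with_trace(options: dict[str, float]) -> tuple[str, list[str]]:
    """options maps each possible action to its expected casualties."""
    trace = [f"considered {name!r}: expected casualties = {cost}"
             for name, cost in options.items()]
    best = min(options, key=options.get)  # lowest expected casualties wins
    trace.append(f"selected {best!r}: lowest expected casualties")
    return best, trace

choice, log = choose_with_trace({
    "do nothing": 5.0,
    "pull the lever": 1.0,
})
print(choice)          # pull the lever
print("\n".join(log))  # the machine "explaining itself", line by line
```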

These practices and sanctions need to be established now; we are running out of time. Elon Musk et al. are on the right track: forward thinking is vital. The greatest danger to us is ourselves, and the impending ‘third revolution in warfare’ is forcing us to look at ourselves as an inherently destructive race, destined to destroy itself.

And isn’t that far scarier than a robot with a machine gun?


In part two, I will examine the ethics of AI predictably extending into the sex industry, and the implications this will have for humans and, more specifically, female humans.