Back in 1976, computer scientist Joseph Weizenbaum expressed serious reservations about Artificial Intelligence (AI) replacing certain functions in society. In his influential book ‘Computer Power and Human Reason’, Weizenbaum argued that certain positions – including those of soldiers, police officers, judges, and therapists – require authentic feelings of empathy, compassion, and wisdom. In this context, AI would never be an adequate replacement for the human touch.
Central to his argument is the distinction between deciding and choosing. The former, he determined, can be programmed, while the latter requires judgement – a far more intangible quality than mere calculation. Choosing requires emotion and an ability to think laterally that only the human mind can provide.
Driverless cars, perhaps currently the world’s most advanced foray into AI, are now bumping into these kinds of worries. With the technology close to being perfected, manufacturers are no longer being asked whether we can build automated cars, but whether we should.
In fact, there is reason to believe that, from an ethical standpoint, self-driving cars are a no-brainer. A recent report from McKinsey & Company noted that autonomous vehicles could reduce road accidents by up to 90 per cent – a huge saving not only in terms of human life but also in resources for emergency services.
In 2014 alone, more than 25,000 people died on the roads of the European Union. Cutting this by 90 per cent would have saved a staggering 22,500 lives.
Of course, there is still the remaining 10 per cent of deaths – and with the involvement of driverless cars, these would be fraught with ethical dilemmas. Here are two such widely discussed scenarios:
Scenario One:
Imagine you are sitting in a brand new, fully autonomous vehicle. You’re on your way to work, coffee in hand and flicking through the morning paper while the vehicle cruises at 70mph down a three-lane motorway. You’re in the middle lane.
Eventually, the car finds itself boxed in behind a self-driving lorry. To the left is a first-generation autonomous vehicle; now rather old and a little unbalanced, it has in recent years been proven far less safe than its modern counterparts. To the right is another autonomous vehicle – a more modern model, named by ‘Top Gear Magazine’ as the year’s safest car. Behind you is a motorcyclist riding a non-autonomous bike, and she is not wearing a helmet.
Suddenly the back door of the lorry breaks open and a crate falls out. Your car calculates that the crate will hit its bonnet in just two seconds. The speed of the incident means there is no question of you reacting in time to take over the car’s controls manually. Your car has several options:
- Stop suddenly
- Swerve left into the old-fashioned and unsafe car
- Swerve right into the reinforced side of the modern vehicle
- Continue forward and hit the crate
The first choice isn’t really an option: it would endanger the life of the motorcyclist behind, and given her lack of a helmet, the collision would almost certainly prove fatal. Running straight into the crate is not much of an option either, as the car would likely spiral out of control and take out every vehicle in the scenario. So it comes down to a choice between swerving left or right. To have the best possible chance of preserving life and causing minimal injury, the obvious choice seems to be for the car to hit the modern, safety-conscious vehicle to its right. However, this option essentially penalises road users for choosing the safest car on the market, risking their lives – a counterintuitive and deeply unsustainable outcome.
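The reasoning above amounts to ranking the four options by estimated harm and picking the least bad one. A minimal sketch of that calculation – where every option name and risk figure is a hypothetical stand-in, not anything a real manufacturer has published – might look like this:

```python
# Hypothetical fatality-risk estimates for the four options in scenario one.
# The numbers are illustrative only; they encode the article's reasoning
# (stopping endangers the helmetless motorcyclist, the crate risks losing
# control, the reinforced modern car absorbs an impact best).
options = {
    "stop_suddenly": 0.9,   # motorcyclist behind, no helmet
    "swerve_left": 0.5,     # older, less safe vehicle
    "swerve_right": 0.1,    # modern, reinforced vehicle
    "hit_crate": 0.7,       # likely loss of control
}

def least_harmful(option_risks):
    """Return the option with the lowest estimated fatality risk."""
    return min(option_risks, key=option_risks.get)

print(least_harmful(options))  # swerve_right
```

The point of the sketch is that the “choice” reduces to a deterministic minimisation – exactly the pre-programmed decision, rather than human judgement, that the article goes on to question.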
Scenario Two:
Alternatively, imagine that once again you are a passenger in your autonomous vehicle, this time speeding through the narrow winding lanes of the countryside. At the last moment a falling tree blocks the road; your car is going too fast to stop and calculates that should it hit the obstacle, the impact will be fatal for its passenger – you!
Here you have two options: swerve left or swerve right. Unfortunately, both present the potential for injury. On the left is a schoolyard filled with children, while on the right is a home for elderly residents. In this instance, the car is not simply weighing up the potential for injury; it is essentially evaluating the value of various human lives and factoring that into the equation. Are we really happy to hand over life-or-death decisions to an unfeeling machine?
When a human is behind the wheel, the decision is made on gut instinct, meaning that blame cannot be assigned for the split-second choice itself (though it may still be assigned to an earlier mistake on the driver’s part). This cannot be the case with driverless cars, where a pre-configured algorithm means the decision for each scenario has already been made. There is scope for a legal argument that any injury or death resulting from an autonomous vehicle is a ‘pre-meditated’ act. And who knows where the finger of blame should be pointed: at the vehicle’s owner, its manufacturer, or its operator?
With concerns over what might be termed ‘pre-meditated manslaughter’, one solution might be to have the car’s actions selected randomly in these scenarios – essentially adding human error back into the equation. Though this in turn leads to worries over whether lives could have been saved had a different outcome been generated. Think back to scenario one: although veering right would be a traumatic, and seemingly unfair, experience for those who opted for safety as a primary factor, it is the option least likely to result in a fatality. Should the car opt to stop, and the motorcyclist be killed, there would be legitimate questions to answer as to why this death had occurred.
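The random-selection proposal is easy to sketch. Assuming hypothetical option names (nothing here reflects any real vehicle’s software), the car would draw one of the viable evasive actions by chance instead of always executing the pre-computed least-harmful one:

```python
import random

def choose_action(viable_options, seed=None):
    """Pick one viable evasive action at random, mimicking the
    unpredictability of a human driver's split-second reaction."""
    rng = random.Random(seed)  # seedable only to make the sketch testable
    return rng.choice(viable_options)

# The car no longer commits in advance to a single outcome.
action = choose_action(["swerve_left", "swerve_right"])
print(action)
```

As the article notes, this dissolves the ‘pre-meditation’ problem only by reintroducing the very arbitrariness – and the possibility of an avoidable death – that the algorithmic approach was meant to eliminate.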
It’s a difficult issue, but with organizations continuing to accelerate investment – including Volvo’s plans to test self-driving cars across China – it is one on which we need a firmer steer.