Welcome to our dystopian future. If the ongoing research, announcements and investment are anything to go by, one could be forgiven for thinking that any day now we might be able to hop into an autonomous vehicle, close the door and be whisked away to the destination of our choice in safety and comfort. Ok…the safety part might still be a bit questionable, but they're totally working on it…promise.
With these horizons looming closer, it behoves us to take a moment to consider some of the implications of where these first tenuous steps might be leading and how we plan to get there. This brings us to the real conundrum I have with driverless cars, and it's not about their technical capabilities.
Ultimately, the mechanics of autonomous vehicles are just that: an application of technology. And if there is one thing we have seen over recent decades, it's that technological solutions are not so much a question of whether they will happen but of how long they will take. Much like virtual reality, touch screens and folding displays, these things take time to evolve, faster or slower depending on market demand, but if there is significant enough interest then the technological development is all but inevitable.
Putting the host of sensors, design, development and testing aside, what really attracts my scrutiny is the question of ethics. Now, before you mention automation and the loss of jobs, let me stop you there. Autonomous cars as they relate to automation and employment are a whole separate discussion, and debating what could be considered the natural progression of the entire industrial movement is too broad in scope for the brief consideration this article can offer. No, of immediate interest to me for the moment is the ethics of decision making.
My quandary with decision making is two-fold. The first part is the question of responsibility for a decision made by a self-driving car. Let me propose a scenario: a second-hand ethical trolley problem of sorts. A young girl steps out from behind a parked car and directly onto the street in front of an autonomous car. In the oncoming lane, a motorbike carrying an adult rider and a teenage passenger is closing fast. The numbers are crunched: there is insufficient time to stop and nowhere else to go. Do you continue straight, which will result in the death of the young girl, or veer into oncoming traffic, which will result in the death of the people on the motorbike?
Well? The answer, of course, is neither. From the back seat, all you can do is watch as your car ploughs through whichever of the unlucky candidates it happened to decide was somehow mathematically less awful. Or perhaps the decision is predicated entirely on the best outcome for the owner of the vehicle; after all, you're the one who paid for it, right? Whichever way the scenario plays out it's a tragedy, so it's not much of a decision, and you're probably just glad that at least you weren't the one who had to make it.
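To make "mathematically less awful" concrete, here is a deliberately crude sketch of how such a rule might be encoded. Everything in it is hypothetical: the options, the harm scores and the `occupant_bias` knob are mine, not anything a real manufacturer has published, but they show how easily a single weighting can quietly tilt the outcome toward the person who paid for the car.

```python
# Purely illustrative: hypothetical options, harm scores and weighting.
# No real vendor has published decision rules like these.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_fatalities: float  # crude estimate of third-party deaths
    occupant_risk: float        # risk to the car's own passengers, 0..1

def least_awful(options: list[Option], occupant_bias: float = 0.0) -> Option:
    """Pick the option with the lowest weighted harm score.

    occupant_bias > 0 tilts the choice toward protecting the buyer,
    which is exactly the commercial pressure discussed below.
    """
    return min(options, key=lambda o: o.expected_fatalities
               + occupant_bias * o.occupant_risk)

options = [
    Option("continue straight", expected_fatalities=1.0, occupant_risk=0.05),
    Option("swerve off the road", expected_fatalities=0.0, occupant_risk=0.7),
]

print(least_awful(options).name)                     # "swerve off the road"
print(least_awful(options, occupant_bias=2.0).name)  # "continue straight"
```

The point is not the code, which is trivial. The point is that somebody has to choose those numbers, and whoever does is making the ethical decision in advance, on your behalf.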
Now, the sticky part arrives a little later, when the time comes to begin ascribing fault. Who then bears the responsibility for making the choice? Because you can be sure the courts are going to want to know, even if the rest of us haven't considered it just yet. The company that built the vehicle? The programmer who created the car's decision-making rules? The owner of the vehicle (after all, if he'd picked a smarter or more expensive car, then maybe this wouldn't have happened, right)? Or maybe it's the car itself that will need to bear that guilt, regardless of how remorseful it feels after the fact.
And now we come to the second part of my ethical concern with decision making. Autonomous cars, and the systems that govern how they operate and make decisions, are designed and implemented by companies whose ultimate goal is to sell those products. The mismatch here is that companies as a whole, whether or not they happen to be altruistic, are driven by commercialism rather than the betterment of mankind or carkind (vehiclekind?). Not only that, but we are talking about the industry that so famously, back in the 1970s, gave us the Ford Pinto: a cost-benefit analysis weighed the cost of fixing a known engineering issue against the cost of injury claims and reparations, and the injuries, deaths and payouts came out as the more palatable option.
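The arithmetic behind that kind of decision is depressingly simple. The figures below are the ones widely cited from Ford's 1973 internal memo; treat them as reported values for illustration rather than verified data.

```python
# Cost-benefit arithmetic as widely reported from Ford's 1973 memo.
# Figures are the commonly cited ones, not independently verified.
units = 12_500_000          # affected cars and light trucks
fix_cost_per_unit = 11.00   # dollars per vehicle to remedy the fuel-tank issue

deaths, injuries, burned_vehicles = 180, 180, 2_100
cost_per_death, cost_per_injury, cost_per_vehicle = 200_000, 67_000, 700

cost_of_fixing = units * fix_cost_per_unit
cost_of_paying = (deaths * cost_per_death
                  + injuries * cost_per_injury
                  + burned_vehicles * cost_per_vehicle)

print(f"Fix every vehicle: ${cost_of_fixing:,.0f}")  # $137,500,000
print(f"Pay the claims:    ${cost_of_paying:,.0f}")  # $49,530,000
```

On those numbers, paying out was roughly $88 million cheaper than fixing the cars, and that is the calculation that won.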
And before you decry my example as too old to be relevant and claim that companies have since learnt their lesson, take a moment to remember that VW's Dieselgate decision wasn't exactly the pinnacle of ethical choices, was it? The reality is that car manufacturers didn't learn altruism from these events; what they took away was that they needed to factor the loss of public image into their risk calculations. And ultimately these are the people who will be laying the foundation for how an autonomous car should make its decisions.
And so we arrive at the crux of the problem: how do you buy an autonomous vehicle when nobody knows who to blame for its decisions, and the only thing you do know with certainty is that the people teaching those vehicles ethics have no understanding of what ethics are to begin with?