Here’s What The Experts Have To Say
Will self-driving cars be programmed to make moral decisions?
If a family of four is in a self-driving car and a distracted pedestrian steps into the road, should the car be programmed to swerve and possibly crash to avoid the pedestrian? What if there’s only one person in the car and it’s a group of schoolchildren who step into the road?
How a vehicle should weigh the greater good in its decisions was the subject of a recent study outlined in Science Magazine. The study’s authors found that:
“Even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles. … Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.”
How will the morality of self-driving cars be legislated?
The law clearly must evolve with new technology. For example, as smartphones posed new types of safety risks to drivers, many states enacted laws on texting while driving. Thus, we can expect new laws to address issues raised by autonomous vehicles.
But how far will those laws go, and will they put buyers of self-driving cars in the hot seat? The authors of the study in Science Magazine raise an interesting point:
“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”
How much control should be given up to a machine?
How these autonomous vehicles (AVs) should process information in order to make decisions is also up for debate. Experts are split on whether an AV should make decisions based on simple logic or on “deep learning.”
The logic approach involves programming the machine with the many driving rules humans are supposed to abide by, such as stopping at red lights and going on green. But we all know humans are adaptable and can bend or break rules depending on the situation. That’s where deep learning comes in.
Deep learning involves feeding the machine countless scenarios and allowing it to detect patterns and make decisions based on them. That approach, however, hands a great deal of power to the machine and makes it nearly impossible to trace how any given decision is made. As a recent analysis in Forbes points out:
“As well as deep learning networks may perform at driving 99.9% of the time, this lack of interpretability becomes a real concern on those rare occasions when an AV makes the wrong decision and causes an accident. In those situations, humans have no way to explain what went wrong and no way to troubleshoot the error. Using deep learning in AV decision making, then, entails ceding control and even understanding to the machine. Not everyone thinks this tradeoff is worth it.”
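To make the contrast concrete, the logic approach can be caricatured as a handful of explicit, inspectable rules. The sketch below is purely illustrative (all names are hypothetical, and real AV software is vastly more complex), but it shows why rule-based decisions are easy to audit:

```python
# Toy illustration of the "simple logic" approach: every decision
# comes from an explicit, human-readable rule.

def rule_based_decision(light: str, pedestrian_ahead: bool) -> str:
    """Return a driving action from hard-coded if/then rules."""
    if pedestrian_ahead:
        return "brake"   # safety rule always wins
    if light == "red":
        return "stop"
    if light == "green":
        return "go"
    return "slow"        # yellow or unrecognized signal

# Each output can be traced to the exact rule that produced it.
print(rule_based_decision("red", pedestrian_ahead=False))   # stop
print(rule_based_decision("green", pedestrian_ahead=True))  # brake
```

A deep-learning policy, by contrast, replaces these if/then rules with millions of learned numerical weights, which is exactly why, as the Forbes analysis notes, its individual decisions cannot be traced back to a human-readable rule.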
What would happen if hackers took the wheel?
Handing over control of your car to a computer means potentially handing it over to hackers as well. Experts have raised the possibility of vehicle systems being held for ransom, with hackers forcing owners to pay to regain control of their cars. And back in 2015, hackers proved they could take control of a Jeep through the vehicle’s software and crash it.
Beyond the criminal nature of such acts, manufacturers face liability issues as well. As a report in Left Lane points out:
“Automakers have faced criticism from the computer security industry in recent years as vehicles become more electronically integrated and come equipped with wireless communications systems. Together, these features provide pathways for hackers to remotely access a vehicle and control its systems, potentially including steering and acceleration.”
How could infrastructure pitfalls cause problems for self-driving cars?
Self-driving cars cannot operate in a vacuum. They need certain support mechanisms that can only be provided through infrastructure. As an article in Government Technology magazine explains:
“Because of the radical change that AVs will bring to the current system of transportation, infrastructure pitfalls will become a glaring need. Often, AVs need clear lane striping, places to store the data collected by driving and if they run on electricity a more robust charging network. Without properly anticipating the sometimes opaque challenges, the system could be crippled in its infancy.”
How will safety legislation affect continued innovation?
As consumers, automakers, and legislators work through the scenarios of who could be held liable in self-driving car accidents, experts say there’s a fine balance to strike between keeping consumers safe and benefiting from advances in technology. As Harry Lightsey of General Motors told Government Technology magazine recently:
“I think the key is going to be providing room for folks to be innovative and to try new things. At the same time, a vehicle is a product that is so strongly tied to safety that we can’t just ignore that. Safety has to be at the top of everybody’s list in terms of how this change occurs.”
Who will be responsible for self-driving car crashes?
Although we may see new laws develop to account for the new challenges autonomous vehicles bring, there are many laws already in place that should apply to crashes involving self-driving cars. As attorney A.J. Bruning explains:
“Product liability laws hold car manufacturers, designers, and others in the chain of distribution responsible for defective systems and parts within a vehicle that end up causing harm to the consumer. Still, drivers could also be held responsible if their negligent actions somehow led the autonomous vehicle to crash. For example, if they were not properly maintaining the vehicle to make sure it was fit to be on the road. We are also likely to see situations where a car company and a driver share in the blame for a crash, with arguments arising over to what degree each party is responsible.”