Self-driving cars have been around for some time, but their adoption rate remains low as they continue to be tested for existing and potential technical issues. Some also argue that these so-called technical issues are deeply embedded in discussions of norms, morality, and ethics, since algorithms are taking increasing control not just of our cars but of our everyday lives.
In a Journal of Consumer Research article, Tripat Gill examined these issues, given that people are still working out how much control they are willing to relinquish to an automated vehicle. Through a set of experiments, Gill observed a moral shift:
People judged it more permissible to harm a pedestrian (rather than have the harm fall on themselves) when the car was driven by its algorithm (self-driving mode) than when they drove it themselves. In other words, if the automated vehicle made the decision, then “responsibility” belonged to the algorithm, not to the person in the car.
This pattern held whether the harm was moderate or severe, and whether it was actual or imagined, in the case of a “one-to-one” dilemma (one pedestrian versus the passenger). The effect weakened when five pedestrians (rather than one) or a child would be harmed. When driving the car themselves (driver mode), people said they would try not to harm the pedestrians (the five or the child), even if it meant the driver might get hurt instead.
As technology takes more control of our lives, it becomes more challenging to understand and keep up with its unintended consequences. This study is valuable for suggesting one of them: the use of automated technologies has the potential to change moral norms and increase people's self-interest. It is also essential to point out that the question may not be just whether people will adopt these algorithmic agents, but what kind of morality they expect from them.
Should algorithmic agents prioritize saving the passenger or the pedestrian? Should customers of such agents be able to customize that morality? How would people's biases, reflected in such choices, influence society? What will trust mean between people? Between robots and people? Who will take responsibility? How would liability work for companies that manufacture or insure these vehicles? Can automated-agent morality be standardized? Whose standard would it be? Many more questions are floating around, as the research studies outlined at the end of Gill's article, and numerous others, make clear. It would be an understatement to say that both the philosophical and the practical debates on ethics in AI have the potential to take you places.