I was listening to this story on NPR today on my drive home from work, and the reporter said something like 30,000 deaths [a year, I assume] are from car accidents, 95% of which are attributable at least in part to driver error, and wouldn’t it be nice if driverless car technology could help reduce or eliminate all those deaths. Now, I don’t know how fuzzy his statistics are, but in general I agree.
But that’s where it gets interesting – this moves from solving a technological problem to a more existential question about what freedoms and responsibilities we are willing to relinquish. Ideally, driverless cars would create a traffic network free of congestion and collisions. However, we know that machines are only as error-free as the people who program them, and that sometimes they are simply unreliable. That means that even in a world where computers control all traffic, some number of deaths would still occur. The question is – what is our threshold for computer-related deaths? From our baseline of 30,000 deaths per year, would we accept 5,000 computer-fault deaths a year? 10,000? 20,000?
I think there is something in our nature that abhors a reality in which all of our personal responsibility and ability to react are taken away from us, even if lives can be saved. It could be argued that, given the choice between driver and driverless, if the driverless option produced on average even one fewer death a year, it would be preferable.
And then there’s the issue of car insurance. How would that work?
You make an excellent point. That’s Allstate’s stance.