We live in the future. Don’t hassle me here. Yesteryear’s dreams of science fiction are today’s realities, including self-driving cars. But as technology vaults ahead, our ethics are struggling to keep up.
Watch this to understand the basic ethical issues at play.
Basically, who decides beforehand who has the right to live when an accident occurs? When people are behind the wheel, we react unpredictably. But when humans aren’t the drivers, those decisions belong to the car’s programming.
That brings us to Knight Rider. The '80s saw this dilemma coming.
Back in the day, Wilton Knight and the Foundation for Law and Government (FLAG) created an artificial intelligence, plopped it into a 1982 Pontiac Trans-Am, and called it the Knight Industries Two Thousand (KITT). Along the way, this self-driving car was partnered with Michael Knight (David Hasselhoff's character) with the goal of preserving human life on a grand scale.
But KITT had a predecessor, the Knight Automated Roving Robot (KARR). The prototype was programmed with self-preservation in mind.
Here’s a clip from the episode "KITT vs. KARR" highlighting some of the issues that we’re talking about.
The options in programming between KITT (preserve as much human life as possible) and KARR (self-preservation at all costs) are the same ones that programmers face today with real-life self-driving cars.
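To make the contrast concrete, here is a minimal sketch of the two policies as decision functions. Everything here is a hypothetical illustration: the function names, the scenario format, and the "lives saved" scoring are invented for this example, not real autonomous-vehicle software.

```python
# Hypothetical sketch: two ways a self-driving car might rank its options.
# The scenario dictionaries and scoring fields are illustrative assumptions.

def kitt_policy(actions):
    """KITT-style: preserve as much human life as possible,
    counting occupants and bystanders equally."""
    return max(actions, key=lambda a: a["lives_saved_total"])

def karr_policy(actions):
    """KARR-style: self-preservation at all costs --
    protect the vehicle's own occupants first."""
    return max(actions, key=lambda a: a["occupant_lives_saved"])

# A toy dilemma: swerve (saves 3 pedestrians, risks the occupant)
# versus stay the course (saves the occupant, hits the pedestrians).
actions = [
    {"name": "swerve", "lives_saved_total": 3, "occupant_lives_saved": 0},
    {"name": "stay",   "lives_saved_total": 1, "occupant_lives_saved": 1},
]

print(kitt_policy(actions)["name"])  # KITT swerves to save the most lives
print(karr_policy(actions)["name"])  # KARR stays to save its occupant
```

The point of the toy scenario is that the two policies only diverge when the occupant's safety conflicts with the greater number, which is exactly the case the ethicists worry about.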
So which one do real people think should win?
According to the work of Jean-François Bonnefon at the Toulouse School of Economics in France, people favor KITT’s programming for cars that other people ride in, but KARR’s for any vehicle they themselves have to be in. So, basically, when it’s their own skin on the line, people side with self-preservation.
So, unlike Knight Rider’s optimistic conclusions, in real life, KARR wins.
What do you think about self-driving cars? How do you believe they should be programmed?