Karen, your last post got me thinking. The topic has suddenly ballooned from "are robots going to take our jobs?" into a much bigger realm with implications that are almost impossible to fathom. And considering that the starting point in that progression towards "impossible to fathom" is robots taking jobs from human beings (which in itself contains a fair few unfathomable possibilities), that's pretty impressive. It took some serious head-scratching, though, to figure out how to break it down into manageable chunks for our conversation.
Then, on Tuesday, someone gave me an incredible gift. I had the opportunity to hear the CEO of a very successful company in the automotive industry speak to the company's General Managers about the "New World" of automotive (note: this McKinsey report covers some of the same topics, for those who might need a quick primer). This CEO paid more than lip service to the future in a number of areas that are incredibly relevant to our discussion, none more important than autonomous (aka "driverless") cars. The gift, in the portion on robots doing the driving for us, was a challenge he said the insurance industry is facing:
What if a connected (another important topic in the CEO's "New World" presentation), autonomous vehicle's sensors observed a situation in which it had no choice but to either:
a) Hit and kill a young family of four pedestrians who inadvertently and suddenly landed in the car's path, or
b) Swerve to avoid killing them, but in doing so have no choice but to force an affluent, single man in his 50s into a barrier, killing him?
What do we want the robot to do?
This would clearly be a big challenge for the insurance industry, but think about it from the perspective of your last post: for that question to even be on the table (which is a given, since there are already robots driving people, and beer), somewhere, someone has already worked out how a robot would handle that choice, and what the RIGHT choice is for that robot to make. In fact, they haven't JUST considered it. They've already made it a reality. Think about the levels to that.
Can you imagine writing that logic into lines of code for a machine to run? Can you imagine having to write policy to allow that machine on the road? Can you imagine how you'd feel knowing a robot took out your uncle because a line of code told it that someone else's life mattered more? These questions are just the tip of the iceberg.
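To make that first question concrete, here is a deliberately toy sketch of what "writing that logic into lines of code" could look like. This is purely illustrative: the names, structure, and decision rule are my own invention, not how any real autonomous-vehicle system works (real systems are built to avoid ever reaching a branch like this). The point is that even the simplest version of the rule is a moral judgment typed into a file.

```python
# A toy illustration of the dilemma logic, NOT any real vehicle's code.
# All names and the decision rule are invented for this sketch.

from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """One possible maneuver and the harm it would cause."""
    maneuver: str
    fatalities: int

def choose_maneuver(outcomes: List[Outcome]) -> Outcome:
    # The coldest possible rule: minimize the number of deaths.
    # Even this one line embeds a moral judgment -- that lives are
    # interchangeable counts -- which is exactly the point above.
    return min(outcomes, key=lambda o: o.fatalities)

# The scenario from the CEO's challenge: stay the course (four
# pedestrians) or swerve (one man forced into a barrier).
scenario = [
    Outcome("stay_course", fatalities=4),
    Outcome("swerve", fatalities=1),
]

decision = choose_maneuver(scenario)
# A purely utilitarian rule picks "swerve" -- and a human being had
# to write that preference down, review it, and ship it.
```

Swap the rule (weight by age? by fault? never swerve?) and you get a different victim, which is why the policy and insurance questions are inseparable from the code itself.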
If we're at a place where the biggest issue is the risk analysis of that seemingly inevitable sort of situation, and/or how to underwrite an insurance policy for it, then a lot of those decisions, and the lines of code through which they will be executed (no pun intended), have already been hard-coded in.
Of course, what won't be so obvious to everyone is that the robot is connected to a far larger set of hard data than any human being will ever have, and can process that data and reach logical conclusions far faster than any human ever could. That's why they are beating us at Go and Jeopardy. I don't know this for certain, but I assume robot cars would almost always be able to sense, calculate and pre-empt such a situation before it ever arises.
So, many other connected devices in the IoT would have already modeled the scenario using real-time data and optimized the entire network well ahead of the "human error" that would ultimately cause the accident. But now I'm speaking "tech", something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to "make" that decision. Depending on your current world view, that's a worrying thought.
What’s more worrying to me, especially these days, is whether or not the passenger in the robot’s car would even look up from their tablet, or feel anything at all when heaven forbid, their robot car kills someone.