What will separate us from robots?

Over this past weekend, I was having a conversation with one of my favorite philosophers about a couple of books that he has recently read, Sapiens: A Brief History of Humankind and the follow-up Homo Deus: A Brief History of Tomorrow, both by Yuval Noah Harari. In particular, there was a passage from Homo Deus that he was very keen to share with me (because it was about my favorite baseball team) that sparked an idea for how to respond to your last post.

Where we left off was with neuromorphic chips and robots being able not only to have feelings but to recall those feelings just as human beings do. And therein lies the rub. Sure, a robot can replicate a lot of things that humans can do, and this blog (and the many others like it) will be teeming with excitement over how this is all progressing, but does that make them human?

However, before I get to that, I should share where this journey has taken me. As I looked into neuromorphic chips, I became fascinated with reinforcement learning. One of my worries previously was about ‘who was teaching the robots’, but companies like DeepMind are teaching the robots to teach themselves through trial and error, just as humans do. In a nutshell, this approach involves giving robots goals to complete without giving them specific strategies for how to complete them.
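(For the technically curious, here’s a minimal sketch of that idea in code. This is a toy Q-learning loop, not anything DeepMind actually uses; the little corridor world, the reward numbers and the parameter values are all invented for illustration.)

```python
import random

# Toy corridor world: the agent starts at position 0; the goal is position 4.
# The only thing we give it is a reward signal, never a strategy.
GOAL = 4
ACTIONS = [-1, +1]  # step left or step right

# Q-table: the learned value of each action in each state, starting from zero
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Trial and error: mostly exploit what it has learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else -0.01  # a goal, not a how-to
        # The Q-learning update: adjust the estimate based on experience
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy ("always step right") was discovered, never programmed
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

Run it and the final line prints [1, 1, 1, 1]: the robot worked out the strategy on its own.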

One of the most mind-blowing (and news-generating) stories that I (and a few million other people) came across happened when some of Facebook’s bots that were being taught to negotiate invented a language of their own, and others learned how to lie in order to close the deal.

Now that one got me. Of course, my belief is that we have nothing to fear from robots, and that people need to be turning their minds to all of the exciting new possibilities that will exist for us once we’re no longer required to perform all of the mundane, everyday tasks that currently fill our days, freeing us up to do the things we actually want to do. But if robots can lie, then maybe Elon Musk is right and we should be scared of AI.

With my limited knowledge of programming and technology, I have always felt confident in the belief that human beings were the ones writing the code that governs robots’ behavior. I don’t know who first said it, but I took solace in the belief that the ability to reason creatively and independently is what would separate us from the machines. Using that ability, humans write the rules. And then, all of a sudden, we’re not writing the rules. And even when we do write the rules, the robots are learning not to follow them.

And while that’s scary, there is still joy in it. We have robots writing songs (check out Bot Dylan, the robot that writes folk songs), producing new styles of art, and even a movie. It seems that almost every day there is a new mind-blowing story about a robot doing something, just like humans do.

Only they aren’t doing them like humans. They are doing them like robots who are mimicking humans, based on data sets that we provide them. Somewhere inside all of these remarkable things that we’re watching robots learn to do are rules. (Hopefully including the Zeroth Law of Robotics as Rule #1).

This is where my conversation went this weekend. In a world where robots will be able to do everything we can do, what will our role be? Phil shared a passage from Homo Deus about the Oakland Athletics, which was basically a summary of the book Moneyball. The passage described how Billy Beane (the General Manager of the A’s) used algorithms to find statistics that were more useful in constructing a winning baseball team, and suggested that baseball scouts fought against this change because they believed that selecting promising players was an art form, and that Moneyball proved them wrong.
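(To make the statistical argument concrete, here’s a toy comparison with numbers I’ve invented: a hitter who looks worse by the traditional batting average can look better by on-base percentage, the statistic the A’s front office prized.)

```python
# Toy Moneyball-style comparison; every number here is made up for illustration.
players = [
    # name,             hits, walks, at_bats
    ("Flashy Prospect",  150,    20,     500),  # high average, rarely walks
    ("Patient Veteran",  130,    90,     500),  # lower average, walks a lot
]

for name, hits, walks, at_bats in players:
    batting_avg = hits / at_bats
    on_base_pct = (hits + walks) / (at_bats + walks)  # simplified OBP formula
    print(f"{name}: AVG {batting_avg:.3f}, OBP {on_base_pct:.3f}")
```

The scout’s eye favors the first player (.300 vs .260); the algorithm favors the second (.373 vs .327 on-base).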

Only it didn’t prove them wrong. The A’s haven’t won a World Series in the Billy Beane era. There are too many little moments that happen in baseball, and in life, that robots can’t account for. A robot’s optimum baseball team, painting, song lyric and so on will be based on rules. The legendary Bill Bernbach once said, “Rules are what the artist breaks; the memorable never emerged from a formula.”

It’s not our ability to reason creatively and independently that separates us from machines. It’s our ability to create art that breaks the rules, art that weaves our unique, individual stories into a shared experience that gives meaning to our lives. That is what robots will never be able to take away from us.


Whose lives matter when robots have to decide?

Karen, your last post got me thinking. The topic had suddenly ballooned from “are robots going to take our jobs” into a much bigger realm with implications that are almost impossible to fathom. And considering that the starting point in that progression towards “impossible to fathom” is robots taking jobs from human beings (which in itself contains a fair few unfathomable possibilities), that’s pretty impressive. However, it required some serious head-scratching to figure out how to break it down into some manageable chunks for our conversation.

Then, on Tuesday, someone gave me an incredible gift: the opportunity to hear the CEO of a very successful company in the automotive industry speak to the company’s General Managers about the “New World” of automotive (note: this McKinsey report covers some of the same topics, for those who might need a quick primer). This CEO paid more than lip service to the future in a number of areas that are incredibly relevant to our discussion. None, of course, more important than autonomous (aka “driverless”) cars. The gift came in the portion on robots doing the driving for us, where he discussed a challenge that the insurance industry is facing:

If a connected (another important topic in the CEO’s “New World” presentation), autonomous vehicle’s sensors observed a situation in which the car had no choice but to either:

a) Hit and kill a young family of four pedestrians who inadvertently and suddenly landed themselves in the car’s path, or
b) Swerve to avoid killing them, but in doing so have no choice but to force an affluent, single man in his 50s into a barrier, which will kill him

What do we want the robot to do?

This would clearly be a big challenge for the insurance industry, but thinking about it from the perspective of your last post… for that scenario to even have been considered (which is a given, since there are already robots driving people, and beer) means that somewhere, someone has already considered how a robot would handle that choice, and what the RIGHT choice is for that robot to make. In fact, they haven’t JUST considered it. They’ve already made it a reality. Think about the levels to that.

Can you imagine writing that logic into lines of code for a machine to run? Can you imagine having to write policy to allow that machine on the road? Can you imagine how you’d feel knowing a robot took out your uncle because a line of code told it that someone else’s life mattered more? These questions are just the tip of the iceberg.
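(Just to make the discomfort concrete, here is roughly what a crude version of that logic could look like. This is entirely hypothetical; no manufacturer has published code like this, and every name and number below is invented.)

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int

def choose_path(stay: Outcome, swerve: Outcome) -> Outcome:
    # The chilling part is that someone has to commit a rule like this to code.
    # This toy rule simply minimizes the expected number of deaths.
    return stay if stay.expected_fatalities <= swerve.expected_fatalities else swerve

stay = Outcome("continue straight into the family of four", expected_fatalities=4)
swerve = Outcome("swerve the single pedestrian into the barrier", expected_fatalities=1)

print(choose_path(stay, swerve).description)  # the car "chooses" to swerve
```

One line of arithmetic, and somebody’s uncle doesn’t come home.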

If we’re at a place where the biggest issue is the risk analysis of that seemingly inevitable sort of situation, and/or how to underwrite such an insurance policy, then a lot of those decisions, and the lines of code through which they will be executed (no pun intended), have already been hard-coded in.

Of course, what won’t be so obvious to everyone will be things like the fact that the robot is connected to a much larger set of hard data than any human being will ever have, and will be able to process that hard data and come to logical conclusions much faster than any human being ever could. That’s why they are beating us at “Go” and “Jeopardy”. I don’t know this for certain, but I assume that robot cars would almost never find themselves in a situation they couldn’t sense, calculate and pre-empt.

So, many other connected devices in the IoT would have already modeled the scenario using real-time data and optimized the entire network way ahead of the “human error” that would ultimately be the cause of the accident. But now I’m speaking “tech”, something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to “make” that decision. Depending on your current world view, that’s a worrying thought.

What’s more worrying to me, especially these days, is whether or not the passenger in the robot’s car would even look up from their tablet, or feel anything at all when, heaven forbid, their robot car kills someone.