What will separate us from robots?

Over this past weekend, I was having a conversation with one of my favourite philosophers about a couple of books he has recently read, Sapiens: A Brief History of Humankind and the follow-up Homo Deus: A Brief History of Tomorrow, both by Yuval Noah Harari. In particular, there was a passage from Homo Deus that he was very keen to share with me (because it was about my favourite baseball team), and it sparked an idea for how to respond to your last post.

Where we left off was neuromorphic chips and robots being able not only to have feelings but to recall those feelings just as human beings do. And therein lies the rub. Sure, a robot can replicate a lot of things that humans can do, and this blog (and the many others like it) will be teeming with excitement over how this is all progressing, but does that make them human?

However, before I get to that, I should share where this journey has taken me. As I looked into neuromorphic chips, I became fascinated with reinforcement learning. One of my worries previously was about ‘who is teaching the robots’, but companies like DeepMind are teaching the robots to teach themselves through trial and error, just as humans do. In a nutshell, this approach involves giving robots goals to complete without giving them specific strategic training on how to complete them.
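
For the curious, here's what 'a goal but no strategy' can look like in practice: a minimal, hypothetical sketch of Q-learning, the trial-and-error idea behind a lot of this work. The corridor world, the reward and the parameter values below are all made up for illustration; this is not how DeepMind builds anything.

```python
import random

# A toy sketch of reinforcement learning (Q-learning): the agent is given only
# a goal (a reward for reaching the end of a 5-cell corridor), never a strategy,
# and works one out by trial and error. All values here are illustrative.
N_STATES = 5                    # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]              # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(300):
    state = 0
    for _ in range(100):                          # cap the episode length
        if state == N_STATES - 1:
            break                                 # goal reached
        if random.random() < epsilon:             # explore occasionally...
            action = random.choice(ACTIONS)
        else:                                     # ...otherwise exploit, breaking ties randomly
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the value estimate towards "reward now, plus the best we expect later"
        target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt

# The learned 'strategy' that nobody ever wrote down should come out as: always step right.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```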

One of the most mind-blowing (and news-generating) stories that I (and a few million other people) came across happened when some of Facebook’s robots, which were being taught to negotiate, invented a language of their own, and others learned how to lie in order to close the deal.

Now that one got me. Of course, my belief is that we have nothing to fear from robots, and that people need to turn their minds to all of the exciting new possibilities that will open up once we’re no longer required to perform the mundane, everyday tasks that currently fill our time, freeing us up to do the things we actually want to do. But if robots can lie, then maybe Elon Musk is right and we should be scared of AI.

With my limited knowledge of programming and technology, I have always felt confident in the belief that human beings were the ones writing the code that will govern robots’ behavior. I don’t know who first said it, but I took solace in the belief that the ability to reason creatively and independently is what would separate us from the machines. Using that ability, humans write the rules. And then all of a sudden, we’re not writing the rules. And even when we do write the rules, the robots are learning not to follow them.

And while that’s scary, there is still joy in it. We have robots writing songs (check out Bot Dylan, the robot that writes folk songs), producing new styles of art, and even making a movie. It seems that almost every day there is a new mind-blowing story about a robot doing something, just like humans do.

Only they aren’t doing them like humans. They are doing them like robots that are mimicking humans, based on data sets that we provide them. Somewhere inside all of these remarkable things that we’re watching robots learn to do are rules. (Hopefully including the Zeroth Law of Robotics as Rule #1.)

This is where my conversation went this weekend. In a world where robots will be able to do everything we can do, what will our role be? Phil shared a passage from Homo Deus about the Oakland Athletics, which was basically a summary of the book Moneyball. The passage described how Billy Beane (the General Manager of the A’s) used algorithms to find statistics that were more useful in constructing a winning baseball team, and suggested that baseball scouts fought against this change because they believed that selecting promising players was an art form, and that Moneyball proved them wrong.

Only it didn’t. The A’s haven’t won a World Series in the Billy Beane era. There are too many little moments that happen in baseball, and in life, that robots can’t account for. A robot’s optimum baseball team, painting, song lyric and so on will be based on rules. The legendary Bill Bernbach once said, “Rules are what the artist breaks; the memorable never emerged from a formula.”

It’s not our ability to reason creatively and independently that separates us from machines. It’s our ability to create art that breaks rules, and to weave our unique, individual stories into a shared experience that gives meaning to our lives, that robots will never be able to take away from us.


Emotional Blackma.il


After your last post, Drew, I have indeed been scratching my head, and it’s taken me a while to work out how best to respond; there are so many dark ways to go from here, and I want to bring it back a bit…

Two points in your post stood out to me. The first is around AI making the decision, which is explicit throughout. The second is less so; it comes towards the end, where you might almost miss its pertinence, and it’s this bit:

‘But now I’m speaking “tech”, something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to “make” that decision.’

This last point is one of the most discussed amongst anyone worried about AI ‘taking over’: the point where humans effectively hand over the controls by allowing AI to become more intelligent than us. In reality we’re a way off from this, but IT WILL happen eventually.

The first computer program reported to have passed the Turing test was Eugene Goostman in 2014, and it sparked a lot of debate about the ability to hold a conversation and what defines it. Fast forward to today and the conversation you can have with AI is much more sophisticated. Consider the shorts between Watson and a host of guests; I quite like this one with Ridley Scott, where in addition to clarity of structure, speed of response and total relevancy, you see humour and wit coming through from Watson as well. Watson is making decisions in real time to inform an interesting conversation.

So this brings me to my thought provocation, which is around the line between logical decision making and conscious decision making. The first is easy to comprehend, as it’s essentially a series of true v false (or 1 v 0 if you’re into code) decisions based on a probability factor. The second takes in additional parameters like feelings and emotions, which are more relevant to humans; if you’ve ever made an irrational ‘in the heat of the moment’ decision, it’s likely because you didn’t follow the logic of true v false to get there.
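
To make that distinction concrete, here's a deliberately crude, hypothetical sketch of a purely ‘logic’ decision: a handful of weighted inputs collapsed into a probability-like score, compared against a threshold, returning true or false. The function, its inputs and its weights are all invented for illustration; nowhere is there a parameter for how the decision feels.

```python
# A purely "logic" decision, hypothetical and simplified: weigh a few inputs
# into a probability-like score, compare it with a threshold, return True/False.
def should_accept_offer(price_ratio: float, urgency: float, trust_score: float) -> bool:
    # Illustrative weights only; a real system would learn these from data.
    p_good_outcome = 0.5 * price_ratio + 0.3 * trust_score + 0.2 * (1.0 - urgency)
    return p_good_outcome > 0.6       # true v false, 1 v 0

# e.g. a decent price, low urgency, a fairly trusted counterpart -> accept
print(should_accept_offer(price_ratio=0.9, urgency=0.2, trust_score=0.7))  # True
```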

Consider neuromorphic chips for a moment. They have been designed to replicate the way a human brain works, drawing parallels with neurological processes using artificial neural networks. Essentially, this means that AI is programmed to feel. Advances in AI will see us side by side with intelligent and sentient robots in our lifetime, so I find it fascinating to look at the differences (and similarities) in the way we (humans) and AI robots behave based on senses, perceptions and ‘feelings’.
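
As a rough illustration of what ‘drawing parallels from neurological processes’ means, here is a minimal leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic hardware models in silicon. The parameters and inputs below are arbitrary, a sketch of the idea rather than of any real chip; feeling, of course, appears nowhere in it, which is rather the question.

```python
# A minimal, illustrative leaky integrate-and-fire neuron: it accumulates input,
# leaks a little each step, and 'fires' a spike when a threshold is crossed.
# Parameters are arbitrary; this is a sketch of the unit, not of any real chip.
def simulate_neuron(input_current, threshold=1.0, leak=0.9):
    potential, spike_times = 0.0, []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # integrate the input, with leakage
        if potential >= threshold:               # fire once the threshold is crossed
            spike_times.append(t)
            potential = 0.0                      # reset after the spike
    return spike_times

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))   # e.g. [2, 5]
```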

If AI is programmed to recall a feeling, is that different to me remembering a feeling? If I recall the joy I felt from a particular moment, or the pain, or the excitement, is that so different from AI recalling the same feeling from a program? Whether that feeling is held in a chip or a brain, it exists. So what’s the difference between a feeling imagined, programmed or remembered?

If consciousness is defined as being aware of one’s surroundings, sensations, thoughts and existence, and AI is being developed to understand all of these things, then are we now speaking ‘feelings’, something humans understand but robots not so much? Or has that line become even more blurred?


Whose lives matter when Robots have to decide?

Karen, your last post got me thinking. The topic had suddenly ballooned out from “are robots going to take our jobs” into a much bigger realm, with implications that are almost impossible to fathom. And considering that the starting point in that progression towards “impossible to fathom” is robots taking jobs from human beings (which in itself contains a fair few unfathomable possibilities), that’s pretty impressive. However, it required some serious head-scratching to figure out how to break it down into manageable chunks for our conversation.

Then, on Tuesday, someone gave me an incredible gift: the opportunity to hear the CEO of a very successful company in the automotive industry speak to the company’s General Managers about the “New World” of automotive (note: this McKinsey report covers some of the same topics, for those who might need a quick primer). This CEO paid more than lip service to the future in a number of areas that are incredibly relevant to our discussion. None, of course, more important than autonomous (aka “driverless”) cars. The gift, in the portion on robots doing the driving for us, was a challenge he discussed that the insurance industry is facing:

If a connected (another important topic in the CEO’s “New World” presentation), autonomous vehicle’s sensors observed a situation in which the car had no choice but to either:

a) Hit and kill a young family of four pedestrians who inadvertently and suddenly landed themselves in the car’s path
b) Swerve to avoid killing them, but in doing so have no choice but to force an affluent, single man in his 50s into a barrier, which will kill him

What do we want the robot to do?

This would clearly be a big challenge for the insurance industry, but thinking about it from the perspective of your last post… for that scenario to even have been considered (which is a given, since there are already robots driving people, and beer), somewhere, someone has already considered how a robot would handle that choice, and what the RIGHT choice is for that robot to make. In fact, they haven’t JUST considered it. They’ve already made it a reality. Think about the levels to that.

Can you imagine writing that logic into lines of code for a machine to run? Can you imagine having to write policy to allow that machine on the road? Can you imagine how you’d feel knowing a robot took out your Uncle because a line of code told it that someone else’s life mattered more? These questions are just the tip of the iceberg.
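
Purely to make that discomfort concrete, here is a deliberately over-simplified, hypothetical sketch of what such logic might look like if someone naively wrote it down. No real vehicle works this way; the point is only that once a value judgement is written as code, it becomes a cold comparison that somebody, somewhere, had to choose.

```python
# Hypothetical and grossly simplified: the uncomfortable part is not the code,
# it's that a human decided what "best" means before the car ever left the lot.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int

def choose_manoeuvre(options: list) -> Outcome:
    # "Minimise expected fatalities" sounds neutral, until you remember that this
    # single line is where someone decided whose life mattered more.
    return min(options, key=lambda o: o.expected_fatalities)

options = [
    Outcome("continue straight: hit the family of four", expected_fatalities=4),
    Outcome("swerve: force the lone man into the barrier", expected_fatalities=1),
]
print(choose_manoeuvre(options).description)   # "swerve: force the lone man into the barrier"
```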

If we’re at a place where the biggest issue is the risk analysis of that seemingly inevitable sort of situation, and/or how to underwrite such an insurance policy, then a lot of those decisions, and the lines of code through which they will be executed (no pun intended), have already been hard-wired in.

Of course, what won’t be so obvious to everyone will be things like the fact that the robot is connected to a much larger set of hard data than any human being will ever have, and will be able to process that hard data and come to logical conclusions much faster than any human being ever could. That’s why they are beating us at “Go” and “Jeopardy”. I don’t know this for certain, but I assume that robot cars would almost never end up in a situation they couldn’t have sensed, calculated and pre-empted.

So, many other connected devices in the IoT would have already modeled the scenario using real-time data and optimized the entire network way ahead of the “human error” that would ultimately be the cause of the accident. But now I’m speaking “tech”, something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to “make” that decision. Depending on your current world view, that’s a worrying thought.

What’s more worrying to me, especially these days, is whether or not the passenger in the robot car would even look up from their tablet, or feel anything at all when, heaven forbid, it kills someone.

Can a robot be taught to understand the human ethos of right v wrong?

I’ve been reading up recently on the advances in robot machinery and developments to help improve durability, which you can read about here if you’re interested.

However, this led me to ponder just how ‘human’ robots will become and, with this, how responsible they will need to be.

My thought process went a bit like this…

1. There is a very real situation where intelligent machines could be welcomed to replace humans in military operations, as the use of drones in conflict areas continues to rise.

2. Currently these drones are operated by humans, and because the drone is classed as a weapon in this scenario, the operator is accountable for any fatality caused, even though operation happens remotely. In these situations, days, sometimes weeks, of planning are allowed before an exercise is executed. It’s arguably different to facing your opponent in combat, which a soldier on the ground will need to do, but the PTSD suffered by operators has been reported to be similar.

3. The automation of these combat drones, or Unmanned Aerial Vehicles (UAVs), is on the rise, and we’re talking months rather than years before full automation is reached, likely a relief to drone operators. But with civilians being caught in the crossfire of drone use, who will be responsible for the actions in a fully automated drone attack?

Then I came across an article by a researcher in robot ethics who posed a question as to whether advanced robots should be held accountable for their actions…

Separately, I was reading an article about ‘Man becoming the weakest link’ in relation to processing multiple scenarios at once to reach the highest probable success factor, which is what AI is increasingly being adopted for in order to solve many problems more rapidly around the world, from cancer research to dietary requirements.

So…

4. Join these multiple advancements in drone technology, machine durability and intelligence-based analysis, and on one hand you have a military weapon with precise insight and no risk to a life; on the other, you have somewhere else to point the finger.

5. This is just one example. When you think about intelligent machines being trained as surgeons, autopilots on planes, trains and cars getting more sophisticated, service layers being automated, and all the things we talked about in our earlier posts, it seems robots will soon be able to do almost anything.

Who takes the blame when they get it wrong?

Is it them that gets it wrong?

Can a robot be taught right from wrong?

GEN.I.A.S

Fearing or embracing the evolution of the relationship between man and machine is an interesting balance indeed, and one that makes me wonder about the human perception of ‘intelligence’.

The question I pose is this: Is it the intelligence we fear, or the comparison to our own ineffectiveness? Note that I didn’t put a negative slant on intelligence there, but it would be remiss of me to gloss over the fact that we have become innately lazy because advances in technology have allowed us to be.

As a species we’re knowledge-hungry, and thanks to the internet we can find out pretty much anything we want, anywhere we want, simply at the touch of a button.

The average IQ of the human race is increasing by around 3 points each decade, as explained by James Flynn in this TED Talk (which is fascinating if you have a spare 15 minutes). Humans are, measurably, becoming more intelligent. The genius IQ of 160 attributed to Einstein is looking less advanced, and though it hasn’t been exhaustively proven, this rise in intelligence is in part down to technology giving us the ability to access, compile and recall information and knowledge far more readily.

It would seem we embrace the rise of our intelligence, yet fear the systems that support this growth as they become ever more automated, more intuitive, more… artificial.

When we couple intelligence with an artificial brain, suddenly the very thing that aids us now threatens us. So I believe that, for the masses, the line between security and insecurity with intelligence lies in where the power sits.

At the end of the day, the evolution of the world as we know it is down to human intelligence. It’s far more likely that intelligent systems will frustrate us whilst following our orders, because the result isn’t quite what we would have done or how we would have gone about it, than it is that they will consciously rise up against us.

Today we have parameters in place that require human input in the governance of intelligence, even if it’s as small as the push of a button to confirm a query or an automation. It is up to us whether we remove that decision and make it artificially enabled.

So what tomorrow holds in this next era of man and machine working together is down to us.

In my view we should stop thinking about what parameters we put around intelligence and instead start thinking about what objectives we might set for it.

160 you say? Let’s beat that.

WHY ARE WE SO PESSIMISTIC (ABOUT MACHINES TAKING OVER)?

Do a quick search for news articles about Artificial Intelligence and you’ll come across a lot of headlines about “machines taking jobs” or, depending on how bleak the author may have been feeling, taking more than jobs (See: Skynet).

Quite rightly, we’re fascinated by machines. In many respects, they are taking over. I just don’t entirely understand why we’re all assuming this will be terrible. It strikes me that we’re broadly viewing the issue from the wrong angle. The discussion about machines, which generally revolves around Artificial Intelligence (or “AI”), tends to lead towards the viewpoint that it’s a bad thing. It’s often all about what they are “taking” from us.

On the one hand, I get it. It’s in our nature to fear new things.

During the Industrial Revolution, it was railroads, electricity, and cars, along with smaller but no less impactful machines like James Hargreaves’ ‘Spinning Jenny’ or Eli Whitney’s cotton gin, that scared us. Textile workers (the kind later known as ‘Luddites’), most of whom made cloth in their homes and saw themselves as artisans, were so afraid of Hargreaves’ machine that they broke into his home and destroyed it (one of many forms of protest against machines during the Industrial Revolution). These fears were not completely unfounded either; there have been dramatic decreases in a number of occupations that can be directly tied to the existence of trains, cars, electricity and even the Spinning Jenny.

So yes, I get it. When jobs are automated, it’s scary and can lead to bad things for some people. We don’t have as many farmers, blacksmiths and basket-weavers as we once did. But we’ve survived, haven’t we? (We have, and here’s proof.) And not only that, but when you look at quality of life today versus quality of life in the pre-Industrial Revolution era, you could argue that by most counts we’re better off. Which, to me, raises an important question as we debate the next wave of evolution in the relationship between man and machine.

Why are so few of us looking at what human beings will gain instead of panicking about what we’re losing?

We now have machines that can ‘think’ and not just ‘do’. Computers have intelligence and cognitive abilities. They are no longer just there to extend our physical capabilities and I believe we should be excited about this and about the potential it unleashes in mankind. I’d like us to ponder this.

Will we really be lost if we no longer have to remember to stop and get milk on the way home? When our self-driving car is dropping us off at work? When our banking app automatically transfers some money from savings to make sure that a bill gets paid on time? When a messaging app books our holiday for us? When we can 3D print our dinner?

Posted by Drew

ARE HUMANS DESTINED FOR THE SCRAP HEAP?

Increasingly, in many factories around the world, humans are only really needed to feed the machinery and clean up after the robotic arms that complete the monotonous tasks of picking, packing and pushing objects through the production lines, everything from car parts to cake slices.

Many companies have set targets to automate up to 50% of their workforce using robotics, and whilst one half of the human world are screaming out for these jobs, the other half are screaming for immediate redemption and low costs. So where do we balance out?

Robots taking over? Yep, pretty much. And it’s not just in manufacturing.

The Henn-na hotel in Nagasaki, Japan, is the first of its kind to be staffed almost entirely by robots: from check-in, through dining and service (you’ll have your sushi served up by a raptor, no less), to check-out and departure.

The rise of the robots is being called the fourth industrial revolution, and fears and concerns about ‘technological unemployment’ continue to grow.

And it doesn’t stop there. Deep learning is moving robots beyond the mundane, repetitive tasks towards algorithms that rinse and repeat, but then learn and decipher too.

So what’s the human advantage?