If tech tries to define an algorithm for society, who owns the judgement call?


I’ve been hatching plans. Not new news, I know. But this time I’ve been doing it with someone who is far deeper into the rights of humanity than I am, and it has sparked some seriously mind (and algorithm) bending questions.

Here’s one to jump straight into the deep end with: as we move towards Web 5.0, who’s thinking about Humanity 5.0?

We’re so busy defining what Web 5.0 is (Sir Tim Berners-Lee offers an interesting view in this TED Talk) and trying to perceive what an open, connected, intelligent and potentially emotional web is, that we’ve kinda lost track of the fact that the emotions of people will be read, understood and…

And what?


We are at a crossroads as technology and humanity begin to converge. As such, we have some fundamental choices to make in order to ensure the future of our collective society is shaped and created by us, not for us.

Drew, I agree with your closing statement in your last post, but I challenge humanity’s laziness – I think that our acceptance of the ease that technology, and the connectivity that comes with it, brings could be the underlying issue here.

When was the last time you navigated somewhere new without Google maps?

When was the last time you ordered takeout without Deliveroo?

Booked a flight without Skyscanner? An Airbnb without… oh wait.

Google it, Uber it, Airbnb it. Everyday phrases that are today’s version of ‘hoovering’, and all sophisticated platforms that answer today’s demanding needs with ease, thereby raising the expectations of an already expectant audience.

And that poses another question: In a world increasingly built on algorithms, where is the algorithm for society?

If we are to step back and define what we want these technologies to do to serve us, they first need to know us. Better yet, they need to understand us.

According to the WEF ‘Future of Jobs’ Report 2016, society will change more in the next 25 years than it has in the last 200, due to rapid progression in both technology and connectivity. We’re facing the 4th Industrial Revolution (4IR) and a perfect storm of convergence which, according to Moore’s Law, WILL get exponentially faster. But humans are not exponential like technology; we can really only do things exponentially better and faster with technology. EEK. And there we go again.
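The gap between an exponential curve and a linear one is easy to underestimate, so here’s a quick back-of-the-envelope sketch. Moore’s observation is a doubling of transistor density roughly every two years; the 1%-per-year figure for human improvement is purely an invented baseline for contrast:

```python
# Illustrative only: exponential tech capability vs a linear human baseline.
def relative_capability(years_from_now, doubling_period=2.0, start=1.0):
    """Relative capability after `years_from_now`, doubling every `doubling_period` years."""
    return start * 2 ** (years_from_now / doubling_period)

for years in (0, 10, 25):
    tech = relative_capability(years)
    human = 1.0 + 0.01 * years  # assume humans improve ~1% per year, linearly
    print(f"{years:>2} yrs: tech x{tech:,.0f} vs human x{human:.2f}")
```

By the WEF’s 25-year horizon, the doubling curve has left the linear one behind by three orders of magnitude, which is the “EEK” in numbers.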

Another question: Who, or what is writing the rules?

In an earlier post, we explored who is responsible, drone or human, when an Unmanned Aerial Vehicle (UAV) programmed with an AI written by a human kills a civilian during an air strike. More people reached out to say the human than the technology.

But algorithms shape our thoughts and influence our actions in ways we don’t even realise; the recent deception by Cambridge Analytica shows the power an algorithm can wield over our collective.

This article draws parallels between the many algorithms that help run our lives, summed up nicely in the following paragraph:

‘In short, the way we go about our lives mimics the way we engage with the internet. Algorithms are an easy way out, because they allow us to take the messiness of human life, the tangled web of relationships and potential matches, and do one of two things: Apply a clear, algorithmic framework to deal with it, or just let the actual algorithm make the choice for us. We’re forced to adapt to and work around algorithms, rather than use technology on our terms.’

Additionally, all of these algorithms and technologies are converging and they’re starting to anticipate our needs before we even know we have a need.

So are humans leading algorithms, or are algorithms leading humans? And, if tech tries to define an algorithm for society, who owns the judgement call?


Emotional Blackmail


After your last post, Drew, I have indeed been scratching my head, and it’s taken me a while to work out how best to respond; there are so many dark ways to go from here, and I want to bring it back a bit…

There are two points in your post that stood out to me. The first is around AI making the decision, which is explicit throughout. The second is less so, as it comes towards the end and you might almost miss its pertinence. It’s this bit:

‘But now I’m speaking “tech”, something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to “make” that decision.’

This last point is one of the most discussed amongst anyone worried about AI ‘taking over’: the point where humans effectively hand over the controls by allowing AI to become more intelligent than us. In reality we’re a way off, but IT WILL happen eventually.

The first computer program claimed to have passed the Turing test was Eugene Goostman in 2014; it sparked a lot of debate about the ability to hold a conversation and what defines it. Fast forward to today and the conversation you can have with an AI is much more sophisticated. Consider the shorts between Watson and a host of guests (I quite like this one with Ridley Scott): in addition to clarity of structure, speed of response and total relevancy, you see humour and wit coming through from Watson as well. Watson is making decisions in real time to inform an interesting conversation.

So this brings me to my thought provocation, which is around the line between logic decision making and conscious decision making. The first is easy to comprehend, as it’s essentially a series of true-vs-false (or 1-vs-0 if you’re into code) decisions based on a probability factor. The second takes in additional parameters like feelings and emotions, which are more relevant to humans; if you’ve ever made an irrational ‘in the heat of the moment’ decision, it’s likely because you didn’t follow the logic of true vs false to get there.
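One way to picture that line is a toy decision function: the ‘logic’ path is a bare probability threshold, while the ‘heated’ path mixes in an emotion score. This is a loose illustration, not a model of consciousness; the function names and the 0.3 weighting are invented:

```python
def logic_decision(probability, threshold=0.5):
    """Pure true-vs-false: act if and only if the odds clear the threshold."""
    return probability >= threshold

def heated_decision(probability, emotion, threshold=0.5):
    """Same logic, but an emotion score in [-1, 1] skews the odds --
    a crude stand-in for 'in the heat of the moment'."""
    skewed = probability + 0.3 * emotion  # invented weighting
    return skewed >= threshold

# A 40% chance of success: the logic path says no,
# but strong excitement (0.9) flips the heated path to yes.
print(logic_decision(0.4))        # False
print(heated_decision(0.4, 0.9))  # True: 0.4 + 0.27 = 0.67
```

The point of the sketch is that the two functions differ by a single extra input, yet that input is exactly the part we don’t know how to measure in ourselves.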

Consider neuromorphic chips for a moment: they have been designed to replicate the way a human brain works, drawing parallels with neurological processes using artificial neural networks. Essentially, this means that AI is being programmed to feel. Advances in AI will see us side by side with intelligent and sentient robots in our lifetime, so I find it fascinating to look at the differences (and similarities) in the way we (humans) and AI robots behave based on senses, perceptions and ‘feelings’.
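The artificial neural networks mentioned here are built up from very simple units: a single artificial neuron is just a weighted sum of its inputs pushed through an activation function. A minimal sketch, with arbitrary inputs and weights:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed by a sigmoid --
    a rough software analogue of a biological neuron firing."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: output in (0, 1)

# Two 'sensory' inputs with arbitrary weights; output is a firing strength.
activation = neuron([0.5, 0.8], [1.2, -0.4], bias=0.1)
print(round(activation, 3))
```

Everything from Watson’s wit to a neuromorphic chip ‘feeling’ is, at bottom, vast numbers of units like this wired together, which is exactly what makes the feeling-vs-program question so slippery.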

If AI is programmed to recall a feeling, is that different to me remembering a feeling? If I recall the joy I felt from a particular moment, or the pain, or the excitement, is that so different from AI recalling the same feeling from a program? Whether that feeling is held in a chip or a brain, it exists. So what’s the difference between a feeling imagined, programmed or remembered?

If consciousness is defined as being aware of one’s surroundings, sensations, thoughts and existence, and AI is being developed to understand all of these things, then are we now speaking ‘feelings’, something humans understand but robots not so much? Or has that line become even more blurred?








Fearing or embracing the evolution of the relationship between man and machine is an interesting balance indeed, and one that makes me wonder about the human perception of ‘intelligence’.

The question I pose is this: is it the intelligence we fear, or the comparison to our own ineffectiveness? Note that I didn’t put a negative slant on intelligence there, but it would be remiss of me to gloss over the fact that we have become innately lazy because advances in technology have allowed us to be.

As a species we’re knowledge hungry and thanks to the internet we can find out pretty much anything we want, anywhere we want, simply at the touch of a button.

The average IQ of the human race is increasing by around 3 points each decade, as explained by James Flynn in this TED Talk (which is fascinating if you have a spare 15 minutes). It is a fact that humans are becoming more intelligent. Einstein’s genius-level IQ of 160 is looking less exceptional, and though it hasn’t been exhaustively proven, this rise in intelligence is in part down to technology giving us the ability to access, compile and remember information and knowledge in a more readily accessible way.
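The arithmetic behind that claim is simple enough to sketch. The 3-points-per-decade figure is the rough one from Flynn’s talk; the linear projection is a deliberate over-simplification (in practice IQ tests are periodically re-normed so the published mean stays at 100):

```python
def projected_mean_iq(years_from_now, current_mean=100, points_per_decade=3):
    """Naive linear projection of the Flynn effect."""
    return current_mean + points_per_decade * (years_from_now / 10)

# On this naive projection, how long until today's average reaches Einstein's 160?
years_to_160 = (160 - 100) / 3 * 10
print(projected_mean_iq(50))  # 115.0
print(years_to_160)           # 200.0 years
```

Two centuries to make the average person an ‘Einstein’ on paper, which is exactly why the 160 figure starts to look less exceptional over time.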

It would seem we embrace the rise of our intelligence, yet fear the systems that support this growth as they become ever more automated, more intuitive, more… artificial.

When we couple intelligence with an artificial brain, suddenly the very thing that aids us now threatens us. I therefore believe that, for the masses, the line between security and insecurity around intelligence lies in where the power sits.

At the end of the day, the evolution of the world as we know it is down to human intelligence. It’s far more likely that intelligent systems will frustrate us whilst following our orders, because it’s not quite what we would have done or how we would have gone about it, than it is that they will consciously rise up against us.

Today we have parameters in place that require human input in the governance of intelligence, even if it’s as small as the push of a button to confirm a query or automation. It is up to us to remove that decision and make it artificially enabled.
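That ‘push of a button’ can be pictured as a confirmation gate sitting in front of every automated action. A deliberately minimal sketch (the action name and function names are invented):

```python
def run_action(action, human_confirm):
    """Execute an automated action only if the human-in-the-loop approves.
    `human_confirm` is any callable returning True/False, e.g. a button press."""
    if human_confirm(action):
        return f"executed: {action}"
    return f"blocked: {action}"

# Today: a human gate decides.
print(run_action("transfer_funds", human_confirm=lambda a: False))  # blocked

# 'Making it artificially enabled' is as small a change as
# swapping the gate for an always-True callable.
print(run_action("transfer_funds", human_confirm=lambda a: True))   # executed
```

The unsettling part is how little code separates the two worlds: the governance lives in a single parameter, and removing it is a one-line decision.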

So what tomorrow holds in this next era of man and machine working together, is down to us.

In my view we should stop thinking about what parameters we put around intelligence and instead start thinking about what objectives we might set for it.

160 you say? Let’s beat that.


Do a quick search for news articles about Artificial Intelligence and you’ll come across a lot of headlines about “machines taking jobs” or, depending on how bleak the author may have been feeling, taking more than jobs (See: Skynet).

Quite rightly, we’re fascinated by machines. In many respects, they are taking over. I just don’t entirely understand why we’re all assuming this will be terrible. It strikes me that, in general, we’re viewing the issue from the wrong angle. The discussion about machines, which generally revolves around Artificial Intelligence (or “AI”), tends to lead towards the viewpoint that it’s a bad thing. It’s often all about what they are “taking” from us.

On the one hand, I get it. It’s in our nature to fear new things.

During the Industrial Revolution, it was railroads, electricity and cars – along with smaller, but no less impactful machines like James Hargreaves’ ‘Spinning Jenny’ or Eli Whitney’s cotton gin – that scared us. Textile workers, most of whom made cloth in their homes and saw themselves as artisans, were so afraid of Hargreaves’ machine that they broke into his home and destroyed it, one of many acts of protest against machines in that era (later machine-breakers became known as ‘Luddites’). These fears were not completely unfounded either; there have been dramatic decreases in a number of occupations that can be directly tied to the existence of trains, cars, electricity and even the Spinning Jenny.

So yes, I get it. When jobs are automated, it’s scary and can lead to bad things for some people. We don’t have as many farmers, blacksmiths and basket-weavers as we once did. But we’ve survived, haven’t we? (We have, and here’s proof.) And not only that: when you look at quality of life today versus quality of life in the pre-Industrial Revolution era, you could argue that by most counts we’re better off. Which, to me, raises an important question as we debate the next wave of evolution in the relationship between man and machine.

Why are so few of us looking at what human beings will gain instead of panicking about what we’re losing?

We now have machines that can ‘think’ and not just ‘do’. Computers have intelligence and cognitive abilities. They are no longer just there to extend our physical capabilities and I believe we should be excited about this and about the potential it unleashes in mankind. I’d like us to ponder this.

Will we really be lost if we no longer have to remember to stop and get milk on the way home? When our self-driving car is dropping us off at work? When our banking app automatically transfers some money from savings to make sure that a bill gets paid on time? When a messaging app books our holiday for us? When we can 3D print our dinner?

Posted by Drew


Increasingly, in many factories around the world, humans are only really needed to feed the machinery and clean up after the robotic arms that complete the monotonous tasks of picking, packing and pushing objects through the production lines: everything from car parts to cake slices.

Many companies have set targets to automate up to 50% of their workforce using robotics, and whilst one half of the human world is screaming out for these jobs, the other half is screaming for instant fulfilment and low costs. So where do we balance out?

Robots taking over? Yep, pretty much. And it’s not just in manufacturing.

The Henn-na Hotel in Nagasaki, Japan, is the first of its kind, staffed almost entirely by robots: from check-in, through dining and service (you’ll have your sushi served up by a raptor, no less), to check-out and departure.

The rise of the robots is being coined the fourth industrial revolution, and fears of ‘technological unemployment’ continue to grow.

And it doesn’t stop there. Deep learning is moving robots beyond mundane, repetitive tasks towards algorithms that rinse and repeat, but then learn and decipher too.

So what’s the human advantage?