Emotional Blackmail


After your last post, Drew, I have indeed been scratching my head, and it's taken me a while to work out how best to respond. There are so many dark ways to go from here, and I want to bring it back a bit…

Two points in your post stood out to me. The first is around AI making the decision, which is explicit throughout. The second is less obvious, as it comes towards the end and you might almost miss its pertinence; it's this bit:

‘But now I’m speaking “tech”, something robots understand. Humans, not so much. And somewhere in that process, humans will create the logic that a robot uses to “make” that decision.’

This last point is one of the most discussed among anyone worried about AI 'taking over': the point at which humans effectively hand over the controls by allowing AI to become more intelligent than us. In reality we're a way off from this, but it will happen eventually.

The first computer claimed to have passed the Turing test was Eugene Goostman in 2014, and it sparked a lot of debate about the ability to hold a conversation and what defines this. Fast forward to today and the conversation you can have with AI is much more sophisticated. Consider the shorts between IBM's Watson and a host of guests; I quite like the one with Ridley Scott. In addition to clarity of structure, speed of response and total relevancy, you see humour and wit coming through from Watson as well. Watson is making decisions in real time to inform an interesting conversation.

So this brings me to my thought provocation, which is around the line between logic decision making and conscious decision making. The first is easy to comprehend, as it's essentially a series of true-v-false (or 1-v-0, if you're into code) logic decisions based on a probability factor. The second takes in additional parameters like feelings and emotions, which are more relevant to humans; if you've ever made an irrational 'in the heat of the moment' decision, it's likely because you didn't follow the logic of true v false to get there.
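To make the first kind concrete, here's a minimal sketch of a decision reduced to a true-v-false check on a probability factor (the scenario, names and threshold are my own illustration):

```python
def logic_decision(p_rain: float, threshold: float = 0.5) -> bool:
    """A pure logic decision: true v false, driven only by a probability factor."""
    return p_rain >= threshold  # 1 if the probability clears the bar, else 0

# The outcome follows deterministically from the inputs -- there is no
# "heat of the moment" anywhere in the chain.
print(logic_decision(0.8))  # True: take the umbrella
print(logic_decision(0.2))  # False: leave it at home
```

The same inputs always yield the same output, which is exactly what separates this from the irrational, emotional decision described above.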

Consider neuromorphic chips for a moment. They have been designed to replicate the way a human brain works, drawing parallels with neurological processes through artificial neural networks. Essentially, this means AI is being programmed to 'feel'. Advances in AI may well see us side by side with intelligent and sentient robots in our lifetime, so I find it fascinating to look at the differences (and similarities) in the way we humans and AI robots behave based on senses, perceptions and 'feelings'.
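The artificial-neuron idea behind those networks can be sketched in a few lines. This is a toy illustration of the concept only, my own simplification and not how neuromorphic hardware is actually built:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs pushed through a
    sigmoid 'activation', loosely mirroring a biological neuron firing."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # firing strength between 0 and 1

# Strongly positive weights push the neuron towards firing (close to 1);
# strongly negative ones suppress it (close to 0).
print(neuron([1.0, 1.0], [2.0, 2.0], 0.0))
print(neuron([1.0, 1.0], [-2.0, -2.0], 0.0))
```

Wire many of these together, let the weights adjust with experience, and you have the basic parallel to neurological processes that neuromorphic designs draw on.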

If AI is programmed to recall a feeling, is that different to me remembering a feeling? If I recall the joy I felt from a particular moment, or the pain, or the excitement, is that so different from AI recalling the same feeling from a program? Whether that feeling is held in a chip or a brain, it exists. So what's the difference between a feeling imagined, programmed or remembered?

If consciousness is defined as being aware of one's surroundings, sensations, thoughts and existence, and AI is being developed to understand all of these things, then are we now speaking 'feelings', something humans understand but robots not so much? Or has that line become even more blurred?








Published by


I'm an agent of change, an explorer of territory unclaimed, I love to experiment, if something hasn't been done then I love to find a way of doing it. New and shiny things excite me.
