I’ve been reading up recently on advances in robot machinery and developments aimed at improving durability, which you can read about here if you’re interested.
However, this led me to ponder just how ‘human’ robots will become and, with that, how much responsibility they will need to bear.
My thought process went a bit like this…
1. There is a very real prospect of intelligent machines being welcomed as replacements for humans in military operations, as the use of drones in conflict areas continues to rise.
2. Currently, these drones are operated by humans, and the operator is accountable for any fatality caused, since the drone is classed as a weapon in this scenario, even though operation happens remotely. Days, sometimes weeks, of planning are allowed before an exercise is executed. Arguably this is different from facing your opponent in combat, as a soldier on the ground must, but the PTSD suffered has been reported to be similar.
3. The automation of these combat drones, or Unmanned Aerial Vehicles (UAVs), is on the rise, and we’re talking months rather than years before full automation is reached. That will likely be a relief to drone operators, but with civilians being caught in the crossfire of drone use, who will be responsible for the actions of a fully automated drone attack?
Then I came across an article by a researcher in robot ethics who asked whether advanced robots should be held accountable for their actions…
Separately, I was reading an article about ‘man becoming the weakest link’ when it comes to processing multiple scenarios at once to reach the highest probability of success. This is exactly what AI is being widely adopted for, in order to solve problems more rapidly around the world, from cancer research to dietary requirements.
4. Put these advancements together, in drone technology, durable machinery and intelligence-based analysis, and on one hand you have a military weapon with precise insight and no risk to a human life, yet on the other you have somewhere else to point the finger.
5. This is just one example. When you think about intelligent machines being trained as surgeons, autopilots in planes, trains and cars becoming more sophisticated, service layers being automated, and all the things we talked about in our earlier posts, it seems robots will soon be able to do almost anything.
Who takes the blame when they get it wrong?
Is it them that gets it wrong?
Can a robot be taught right from wrong?