Science fiction stories regularly explore the idea of robots or computers running humans and their societies, and it's an interesting concept.
AI, and the prospect of an army of robotic helpers taking on mankind's dangerous or unwanted tasks, tends to polarise thinking into two groups: the idyllic vision of people free to live full lives while the robots do all the work, and the darker one in which the robots take over and either enslave or destroy mankind.
This brings AI into the ethical firing line: first, should we have robots running around doing our dirty work, and second, how will ethics apply to the machines themselves? To help with this, back in 1942 Isaac Asimov came up with the Three Laws of Robotics, introduced in his short story "Runaround" and later collected in I, Robot:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Even though these laws were thought up decades ago, they play a key part in modern robotics – and also raise some interesting problems. The first thing to say about them is that they are not like the laws of physics, which can't be broken: they have to be applied, or programmed into, the machine, so the term 'laws' is not strictly accurate.
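To make that point concrete, here is a toy sketch of what "programming the laws in" might look like – a priority-ordered check, not a real robotics system. The names `Action` and `permitted` are invented for illustration:

```python
# Toy illustration: Asimov's "laws" only exist if someone programs them in.
# These names are invented for the sketch, not any real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would carrying this out injure a human?
    ordered_by_human: bool = False  # was this instructed by a human?
    endangers_self: bool = False    # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law dominates: refuse anything that would harm a human,
    # even if a human ordered it.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already First-Law-safe at this point).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that endanger the robot itself.
    return not action.endangers_self

# A harmful order is refused outright, no matter who gave it:
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Notice how the surgery problem below shows up immediately in a naive check like this: from the machine's point of view an incision sets `harms_human`, so a life-saving operation would be refused.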
The problems these laws create centre mainly on the Second Law – a human gives a robot an instruction that contradicts the First Law. The effects of this were seen in 2001: A Space Odyssey, when HAL was given orders that conflicted with his higher function not to lie, and in Forbidden Planet, when Robby the Robot was ordered to kill the monster. In both cases things didn't end well for humans or robots. That's science fiction, but what about real life?
Robotic surgery is becoming more and more available these days. At the moment the machines are human-controlled, which is itself a grey area of robotics, but remember that this is where AI will be applied. Think forward to the possibility of an AI controlling the robot: how does the AI differentiate between harm and surgery?
The other problem concerns driverless cars. If the car follows these laws, then in theory when a human steps out in front of it, it will stop and, if physics permits, not harm them. But what if someone steps out intending to harm the human passenger? A human driver would take steps to ensure their own survival and drive on, around, or even over the attacker – but a robot can't, or shouldn't, do that.
So while these laws sound great in theory, they assume that people are nice to each other. Which we know they are not.
As for the ethics of whether we should allow more robots to be created to do our dirty work: rightly or wrongly it's happening, and has been for some time. The world we know today would not be what it is without robots. Most of these are not intelligent, though – they are automatic helpers. Robots with true AI are still very rare, but their complexity and capability are growing…

As long as Asimov's laws are applied we should be safe, and shouldn't have to protect ourselves from rogue HALs running around or Skynet trying to destroy us. It's safe to say, though, that we will mess up – we are, of course, only human!
© Simon Farnell 2013 – 2023

enjoyed this post … as an Asimov fan it’s interesting to know his laws are still being used. As a human I am stunned … robots can now create art or music, do counselling, give legal advice … automatic assembly of cars or dismantling of bombs was one thing but that they can replace humans in creativity and counselling … not sure I wish to go there …
I never knew you were an Asimov fan… his laws (I use that term loosely) are still very much seen as the definitive laws that a robot or AI must operate to. As for the other things… I think the term creative is strong, but I get where you're at. An AI can only create within the bounds of what it's programmed to do. For example, where I work someone made an AI that would help you create a Christmas scene, but it was set to optimise a snowy house scene.
and snow is not appropriate for our boiling hot summers .. we are finally getting cards with koalas and surfing santas .. 🙂
That’s good to know, it’s very Aussie festive 🙂
and the 6 white boomers – large kangaroos – pulling santa’s sled 🙂
That’s cool .. in the way only Aussies can be 😂
some of us are cool Si, some are frozen and others melted beyond all recognition 🙂
That’s an interesting thought… 🙂
I’m in the pessimist camp still because anything that can go wrong will go wrong, AND the vast majority of humanity’s suffering over the ages has been caused by hubris of one variety or another. AI and genetic manipulation are both racing ahead with little if any regard for the ideas of morals or potential consequences. The justification for science is science…
I hear you, and it has to be said it does seem like advances are being made without regard to ethics. This, if I'm honest, is more where big corporates pick up an idea and see $$$.
Can it go wrong? The thing here is to define 'wrong' correctly. Looking up the Norman AI as you suggested: it was psychopathic because it was programmed to be – demonstrating, as I suggested, that Asimov's laws cannot be applied unless they are programmed into the machine.
This I won’t argue could be dangerous.
You did a great job of showing the hole in how I described my argument, or objections, here. 🙂
For the record, IN THEORY, I think AI could be a great thing. My HUGE concern though is the lack of ethics in the field. MIT rationalizes Norman as a study in human behavior. A machine isn’t human though. It’s truly “we’re doing this because we can, and to show how smart we (think we) are”. That and wanton corporate greed make AI and Genetics VERY VERY dangerous fields of study right now.
I think your arguments are very valid and I’m sometimes good at finding holes – but not all the time lol.
What does shine through from what you've said here is that the tool is not the problem, but how it's used. Just as it would be if it were a hammer.
That’s the key, human ethics.
I really need to read I, Robot again. It’s been too long. The way Asimov introduces his laws and then puts them to the test is so brilliant!
Laws have to be tested…