Science Fiction Concepts – Artificial Intelligence and Ethics

Science fiction stories regularly delve into the idea of robots or computers controlling humans and their societies, and it's a fascinating concept to explore.

AI, and the prospect of an army of robotic helpers taking on mankind's dangerous or unwanted tasks, tends to polarise thinking into two camps: first, the idyllic vision of people free to live full lives while the robots do all the work, and second, the darker one where the robots take over and either enslave or destroy mankind.

This brings AI squarely into the ethical firing line: firstly, should we have robots running around doing our dirty work, and secondly, how will ethics apply to the machines themselves? To help with this, Isaac Asimov came up with the Three Laws of Robotics back in 1942, in his short story "Runaround", later collected in I, Robot:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Even though these laws were thought up over 80 years ago, they still play a key part in modern robotics, and they also raise some interesting problems. The first thing to say about them is that they are not like the laws of physics, which can't be broken; they have to be applied, or programmed, into the machine, so the term 'laws' is not strictly accurate.
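The point that the laws only exist if someone writes them into the machine can be made concrete with a toy sketch. This is purely illustrative (all the names and flags here are hypothetical, not from any real robotics framework): the three laws become nothing more than ordered checks in a control loop, and removing or reordering the code removes the "laws".

```python
# A toy sketch: Asimov's "laws" as ordered, hand-programmed checks.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # would this action injure a human?
    inaction_harm: bool = False    # would *not* acting let a human come to harm?
    ordered_by_human: bool = False # was this action ordered by a human?
    risks_robot: bool = False      # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction, so an action that
    # prevents such harm is not just permitted but required.
    if action.inaction_harm:
        return True
    # Second Law: obey human orders (the First Law was checked first,
    # so a harmful order has already been refused above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only if nothing above applies.
    return not action.risks_robot
```

Note that a harmful order fails the First Law check before the Second Law check is ever reached: that ordering of the `if` statements is the entire hierarchy, and it exists only because the programmer put it there.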

The problems these laws create centre mainly on the Second Law: a human gives a robot an instruction that contradicts the First Law. The effects of this were seen in 2001: A Space Odyssey, when HAL was given orders that conflicted with his higher function not to lie, and in Forbidden Planet, when the robot Robby was ordered to kill the monster. In both cases things didn't end well for humans or robots. That's science fiction, but what about real life?

Robotic surgery is becoming more and more available these days. At the moment the machines are human-controlled, which is itself a grey area of robotics, but remember that this is exactly where AI will be applied. Think forward to the possibility of an AI controlling the robot: how does the AI differentiate between harm and surgery?

The other problem is driverless cars. If the car follows these laws, then in theory if a human steps out in front of it, it will stop and, if physics permits, not harm them. But what if someone steps out intending to harm the human passenger? A human driver would take steps to ensure their own survival and drive on, around or even over the attacker, but a robot can't, or shouldn't, do that.
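The driverless-car bind can be sketched in a few lines. Again this is a hypothetical toy, not any real vehicle controller: a hard First-Law filter runs before any preference for the passenger, so the option a human driver might take is never even considered.

```python
# Hypothetical car controller: a First-Law veto runs before anything else.

def first_law_allows(option: dict) -> bool:
    # Forbid any manoeuvre that injures a human, whoever they are,
    # including someone who stepped out intending harm.
    return not option["injures_human"]

options = [
    {"name": "stop",     "injures_human": False, "protects_passenger": False},
    {"name": "drive_on", "injures_human": True,  "protects_passenger": True},
]

# Filter by the First Law first, then prefer passenger safety
# only among the options that survive the filter.
legal = [o for o in options if first_law_allows(o)]
best = max(legal, key=lambda o: o["protects_passenger"])
print(best["name"])  # → stop: the car halts even though the passenger is at risk
```

The ordering is the whole point: because the veto is applied before the passenger-safety preference, `drive_on` is filtered out no matter how much it would protect the passenger, which is exactly the behaviour a threatened human driver would refuse to accept.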

So while these laws sound great in theory, they assume that people are nice to each other. Which we know they are not.

As for the ethics of whether we should allow more robots to be created to do our dirty work: rightly or wrongly it's happening, and has been for some time; the world we know today would not be what it is without robots. Most of these are not intelligent, though, but automated helpers. Robots with true AI are still very rare, but their complexity and capability are growing…


As long as Asimov's laws are applied, we should be safe and shouldn't have to protect ourselves from rogue HALs running around or Skynet trying to destroy us. It's safe to say, though, that we will mess up; we are, of course, only human!

© Simon Farnell 2013 – 2023


16 thoughts on "Science Fiction Concepts – Artificial Intelligence and Ethics"

  1. enjoyed this post … as an Asimov fan it’s interesting to know his laws are still being used. As a human I am stunned … robots can now create art or music, do counselling, give legal advice … automatic assembly of cars or dismantling of bombs was one thing but that they can replace humans in creativity and counselling … not sure I wish to go there …


    1. I never knew you were an Asimov fan… his laws (I use that term loosely) are still very much seen as the definitive laws that a robot or AI must operate to – as for the other things…. I think the term creative is strong but I get where you’re at. An AI can only create within the bounds of which it’s programmed. For example, where I work someone made an AI that would help you create a Christmas scene, but it was set to optimise a snowy house scene.


  2. I’m in the pessimist camp still because anything that can go wrong will go wrong, AND the vast majority of humanity’s suffering over the ages has been caused by hubris of one variety or another. AI and genetic manipulation are both racing ahead with little if any regard for the ideas of morals or potential consequences. The justification for science is science…


    1. I hear you, and it has to be said it does seem like advances are being made without regard to ethics – this, if I'm honest, is more where big corporates pick up an idea and see $$$.

      Can it go wrong? The thing here is to define wrong correctly. Looking up the Norman AI as you suggested, it was psychopathic because it was programmed to be – demonstrating, as I suggested, that Asimov's laws cannot be applied unless they are programmed into the machine.
      This I won’t argue could be dangerous.


      1. You did a great job in showing the hole in how I described my argument or objections here. 🙂
        For the record, IN THEORY, I think AI could be a great thing. My HUGE concern though is the lack of ethics in the field. MIT rationalizes Norman as a study in human behavior. A machine isn’t human though. It’s truly “we’re doing this because we can, and to show how smart we (think we) are”. That and wanton corporate greed make AI and Genetics VERY VERY dangerous fields of study right now.


      2. I think your arguments are very valid and I’m sometimes good at finding holes – but not all the time lol.
        What does shine from what you've said here is that the tool is not the problem but how it's used. Just as it would be if it were a hammer.
        That’s the key, human ethics.

