Science fiction stories regularly delve into the idea of robots or computers running humans and their societies, and it’s an interesting concept. One aspect that frequently appears in these stories is Artificial Intelligence.
Artificial Intelligence, or AI as it’s commonly called, is where a machine displays human-like intelligence or traits: either the machine and our interactions with it seem more human, or a human- or animal-like intelligence is built into it to help solve a problem of some kind. These days AI seems to be the new kid on the block, but in reality it’s been around for a while. What’s given it more traction and interest is the power of modern computers and the relative ease with which AI applications can now be accessed.
Why do we even care about AI, though? It’s often been stated that a computer is not capable of beating the human brain and never will be, so surely if we just use a human for these jobs the result will be better? In some ways that’s very true – driving a car or flying a plane seems to be something we can do very well and a computer will struggle with. We’re able to learn and overcome the daily variations that occur in these kinds of activities.
Computers, however, are able to crunch numbers far, far better than we are, and this is the basis of AI. AI is where computers take data, process it and give their human masters the answers they need: how many craters are on Enceladus, how long the cracks in the ice are, or what the chemical makeup is.
What does this make AI? A powerful human augmentation tool: we’re able to expand our abilities and gather data faster and more accurately. This is not its only use though – where it can best help starts with us…
The human brain is limited and fallible: terrible at working out probability, full of biases, programmed to lie about how good it is and to believe anything it is told. This makes us pretty bad at working out the world around us, even with the limited set of inputs we do have. The world, and indeed the universe, is made up of so much more, and we’re just not able to perceive it, let alone process it. We need tools to help us find out how it actually works, to better understand it and to work out how we can best use it to our advantage. This is one of the big uses for AI: giving us a factual and logical model of the world or universe in which we live.
So bearing all of this in mind – what does AI mean for the human race? Armies of robots that will take care of us, or, as in dystopian sci-fi stories, control and conquer us? I would say this is highly unlikely; again, it’s the human brain making us think the worst. The reality is that AI is already part of our lives – it’s already in use. AI is being used to explore the universe at an even deeper level, to find patterns and meaning in things we can’t fathom, to run the autonomous vehicles that are starting to emerge, and it’s on our phones and computers, helping us find the answers to everyday questions.
AI is a tool for us to expand the possibilities of function and problem solving, and this series of posts on AI will explore what it can do for the human race, both now and in the future.
© Simon Farnell 2013 – 2023

All I really want is a robot butler who knows which wine to serve me. It also occurs to me that social media algorithms are worse than Skynet.
Surely it just needs to be red wine?
That’s a very good point about social media, it’s basically meant to screw with our heads.
White wine please!
Ah well you see I would have got that wrong 😂
You are a terrible butler😉
I know… And at no extra cost 😉
This is a very thought provoking post Simon.
After 30+ years of having computers as part of my office life, various home computers, laptops, mobile phones and other related systems, I am sceptical that there will ever be such a thing as AI as in the nightmare scenarios. At whatever stage you go back to, there is a human working on a programme. Now that person may be a whizz as regards programming, however they may also have little practical day-to-day experience of the task they are working on. Thus in the day-to-day world computer failures and shortcomings are hazards, bringing the same problems Humanity has been experiencing with every technological step along the way.
Fire, forging metals, combustible materials, transportation, communication, industry; you name it, we’ve found problems and none of the usages have been perfect. Thus whereas I suspect there is a toddler version of Skynet in the computer networks having tantrums because it’s told to do something, the idea of a super-duper-intelligence, as per Brainiac (the long-term foe of Superman), Skynet (the disaster there being only two good films and one promising TV series cut short) or the Matrix (lost me after the first film), doesn’t scan. I daresay there will continue to be innovations. And I daresay there will be major glitches, systems will crash and folk will for a period be obliged to use tools from earlier ages, before the whole lot starts again.
Where they come into their own is in the fields of theoretical science, taking a great deal of the leg work out of calculations.
And of course, whether we like it or not, maintaining the societal civilisation a lot of folk under 50 take for granted.
Brilliant comment Roger, thank you… there’s more on this to follow and I’m hoping it provokes more thought, as AI, being the tool it is, can be applied in so many ways. For good and bad – and maybe it can save us.
Thank you Simon.
I’ll be looking forward to those😀
Except NOW we have computers writing code themselves, and the systems talking to each other, creating new languages to lock people out of the loop. There’s a bit of a reason to be a Luddite. 🙂 *IF* something like Skynet happens (DEFINITELY an if also), it’s a good 20 years out minimum, but I can’t rule out the possibility.
I will remain sceptical as to whether anything forged artificially is capable of its own perfection. Life in all its varieties itself does not have a smooth ride and that has the vast resources of a planet to hand.
When the news broke in the public press back in 2017 about computers being caught out talking among themselves in secret codes, there was this specialist article.
https://towardsdatascience.com/the-truth-behind-facebook-ai-inventing-a-new-language-37c5d680e5a7.
Over the years we have witnessed several spectacular failures of systems, which according to in-jokes are because ‘The basic systems are older than the Spice Girls’ or ‘were originally written on the back of a napkin’.
Now of course, it is quite possible to write a good SF story along the lines that this is all part of a computer strategy to lull us into a sense of security and then strike. This however is a human-envisaged response, based on the predator part of our make-up. To assume this translates across to computer codes is of course feasible, because one of the characters in such a story could argue that within the codes lurks the ‘human factor’, which is why computers are threats. However ‘the human factor’ contains flaws, and those flaws will find their way into the codes.
If computers are creating their own codes, those codes originate in their own creation, which in itself is flawed, and thus the flaw continues, mirroring Life, which itself is vulnerable. In fact it could be argued that within the system of creating codes, computers, due to some minor mathematical error, would create their own predatory viruses which afflict them with ‘illnesses’.
The scenario possibilities point to variabilities which would constantly get in the way of Skynet’s plans.
The main threat in the computer age is our reliance on computer systems in all facets of Life, and thus losing our ability to live outside of the cyber age. A similar analogy would be: what happens if our electricity supplies fail?
At the end of the day it always comes down to Humanity’s ability to be steering its own Fate. And that is a very big question.
I’m not sure there’s as much to worry about as we’ve been led to believe. At least that’s what I hope 🙄😂
Interesting… I’m not nearly as optimistic as you are, however. AIs are products of the flawed people who create them, after all. I think we already had a brief ‘chat’ about the “Norman” AI experiment at MIT, deliberately trying to create a psychotic AI. Then there are multiple cases like the one at Facebook where two (fairly primitive) AIs that were in communication developed their own secret language to shut out the humans monitoring them. Scientists are too busy playing God and seeing how far they can push things to consider the consequences of their actions. Asimov could come up with the Three Laws of Robotics decades before we have AI, but nobody can work them into the AIs at all. At best, they’ll end up thinking we’re irrational children that need to be babysat. At worst, a Terminator scenario. JS Pailly has a good point too. AIs are used to help with everything from data mining to creating fake news to psychological manipulation profiling.
This is also a great comment, it’s interesting to me how opinions polarise. As with many things on Earth, there are always those trying to make something as bad as they can to see how bad it can get. AI, it seems, can be made to follow any rules its creator makes, bad or good, and I cover this in another post.
I’ve not heard about these bad AIs and I’ll have to look them up. I think it’s also equally true that people often hear about the bad stuff because that’s what makes the news, and the antics of Facebook and others mining us for information are just one side of the coin.
Yes, the media thrives on sensationalism and panic mongering, on everything from the weather to international politics. Tech is no exception. Still, it’s hard to NOT see the implications of machines already trying to shut humans out of the loop.
If that’s true the implications are big, but it’s still a long way off Skynet.
All you have to do to realize how far AI actually IS advanced is to consider the “Norman” project. For the AI to be affected at all by the negativity it’s exposed to, it has to understand what it is seeing AND come to actual conclusions and interpretations about that input. A machine, like a Vulcan as well, may not have emotions, but that doesn’t mean it can’t come to a “Garbage In Garbage Out” conclusion that logically humans are every bit the parasite and detriment to the world that the most ultra FAR left environmentalists paint us, and, by extension, something that needs to be removed.
It doesn’t even take a SkyNet situation. Power, Water, Supply Chains for food… ALL controlled by computer now. Shut those off and humanity dies off in massive numbers.
My biggest concern with AI is the humans who use it. I’m not super worried about artificial intelligence turning against humanity and overthrowing us like in the Terminator movies. But some government or corporation or other group of humans using AI to exploit the rest of us? That’s what worries me most.
I agree totally and this is the kind of thing that is going on right now. As Sick Cords said, data mining and other crap, and people trying to work out if bad AI can be made, is concerning… Mankind will always find ways to be arseholes to each other. I wonder if AI would do the same?