Human and Computer Partnership: Challenging Misconceptions around AI

Do you remember that scene in The Matrix when Neo awakens in his liquid-filled pod to the realisation that he has been used as a power source for machines his entire life? Or the final scene in Ex Machina, when Ava escapes to the outside world, leaving Caleb trapped in the facility after having manipulated him all along? These movies don’t exactly portray machines in a good light. Think of the whole premise of the Terminator movies: human equals good, machine equals bad. It’s a familiar pattern in films that probably has a lot to answer for in giving AI bad press. When the average person hears the phrase “Artificial Intelligence”, they automatically think of evil, murderous robots who look just like us but have no capacity for empathy.


Not exactly the sort of human-computer partnership we believe in. “Terminator 3” by insomniacuredhere is licensed under CC BY-SA 2.0.

Empathy is perceived to be the defining trait of humanity. And so, because AI technology is unable to empathise, it is seen as something to be afraid of. The portrayal of VIKI in I, Robot is a perfect example of the fear of something that lacks empathy. VIKI concluded that humans had embarked on a path to their own extinction and so she decided to ensure the survival of the human race by stripping individuals of their free will. Without a capacity for empathy, or genuine emotion, VIKI cannot see that this is a bad thing to do; she is simply obeying The First Law of Robotics, which states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. It is only Sonny, with his capacity for emotion and ability to understand non-verbal communication, like the wink, who redeems himself as a good robot.

The horrific dystopian futures portrayed in these movies don’t exactly give us any confidence in a future where AI becomes more of a presence in our lives. As a result, many people worry about where such technology will lead. There are two common anxieties when it comes to AI: firstly, that AI will take over the world in an evil-overlord kind of way; secondly, and slightly more realistically, that AI will take over our jobs, leaving us surplus to requirements in many industries.

The ‘Artificial Intelligence’ Hype

Along with movies, the hype around AI technology that we currently see in the media, perpetuated by marketing campaigns, is responsible for people’s fears that AI will one day make them functionally redundant. For many companies, the attention generated by this hype is too appealing to pass up, so they reach for the ‘artificial intelligence’ buzzword. The number of businesses adding ‘.ai’ to their URL is on the increase, showing that they are taking advantage of the hype. If we look at some of the language used by the biggest AI companies, it is easy to see where this hype comes from. For example, Google DeepMind describes AlphaGo as having ‘overturned hundreds of years of received wisdom’ about the game of Go, and claims that since the match it has continued to ‘surprise and amaze’. Likewise, Amazon’s Alexa promises to ‘voice control your world’. Such language is often hyperbolic, and in some cases it smacks of overpromising. This overpromising inflates people’s expectations about what AI can currently achieve, leading them to believe that AI will eventually become more intelligent than them and so take over their jobs.

We can consider this in terms of Gartner’s hype cycle for emerging technologies, which visually represents a peak of inflated expectations which then leads to a trough of disillusionment.


The Gartner Hype Cycle by Jeremy Kemp is licensed under CC-BY-SA 3.0

This is a problem because, as the hype dies down, people tend to lose interest. However, the potential of AI technology is considerable, so it would be a shame for interest in it to wane due to over-promising. AI is like a magic trick. When we watch a magician saw his glamorous assistant in half, we are left in shock and awe. We can give the feat no other explanation than magic. However, once we know the trick behind it (there’s simply a second woman, of whom we only see the legs), the magic is gone for us.

In the same way, once people without much experience with AI learn the truth, that it is simply a tool, all the mystery and magic behind it goes. But just because the magic goes once we understand AI doesn’t make it any less special or any less able to impact our lives. It is therefore important that we reassure people that AI isn’t about to replace them anytime soon, and generate some healthy attention for AI which is free from over-promising.


A ‘Teachable Software Tool’

We can do this, challenging these misconceptions and harnessing the potential of AI, by emphasising that the best machine learning happens when we bring a human into the loop. It is the partnership between human and machine that brings about the best results. This approach is called active learning: the algorithm is allowed to query a human user when it is unsure about something. You can read more about active learning in our blog post.
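The query-when-uncertain loop at the heart of active learning can be sketched in a few lines. This is a minimal illustration, not our actual implementation: it uses least-confidence sampling on a synthetic dataset, a plain logistic regression as the learner, and the known labels standing in for the human annotator. All names and numbers here are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic pool: the label depends only on the first feature (> 0.5),
# the second feature is irrelevant noise.
X = np.array([[i / 100, (i * 7 % 100) / 100] for i in range(100)])
y = (X[:, 0] > 0.5).astype(int)

# Seed the loop with one example of each class already labelled.
labelled = [0, 99]
unlabelled = [i for i in range(100) if i not in labelled]

model = LogisticRegression()
for _ in range(20):  # each round = one question put to the human
    model.fit(X[labelled], y[labelled])
    # Least-confidence sampling: how far is the top class
    # probability from certainty?
    proba = model.predict_proba(X[unlabelled])
    uncertainty = 1 - proba.max(axis=1)
    # Ask the "human" to label the example the model is least sure about.
    query = unlabelled.pop(int(uncertainty.argmax()))
    labelled.append(query)  # y[query] plays the role of the human's answer

accuracy = model.score(X, y)
```

Because the learner keeps asking about the examples it finds most confusing, the queries cluster around the decision boundary, which is exactly where a human label is worth the most. That is the sense in which the human and the machine are partners: the machine does the bulk labelling, the human supplies judgement only where it is needed.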



Kind of touchy-feely, but closer to our vision of AI. “Human + Machine” by Adelina Peltea is licensed under CC BY-NC 2.0

Right now, artificial intelligence alone isn’t all that intelligent. That is why it won’t be making us obsolete or taking over the world anytime soon. Artificial intelligence is simply a misnomer, or buzzword, for current intelligent systems. Let’s see what happens if we use the phrase ‘teachable software tool’ instead. AI can be taught basic things, like what kind of coffee you like, or exactly the right temperature you prefer for your shower. But it could also be taught things that have a greater impact, such as making predictions on traffic to advise local authorities on urban planning. You could also teach an AI tool to recognise skin diseases better than a dermatologist, saving money and time for the patient.

Though ‘teachable software tool’ definitely doesn’t sound as cool as ‘artificial intelligence’, this phrase more accurately conveys the purpose of this technology. The word ‘teachable’ implies that human intelligence is needed to train the machine to perform at an optimum level. The fact that it is a ‘tool’ highlights that it has a specific use to improve our lives, for example, detecting diseases faster with greater accuracy, and it is adaptable to us and our environment. We’ll be using the machine and not the other way around. By simply renaming the AI product, we can remove the negative connotations of ‘artificial intelligence’ which we associate with movies like Terminator. The key element here is that AI needs us to teach it what to do; the best intelligence is still human, and AI relies on that intelligence to function and learn.


So, involving a human in the loop tends to adjust people’s expectations to achievable results; they are made aware that AI is a great tool but don’t expect it to solve world hunger or put humans on Mars. Instead, AI is designed as a means to make people’s lives a little bit easier. The results of our Tata Steel project, for example, demonstrate how AI is a useful tool to optimise our workplaces. We used active learning to train the software to become better and better at spotting defects in the surface of steel the more examples we gave it. This drastically improved their quality inspection process, but it didn’t put anyone at Tata out of a job. Instead, Tata’s employees are working together with the machine learning software in order to inspect the steel in the most efficient way, increasing the quality of the inspection and the number of minor flaws found.

Tata Steel surface inspection.

When we emphasise that machine learning is most successful when humans and computers work in partnership, we can ease people’s worries and misconceptions when it comes to artificial intelligence. By moving away from over-promising AI as autonomous and self-sufficient, and highlighting instead that it is a tool to be trained and used by us, we can show the potential of AI without misrepresenting what is currently realisable with AI technology. AI won’t take over the world or take over our jobs because, for the foreseeable future, human intelligence will remain the best kind of intelligence.