We are currently standing on the precipice of the fourth industrial revolution, one set to re-shape the way we live and work on a global scale. As with the first industrial revolution, we know roughly that the change is being driven by technology, but we lack any concrete knowledge of how great it will be or just how dramatically it will disrupt the world we live in.
The technologies driving the upcoming revolution are artificial intelligence and robotics: machines which think and act as humans would, and which have been the territory of sci-fi for generations. Just as steam power, electricity and ultimately computers replaced human labour for mechanical and often mathematical tasks, AI looks set to supplant human thinking and creativity in a way which many find unsettling. If the first industrial revolution was too much for the ‘luddites’ doing their best to stamp out mechanical progress, the reaction to AI and robotics is likely to be even stronger. There are several clear reasons I can see that may drive people away from AI:
- Fear of redundancy: the first reason echoes that of the first industrial revolution. People don’t want technology to do what they do, because if a machine can do it faster, better and stronger than they can, what will be left for them to do?
- Fear of the singularity: this one is like our fear of nuclear bombs and fusion. There’s an intrinsic fear people hold, entrenched in stories like Pandora’s Box, that certain things should not be investigated. The singularity of AI is the point at which a computer achieves sentience, and though we’re some way off that (without any idea of how we’d get there), the perceived intelligence of a machine can still be very unnerving.
- The uncanny valley: the valley is the point where machines start to become more human-like, appearing very close to, but not exactly like, a human in the way they look or interact. If you’re still wondering what it is, I’d recommend watching these Singing Androids.
Just like we’ve seen throughout history, there is resistance to this revolution. But if history is anything to go by, while it’s likely to be a bumpy road, the rewards will be huge. Although it’s the back office, nuts and bolts which are driving change behind the scenes, it’s the front end where we interact with it that’s being re-thought to maximize potential and minimize resistance. What we’re seeing are interfaces designed to appear dumb, or mask their computational brains to make us feel more comfortable, and that’s where the eponymous title of this blog comes in.
“The Brave Little Toaster” is a book from 1980, or – if you’re lazy like me – a film from 1987, ‘set in a world where household appliances and other electronics come to life, pretending to be lifeless in the presence of humans’. Whilst the film focuses on the appliances’ adventure to find their way back to their owner, what I’d like to focus on is how they hide their intelligence whenever humans come into sight – and this is what we’re beginning to see industry follow.
Journalism is a career typically viewed as creative and the product of human thought, but did you know that a fairly significant chunk of the news you read isn’t written by a person at all? For years now, weather reports from the BBC have been written by machines using Natural Language Generation algorithms to take data and turn it into words, which can even be tailored to suit different audiences with simple configuration changes. Earlier this month The Washington Post also announced that their writing on the Rio Olympics would be carried out by robots. From a consumer standpoint it’s unlikely that we’ll notice the stories have been written by machines, and if we don’t even notice, it shouldn’t be creepy to us at all. Internally, rather than seeing it as a way to replace reporters, it’s being seen as an opportunity to ‘free them up’, just like the industrial revolution before it, which freed people from repetitive manual tasks for more thought-based ones.
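To make the idea concrete, here’s a minimal sketch of template-based data-to-text generation with audience configuration. The templates, data fields and audience profiles are invented for illustration; the systems behind the BBC’s reports are far more sophisticated than this.

```python
# Minimal template-based Natural Language Generation: structured data
# goes in, a readable sentence comes out, with the phrasing switched
# by a simple audience setting.

def describe_weather(data: dict, audience: str = "general") -> str:
    templates = {
        "general": "Expect {sky} skies in {city} with highs of {high}°C.",
        "casual": "{city} looks {sky} today – around {high}°C, so plan accordingly!",
    }
    return templates[audience].format(**data)

report = {"city": "Leeds", "sky": "overcast", "high": 14}
print(describe_weather(report))             # formal, general-audience phrasing
print(describe_weather(report, "casual"))   # same data, different tone
```

The same structured data produces differently voiced copy purely through a configuration change, which is the trick that makes the machine authorship invisible to readers.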
Platforms like IBM’s Watson begin to add a two-way flow to this, with both natural language generation and recognition: a person can ask a question just as they would of another person, and the machine understands their phrasing and replies in turn without ever hinting that it’s an AI. When things become too complicated, the AI asks a person to take over, and from there the conversation is controlled by them, with no obvious transition.
A gradual approach to intelligence and automated systems is also being adopted by some businesses. Tesla’s Autopilot can be seen as an example of this, continuing a story which began with ABS (anti-lock braking) over a decade ago and has evolved in recent years into a car which, in some instances, can drive itself. In its current state, Autopilot is a combination of existing technologies like adaptive cruise control, automatic steering on a motorway and collision avoidance, but combining these with the huge amount of data the cars generate has allowed the system to learn routes and handling, carefully navigating tight turns and traffic (albeit with an alert driver ready to take over at all times!). Having seen this progression, it’s easy to imagine a time not too far from the present day when human drivers are no longer needed: a system that learns, generates data and continually improves itself, just as a human does as they learn to drive, only without the road rage, fatigue or human error.
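One of those existing technologies, adaptive cruise control, can be caricatured as a simple proportional controller that adjusts speed to hold a safe gap behind the car in front. The gains, gaps and speed cap below are invented numbers for illustration; a real vehicle controller is vastly more complex and safety-engineered.

```python
# Toy adaptive cruise control: slow down when the gap to the car ahead
# shrinks below the target, ease back up (within a cap) when it opens out.

def cruise_speed(own_speed: float, gap_m: float,
                 target_gap_m: float = 40.0, gain: float = 0.5) -> float:
    """Return an adjusted speed in m/s, proportional to the gap error."""
    adjustment = gain * (gap_m - target_gap_m) / target_gap_m
    return max(0.0, min(own_speed * (1 + adjustment), 31.0))  # cap ~112 km/h

print(cruise_speed(28.0, gap_m=20.0))  # gap too small -> slow down
print(cruise_speed(28.0, gap_m=60.0))  # gap comfortable -> speed up, capped
```

Each of Autopilot’s ingredients is individually mundane like this; the step change comes from combining them and feeding the fleet’s driving data back into the system.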
The future as I see it is massively augmented and improved by artificial intelligence and advanced automation. Only, it’ll be designed so that we don’t see it, where the boundary between human and machine input is perceivable only if you know exactly where to look.
What do you think? Leave a reply below, or contact me by email.
Augmentation, AI and automation are just some of the topics researched by Aurora, Sopra Steria’s horizon scanning team.