Journey of BB8 (Part 1)

We have all dreamt of flying, duelling with a lightsabre, or controlling objects with our minds. I was lucky enough to make one of those dreams come true when DigiLab UK went on an exploratory journey into brain-computer interfaces (BCI). I recruited one fellow dreamer, a UX designer, to join me, the software engineer, and we began looking at different aspects of BCI. The initial task we chose was to control an object with our minds and, along the way, learn more about the technology. I was staring at my desk wondering which object to control, and there was my answer staring back at me: BB8. Whether by fate or the Force, we knew what we had to do. We would control BB8 using a BCI device, the Emotiv EPOC+, which was available to us and had previously been used for a hackathon project in Norway. In this two-part blog series I will take you through my journey of building this prototype, in the hope of helping others who are starting to explore BCI technology.

Setup

The Emotiv EPOC+ headset comes with 14 electrodes. Setup of the device is easy but tedious, as you are required to soak the electrodes in saline solution each time before screwing them onto the device. This is needed to get good connectivity between the electrodes and the user’s scalp. For people with more hair it is naturally harder to get good connectivity, as they must adjust their hair to make sure there is nothing between the electrodes and the scalp. For some users, connectivity levels were sufficient with dry electrodes, but I recommend always soaking the electrodes before using the device, as you are more likely to get good connectivity quickly. There are many videos available online that guide you through the initial setup of the device.

[Image: the electrodes need to be screwed onto the device]

[Image: the Emotiv EPOC+ with fourteen electrodes and the EEG head device]

Training mental commands

I aimed to control BB8 with the EPOC+ headset, so I started to investigate mental commands and their various functionalities. To use mental commands you first need to train them. The training process enables the EPOC+ to analyse the individual’s brainwaves and develop a personalised signature corresponding to each mental action.

Emotiv Xavier

The Emotiv Xavier control panel is an application that configures and demonstrates the Emotiv detection suites. It provides the user with an interface to train mental commands, view facial expressions, performance metrics and raw data, and to upload data to their Emotiv account. The user has the option to sign in to their account or use the application as a guest.

The user is required to create a training profile, and there is the option to have multiple training profiles under one Emotiv account. Each user needs their own profile, as each of us possesses unique brainwaves.

Let’s train the commands

The first mental command, or action, the user must record is their “neutral” state. The neutral state is like a baseline, or passive, mental command. While recording this state it is advisable to remain relaxed, as when you are reading or watching TV. If the neutral state has not been recorded correctly, the user will not be able to get any other mental commands working properly. For some users, recording the neutral state again results in better detection of the other mental commands.

The “record neutral” button allows the user to record up to 30 seconds of neutral training data. The recording finishes automatically after 30 seconds, but the user can stop it at any time once they feel enough data has been collected. At least 6 seconds of recorded data is required to update the signature.

After recording the neutral state, the user can start to train any of the 13 available actions. For my research I focused on just two mental actions, “push” and “pull”. The Emotiv website provides tips and instructions on how to train mental commands, and it suggests keeping your thoughts consistent while training. To perform a mental action, users must replicate the exact thought process or mental state they had during training. For example, if a user wants to train the “push” command, it is up to them what to think or visualise for that action. Some users might imagine a cube moving away from them, others a cube shrinking; whatever works for them, but they need to remain consistent in their thoughts and mental state. If the user is distracted even for a second, it is advisable to retrain the action. As the user learns to produce a distinct and reproducible mental state for each action, detection of these actions becomes more precise. Usually, users must train an action several times before getting accurate results.

While training the “push” action, I placed BB8 on a white table and imagined it moving away from me. By replicating the same thought, imagining BB8 rolling away from me across the table, I was able to perform the mental action. However, when I placed BB8 on the carpet, I failed. This may have been because the different colour of the carpet distracted me and I was unable to replicate my exact mental state, and therefore failed to perform the mental action. For me, the environment needed to be the same to reproduce my specific mental state; however, this varies from user to user.

Emotiv Xavier gives the option to view an animated 3D cube on screen while training an action. Some users find it easier to maintain the necessary focus and consistency if the cube is animated to perform the intended action as a visualisation aid during training; the user can then see themselves performing the action through the cube. The cube remains stationary unless the user is performing one of the (already trained) mental actions, or unless the user selects the “Animate model according to training action” checkbox for training purposes. It is advisable to train one action fully before moving on to the next, as it gets harder and harder to train as you add more mental actions.
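To give a flavour of what comes after training, here is a minimal sketch of how a trained command could eventually drive BB8 from code. It assumes Emotiv’s Cortex service (the local JSON-RPC-over-WebSocket API that accompanies newer versions of the Emotiv software, rather than Xavier itself); the client credentials and headset id are placeholders, the access-request and headset-discovery steps are omitted, and drive_bb8 is a hypothetical stand-in for a Sphero SDK call. It is an illustration of the flow, not the exact code behind our prototype.

```python
import json
from websocket import create_connection  # pip install websocket-client

CORTEX_URL = "wss://localhost:6868"  # the local Emotiv Cortex service


def call(ws, method, params, req_id):
    """Send one JSON-RPC request to Cortex and return the parsed reply."""
    ws.send(json.dumps({"jsonrpc": "2.0", "id": req_id,
                        "method": method, "params": params}))
    return json.loads(ws.recv())


def drive_bb8(action, power):
    # Hypothetical stand-in for a Sphero SDK call (e.g. roll forward/back).
    print(f"BB8 <- {action} at power {power:.2f}")


ws = create_connection(CORTEX_URL, sslopt={"cert_reqs": 0})  # self-signed cert

# Authorise with the client id/secret of an app registered with Emotiv.
token = call(ws, "authorize",
             {"clientId": "YOUR_ID", "clientSecret": "YOUR_SECRET"},
             1)["result"]["cortexToken"]

# Open a session on the headset and subscribe to the mental-command stream.
session = call(ws, "createSession",
               {"cortexToken": token, "headset": "EPOCPLUS-XXXX",
                "status": "active"}, 2)["result"]["id"]
call(ws, "subscribe",
     {"cortexToken": token, "session": session, "streams": ["com"]}, 3)

# Map trained commands onto robot movement, ignoring weak detections.
while True:
    event = json.loads(ws.recv())
    if "com" in event:
        action, power = event["com"]  # e.g. ["push", 0.63]
        if action in ("push", "pull") and power > 0.5:
            drive_bb8(action, power)
```

The power value is the headset’s confidence in the detection, which is why a threshold matters: a poorly trained command fires weakly and erratically, and you want BB8 to move only on deliberate, well-reproduced mental states.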

Is the training process easy?

There are lots of tips and plenty of guidance on the Emotiv website for training mental commands, and users are given an interface to help them train and perform mental actions with the aid of animated 3D or 2D models. However, during my three days of training I was not able to find an easy and generic way to train the mental commands. People are different. Some are more focused than others. Some like to close their eyes to visualise and perform the command. Some want the help of the animation. What I observed is that it depends on the person: how focused they are, and how readily they can replicate a state of mind. There is no straightforward equation; you need time and patience. I was only able to achieve a 15% skill rating after training two mental actions. Only one of my colleagues got a 70% skill rating, and he wasn’t able to reproduce it later.

NeuroFeedback

While searching for simpler ways to train mental commands I came across a process known as neurofeedback. Neurofeedback is a procedure in which you observe your brain activity in order to understand and train your brain: the user sees what their brain is actually doing compared with what they want it to be doing. The user monitors their brainwaves, and if they are nearing the desired mental state they are rewarded with a positive response, which can be music, video or advancing in a game. Neurofeedback is used to help reduce stress and anxiety, to aid sleep, and for other forms of therapeutic assistance.

Neurofeedback is a great way to train your brain for mental commands. For example, if someone is trying to perform the “push” command, they can observe their brain activity on screen and see whether it is consistent, then slowly and steadily train their brain to replicate a specific state. Emotiv provides the “Emotiv 3D Brain Activity Map” and “Emotiv Brain Activity Map”, paid applications that can be used to monitor, visualise and adjust brainwaves in real time. For our research we didn’t try these applications. If you try them out, let us know how you got on!
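To make the idea concrete, here is a minimal neurofeedback-style loop: a sketch of the general technique, not of Emotiv’s applications. It computes the alpha-band power of an EEG window and emits a “reward” whenever it crosses a threshold. The 128 Hz sampling rate matches the EPOC+ spec, but the threshold and window length are arbitrary assumptions that would be tuned per user.

```python
import numpy as np
from scipy.signal import welch  # pip install scipy

FS = 128         # EPOC+ samples at 128 Hz
THRESHOLD = 0.5  # arbitrary reward threshold; tuned per user in practice


def alpha_power(window):
    """Mean power in the 8-12 Hz (alpha) band of a 1-D EEG window."""
    freqs, psd = welch(window, fs=FS, nperseg=len(window))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()


def feedback_loop(stream):
    """`stream` yields ~2-second windows of samples from one electrode."""
    for window in stream:
        if alpha_power(window) > THRESHOLD:
            print("reward: play a tone / advance the game")  # positive feedback
        else:
            print("keep relaxing...")


# Demo on synthetic data: pure noise vs. noise plus a 10 Hz "alpha" rhythm.
t = np.arange(2 * FS) / FS
noise = np.random.randn(len(t))
relaxed = noise + 3 * np.sin(2 * np.pi * 10 * t)
feedback_loop([noise, relaxed])
```

The reward signal is the whole trick: by closing the loop between what the brain is doing and what the user perceives, the user gradually learns to reproduce the rewarded state on demand.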

Training is like developing a new skill. Remember how you learned to ride a bike, or how you learned to drive? It took time and practice, and it’s the same for training mental commands. Companies do help by providing tips, instructions and software applications so users can train and visualise, but in the end it is acquiring a new skill, and users need practice. Some might learn faster than others, but for everyone it takes time.


Don’t fear the RPA!

The Digital Revolution is upon us and the reality is that it will bring change we simply cannot afford to ignore.

Humankind has constantly striven to find new and better ways of living and working. The industrial revolution introduced new ways of working to a society that had relied on physical labour alone, and the results – cheaper goods, improved transportation, safer factories, better working conditions and evolved communications – set the tone for a period of continuous improvement. Throughout the 19th and 20th centuries the pace of change increased; developments in cars, fuels, heating, atomic power, plastics and synthetics have improved countless lives, and this drive to constantly enhance and improve has continued.


When the manufacturing industry adopted automation 20 years ago it was seen as truly revolutionary, bringing new, more efficient ways of working. Doomsayers warned of jobs being lost but, in fact, quality increased and competition flourished. Outsourcing was another big change, but each time the market quickly adapted, leading to a service-oriented industry that has since generated millions of brand-new jobs. What was once seen as truly innovative is soon seen as commonplace and ‘business as usual’.

Today, the seismic change is Digital. It’s remarkable to consider that it’s only 10 years since the smartphone was invented – but since then, Facebook, Twitter, Instagram and LinkedIn have emerged, and Amazon, Uber, mobile banking and even online gaming have become daily realities of life. The whole way we live, work and socialise is undergoing truly transformational change, and the pace of that change is most definitely speeding up rather than slowing down. The reassuring element, however, is that each time change comes, the new way doesn’t dominate – instead it augments and enhances the previous approach, introducing totally new ways of thinking.

So what’s the next big ‘game changer’? Robotic Process Automation (RPA) is the ‘new’ hot topic of the Digital era, offering huge advantages to business and society alike. For business leaders, RPA delivers a more efficient, streamlined and cost-effective business operation; for individuals, it offers the opportunity for more interesting, fulfilling and less repetitive jobs. RPA empowers business leaders to automate manual tasks and simple ‘rules based’ activities, freeing up staff to undertake more interesting and challenging activities – a true win-win!

Curiously, despite the rise of digitally enabled and automated application processes, many organisational activities in banks and investment companies are still manually driven. For example, across the Credit Risk lifecycle, manual data entry and manual data processing remain surprisingly prevalent at certain stages of the decisioning process. In addition, many Retail and almost all wholesale credit applications are manually underwritten. Using RPA, a virtual workforce can augment processes across the Credit Risk lifecycle to deliver increased quality, improved accuracy and greater consistency 24/7 – reducing the risk of non-compliance and delivering a more responsive customer experience. A sketch of the kind of ‘rules based’ triage a software robot might perform follows below.
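Here is that hypothetical sketch: simple, auditable rules a software robot could apply as a first pass before routing an application to a human underwriter. The fields and thresholds are invented for illustration, not taken from any real lending policy.

```python
def triage_application(app):
    """Apply simple, auditable rules; escalate anything non-trivial."""
    if app["monthly_debt"] > 0.4 * app["monthly_income"]:
        return "decline", "debt-to-income ratio above 40%"
    if app["credit_score"] >= 720 and app["amount"] <= 10_000:
        return "approve", "low risk: strong score, small amount"
    return "refer", "needs manual underwriting"


decision, reason = triage_application(
    {"monthly_income": 3_000, "monthly_debt": 600,
     "credit_score": 735, "amount": 8_000}
)
print(decision, "-", reason)  # approve - low risk: strong score, small amount
```

The point is the third branch: automation handles the clear-cut cases consistently, around the clock, while anything genuinely ambiguous is referred to a person.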

So don’t fear the digital revolution: now is the time to jump on board and embrace it. Find out how we do it at Sopra Steria.

AI Empowered retail roles: the new competitive advantage?

A Retailer can potentially use Artificial Intelligence (AI) to empower its people to analyse, transact and, crucially, sell faster and smarter to customers than its competitors. So, what might these jobs look like? Here are some ideas…

“Fixers” – Retailers are always looking to optimise their supply chain costs while improving the customer experience. A key pain point is last mile logistics – the need to offer increasingly timely, flexible delivery of goods to individual customers while maintaining the right economies of scale on distribution to achieve margin. A Fixer – possibly a third-party platform service provider – bids for and delivers instant solutions to solve these daily challenges. Their unique ability to use AI to continually optimise delivery routes and facilitate the sharing of local stock between Retailers (often competitors) to satisfy customer demand 24/7 places them at the heart of the Retail Sector in 2020.

“Instore Experience Trainers” – AI doesn’t innovate by itself; this advantage comes from people teaching or training it to deliver delightful and compelling customer experiences on any channel. An Instore Experience Trainer is someone who spends their working day testing different AI-driven experiences from different Sectors, then uses this emotional insight to teach an Artificial Intelligence capability new ways to better engage customers instore – rapid human innovation, scaled to differentiate thousands of individual customer interactions with a specific Retailer.

“AI Scanners” – As Artificial Intelligence grows, so too does the opportunity for competitors to use it to analyse a Retailer’s offerings for strengths and weaknesses. An AI Scanner monitors daily how customers are engaging with a Retailer’s Artificial Intelligence, identifying such behaviour and its source to enable a proactive response that protects market competitiveness.

If you would like more information about how artificial intelligence can benefit your retail business, leave a reply below or contact me by email.

The rise of the Intelligent Machine

So it’s Tuesday evening and I’m watching the BBC 10 O’clock news. An article is being aired about the impact that technology-driven automation is going to have on the labour market, suggesting that by 2035 some 35% of the total UK employment market may be at risk of displacement. This is a pretty sobering statement, and it gives rise to philosophical debate around the impact this will have – not just on those members of the workforce affected, but also on our education system and the nature of employment opportunity in the advent of the automation revolution. Should we be teaching our children differently, right now, to prepare them for this? How do we second-guess which jobs are likely to become obsolete, and so help our children to focus their energies in areas less likely to be impacted? Are we in danger, as some have prophesied, of creating an unemployable underclass?

Only time will tell, and it’s human nature to want to predict the worst case scenario, but quite often the reverse scenario is the more likely outcome.

Historically speaking, advances in technology, robotics and automation have not resulted in a commensurate rise in unemployment; they have actually increased employment.

Deloitte conducted a study on this subject using census data going back to 1871 and found that, whilst some jobs have certainly been made largely redundant by technology, the labour market has responded by switching to roles in the care, service and education sectors. Knowledge-based industries in particular have benefitted from the ubiquitous availability of data and the increasing ease of communication. People are generally wealthier as the costs of goods and services have dropped, which, rather amusingly, has seen a 1,000% rise in bar staff (so we now know where all of our extra cash is going).

But this new wave of technology, the rise of artificial intelligence and intelligent machines, will likely have as material an impact on knowledge-based industries as robotics and technology-assisted machinery has had on manual-labour-based ones. Companies such as IBM are spearheading this movement with technologies such as Watson: cognitive computing platforms that are able to ‘think’ in human-like ways. They can reason, understand context, and use previous experience to make future predictions and inform decision making. They are capable of conversing in natural language and, when used in conjunction with big data repositories, are able to present insight that would otherwise be impossible to achieve using conventional computational systems. Perhaps more importantly, when used in conjunction with process automation engines, they are able to execute tasks. Process automation is not a new technology – we’ve been achieving it, to varying degrees of complexity, for many years now. What cognitive technologies bring to the table, however, is the ability to deal with decisions. Theoretically, a cognitive system can execute complex processes that, under normal circumstances, would be wholly reliant on human interaction to complete, due to the inherent necessity to think, to reason, and to bring knowledge into the equation. The future potential of such technologies is only now starting to be truly understood.

If, like me, you have an overactive imagination, you may be imagining a cognitive system like IBM’s Watson to be some kind of huge supercomputer with flashing lights, akin to the WOPR in the seminal 1983 classic film WarGames. Indeed, the WOPR was capable of natural language processing (it could talk), it could ‘learn’ through trial and error (albeit via circa 1,000 games of tic-tac-toe) and it was capable of making informed decisions based on access to a wide range of data (Russian nuclear missile launch trajectories). But the reality is that Watson is highly scalable and not nearly so resource hungry. When it won the US TV show Jeopardy! in 2011, beating two of the show’s most prolific and successful contestants in the process, it did so on a massively parallel computing cluster of 90 IBM POWER 750 servers. Since then, IBM has refined the code for enterprise use such that it can now run on a single server platform, or directly via the Cloud. The Watson algorithms are being embedded in multiple enterprise applications, tuned for different use cases, and are already being adopted in major banking and healthcare applications, to name but a few.

Other companies are also now offering enterprise solutions with cognitive capabilities behind them, and one area garnering quite a bit of interest of late is the Virtual Digital Assistant, also commonly known as an (intelligent) chatbot. If you’ve ever used a customer service chat box online, you may be familiar with the concept of a ‘bot’ that can ask certain pre-canned questions or relay information before handing you off to a human operator. Bots are also often used in web chat applications for things like providing help on how to use the service itself.

Historically, bots have been pretty dumb. They possess no innate intelligence and simply work from a script; go off-script, and the bot will not understand the question.

Chatbots that use cognitive algorithms, on the other hand, possess two unique and potentially game-changing characteristics. Firstly, they can converse using natural language, so the experience is a very close approximation of conversing with a real human. Secondly, they can go off-script: they can interpret questions or instructions and combine stored knowledge with probabilistic algorithms to provide a response that is highly likely to be appropriate, and possibly even useful! Such systems need to learn over time, and can even be trained, so their true potential is not unlocked immediately. Their potential, however, is huge, and the use cases are many.
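As a toy illustration of the probabilistic part (a minimal sketch, nothing like Watson’s actual machinery): score the user’s question against example phrasings for each known intent, answer the most likely one, and fall back to a human below a confidence threshold. The intents and phrasings here are invented.

```python
from difflib import SequenceMatcher

# Tiny "knowledge base": example phrasings and a canned answer per intent.
INTENTS = {
    "opening_hours": (["when are you open", "what time do you open"],
                      "We're open 9am-5pm, Monday to Friday."),
    "reset_password": (["i forgot my password", "how do i reset my password"],
                       "You can reset your password at example.com/reset."),
}


def best_intent(question):
    """Return (intent, confidence) for the closest example phrasing."""
    scores = {
        intent: max(SequenceMatcher(None, question.lower(), ex).ratio()
                    for ex in examples)
        for intent, (examples, _) in INTENTS.items()
    }
    intent = max(scores, key=scores.get)
    return intent, scores[intent]


def reply(question, threshold=0.6):
    intent, confidence = best_intent(question)
    if confidence < threshold:
        return "Let me hand you over to a human colleague."  # graceful fallback
    return INTENTS[intent][1]


print(reply("How do I reset my password?"))  # matches an off-script phrasing
print(reply("Do you sell sandwiches?"))      # low confidence -> human hand-off
```

Even this crude matcher shows the shape of the thing: a probability of being right, a best guess, and a dignified hand-off when the guess isn’t good enough.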

So what of the impact of such technologies? For the consumer, the likes of Amazon’s Alexa or Apple’s Siri will only become more capable and increasingly useful. Integration with home automation systems and access to consumer services are the obvious starting points. At present, the vast majority of service integration is limited to vendors’ entertainment and media services but, thinking outside of the box, consider the implications of using such technology to engage with other types of service providers. Want to pay a bill through your bank? Why not ask Alexa to do it for you? Need to register a complaint with your utility provider? Why not have Siri do it? Need to book a taxi? …Cortana?! As consumer service organisations begin to digitise their customer engagement channels, this kind of opportunity for integration opens up, paving the way for a new era in automated service fulfilment.

For the enterprise the impact is likely to be significantly more material. Efficiency gains made via labour arbitrage, for instance, will shift to those enabled by technology arbitrage, as automation driven by cognitive platforms drives the cost of service down and the quality of service up. The impact this will have on traditional delivery models could be both rapid and significant. Service providers using cheap labour to deliver cost-effective, knowledge-centric services will likely need to re-evaluate their models to remain competitive. Junior roles within organisations, many of which are traditional routes into the industry, will need to adapt to support these new technology capabilities, or else see themselves replaced by them. Commercial models too will need to adapt as customers move increasingly toward consumption- or outcome-based models, rather than those dictated by headcount or traditional performance-related targets. The opportunities are there in abundance for those providers – and consumers – who choose to embrace the technology. Indeed, in this particular case, the WOPR was way off target when it philosophically announced that “the only winning move is not to play”. Whilst that may be true of Global Thermonuclear War, it certainly isn’t true of intelligent computing platforms within the enterprise.

As for me, I’m off to play a nice game of chess…

What are your views? Leave a reply below or, if you would like to learn more about these topics, please contact me by email.

The power of NLP: when David becomes Goliath

“Perhaps the biggest threat and opportunity organisations face is Natural Language Processing (NLP), where increasingly smart robots simplify transactions for customers.”

Yet the user experience of such intelligent personal assistants can at times feel underwhelming, because they lack a sufficiently broad range of services compared with other digital channels. Facebook M, for example, relies upon human trainers to complete the more complex customer service tasks requested by users, and Alexa utilises ‘skills’ – tailored apps such as Spotify. None of them appears to offer the same level of complete user freedom as a traditional web browser accessing any available content.

“Any organisation, regardless of its size, that is able to master NLP can potentially compete in previously unreachable or unscalable markets.”

One way these robots could overcome such limitations is to “learn” how to use NLP to access any digital service through its front end, without the need for any technical integration or human touchpoints. All transactions could then be consumed, or simplified into one customer experience, accessed by a single AI.

The implication for competitive advantage is that potentially any organisation, regardless of its size, that can effectively master these “platform on platforms” cloud capabilities will be able to compete in previously unreachable or unscalable markets.

“In this “open season” competitive environment, NLP can enable an organisation to transform its relationship with an existing customer and steal new ones from competitors.”

One such service could be an AI that searches for and buys the best-priced goods from competitors through their own customer-facing channels (without their co-operation or collaboration), empowering a customer to create their own “perfect basket” free from the constraints of shopping with only one brand. These competitors would still get revenue from the purchases but, critically, won’t have direct access to the customer relationship or loyalty – NLP disrupts their competitive advantage by reducing their market power.

In this “open season” competitive environment, where switching costs are practically nil for customers, NLP can enable an organisation to radically transform its relationship with an existing customer and steal new ones from competitors – David becomes Goliath.

If you would like more information about how digital transformation can benefit your organisation please contact the Sopra Steria Digital Practice.

How deep learning is advancing AI in leaps and bounds

by Michel Sebag, Digital Practice, Sopra Steria France

Nature has given human beings an amazing ability to learn. We learn complex tasks like language and image recognition from birth, and continue throughout our lives to modify and build upon these first learning experiences. It seems natural, then, to take the concept of learning – building up knowledge, and being able to model and predict outcomes – and apply it to computer-related processes and tasks. The term used to describe the technologies involved in this computing paradigm is Artificial Intelligence (AI).

It’s just a game

In the late 1990s a defining moment happened in the world of artificial intelligence. In 1996, chess master Garry Kasparov played IBM’s Deep Blue, a parallel computer system built to play chess, and won 4-2. A year later, Kasparov and Deep Blue played another match – this time, Deep Blue won. This win created a sea change in attitudes towards AI. Chess masters’ minds have to perform highly complex calculations, evaluating multiple moves and strategies on the fly; they can also build on their own learning to apply novel moves. Being able to mimic this process, even when applied to a specific task like chess, opens up real potential for the technology.

Out of this success, new developments in AI have brought us to a point of real maturity and sophistication. DeepMind, now owned by Google, uses deep learning algorithms. These algorithms are based on the same idea that allows human beings to learn: neural pathways, or networks. Again, AI has been applied to gaming to prove a point. DeepMind has taken the idea of ‘human vs. machine’ and this time applied it to the highly complex game of Go. DeepMind, the company, describes Go as having “more possible positions than there are atoms in the universe” – the perfect challenge, then, for an AI technology. DeepMind uses deep learning algorithms to train its system against known plays by expert players. The resultant system, known as AlphaGo, has a 99.8% win rate when pitted against other Go programs, and recently won 4 out of 5 games against the Go professional Lee Sedol.

It may seem that it’s just a game being played but, in fact, this is proving the technology, showing it can learn how to model and predict outcomes in much the same way a human being does. In the almost 20 years since Deep Blue, AI has moved perhaps 10 years ahead of what was anticipated of the technology. The games have proven the capability, and the technology is now entering a stage of maturity where it is being applied to more real-world problem solving. Following the AlphaGo success, Google understood the benefits of these technologies and promptly integrated them into its cloud-based Google Machine Learning Platform.

Some definitions in the world of Artificial Intelligence

At this juncture, it is worth looking at some of the terminology and definitions of AI technology.

It can be viewed like this: Deep Learning is a sub-set of Machine Learning; Machine Learning is a sub-set of Artificial Intelligence.

Artificial Intelligence: This is a general term for technology built to demonstrate a level of intelligence similar to a human being’s when solving a problem. It may or may not use biological constructs as the underlying basis for its intelligent operations. Artificial Intelligence systems are typically trained, and learn from this training.

Machine Learning: In the games we used as examples earlier, the machine learning is trained using player moves. In learning the moves and strategies of players, the system builds up knowledge in the same way a human being would. Machine-learning-based systems can use very large datasets as training input, which they then use to predict outcomes, and they can use both classical and non-classical algorithms. One of the most valuable aspects of machine learning is the ability to adapt: adaptive learning gives better accuracy of predictions. This, in turn, facilitates the handling of all possibilities and combinations to provide the optimal outcome from the incoming data. In the case of game playing, this results in more wins for the machine.

Deep Learning: This is a sub-set of machine learning – one type of implementation of it. The topology of the network is vital: when learning, it’s not so much about being ‘big’ as about depth. More complex problems are solved by larger numbers of neurons and layers. The network is trained using known questions and answers to a given problem, creating a feedback loop. Training results in weighted outputs, each weight being passed to the next neuron along to determine that neuron’s output – in this way, the network builds up a more accurate outcome based on probabilities.
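To make the weights passed between neurons and the training feedback loop concrete, here is a minimal two-layer network in plain numpy, trained by backpropagation on XOR – a problem a single layer cannot solve. It is a sketch of the principle only; real deep learning stacks many more layers and vastly more data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR needs a hidden layer

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: each layer's weighted output feeds the next layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error is the feedback that adjusts the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```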

Real world applications of AI

We’ve seen the use of AI in gaming, but what about real-world commercial applications? Wherever there is a need to predict, forecast, recognise or cluster, AI is being used in a multitude of processes and systems.

At Sopra Steria, for example, we use AI components in industry solutions, including banking and energy. We are integrating Natural Language Processing (NLP) and voice recognition capabilities from our partners’ solutions, such as IBM Watson and Microsoft Cortana. NLP and voice recognition – and, in the near future, image recognition – are now widely used and integrated in a multitude of applications. In the banking industry, for example, text and voice recognition are used in qualification assistants for helpdesk and customer care services. More generally, some of the best-known modern applications include everyday use in our smartphones. Voice and personal assistant technologies like Siri and Google Now brought AI into the mainstream and out of the lab, using AI and predictive analytics to answer our questions and plan our days. Siri now has a more sophisticated successor named Viv. Viv is based on self-learning algorithms, and its topology is much deeper than Siri’s more linear pathways; it is opening up major opportunities for developers by creating an AI platform that can be called upon for a multitude of tasks. Google recently announced a similar path, with its widely acclaimed assistant Google Now becoming Google Assistant.

Machine Learning is also used in many back-end processes, such as the scoring required to grant things like bank loans and mortgages. In banking specifically, machine learning is used to personalise products, giving the banks that use it a competitive edge.

Deep learning is being used in more complex tasks, ones where rules are fuzzier and more complex. The era of big data is providing the tools that are driving the use cases for deep learning. We can see applications of deep learning in anything related to pattern recognition, such as facial recognition systems, voice assistance and behavioral analysis for fraud prevention.

Artificial Intelligence is entering a new era with the help of more sophisticated and improved algorithms. AI is the next disruptive technology – many of Gartner’s technology predictions for 2016 and beyond were based on AI and machine learning. Artificial Intelligence holds the keys to those seemingly unsolvable issues, the ones we thought only human beings could tackle. Ultimately, even the writing of this article may one day be done by a machine.

What are your thoughts? Leave a reply below, or contact us by email.

The Brave Little Toaster

We are currently standing on the precipice of the fourth industrial revolution, which is set to re-think the way we live and work on a global scale. As with the first industrial revolution, what we know, roughly, is that change is being driven by technology; what we lack is any concrete knowledge of how great the change will be or just how dramatically it will disrupt the world we live in.

The technologies driving the upcoming revolution are artificial intelligence and robotics: technologies that think and act as humans would, and which have been the territory of sci-fi for generations. Just as steam power, electricity and ultimately computers have replaced human labour for mechanical and often mathematical tasks, AI looks set to supplant human thinking and creativity in a way many find unsettling. If the first industrial revolution was too much for the ‘luddites’ doing their best to stamp out mechanical progress, the reaction to AI and robotics is going to be stronger still. There are several clear reasons I can perceive that may drive people away from AI:

  • Fear of redundancy: the first reason replicates that of the first industrial revolution. People don’t want technology to do what they do, because if a machine can do it faster, better and stronger than they can, then what will they do?
  • Fear of the singularity: this one is like our fear of nuclear bombs and fusion. There’s an intrinsic fear people hold, entrenched in stories like Pandora’s Box, that certain things should not be investigated. The singularity of AI is the point at which a computer achieves sentience and, though we’re some way off that (with no real idea of how we’d get there), the perceived intelligence of a machine can still be very unnerving.
  • The uncanny valley: the valley is the point where machines start to become more human-like, appearing very close to, but not exactly like, a human in the way they look or interact. If you’re still wondering what that means, I’d recommend watching these Singing Androids.

Just as we’ve seen throughout history, there is resistance to this revolution. But if history is anything to go by, while it’s likely to be a bumpy road, the rewards will be huge. Although it’s the back-office nuts and bolts driving change behind the scenes, it’s the front end, where we interact with it, that’s being re-thought to maximise potential and minimise resistance. What we’re seeing are interfaces designed to appear dumb, or to mask their computational brains, to make us feel more comfortable – and that’s where the eponymous title of this blog comes in.

“The Brave Little Toaster” is a book from 1980 or – if you’re lazy like me – a film from about 8 years later, ‘set in a world where household appliances and other electronics come to life, pretending to be lifeless in the presence of humans’. Whilst the film focused on the appliances’ adventure to find their way back to their owner, what I’d like to focus on is how they hide their intelligence when they come into sight – and this is what we’re beginning to see industry follow.

Journalism is a career typically viewed as creative, the product of human thought, but did you know that a fairly significant chunk of the news you read isn’t written by a person at all? For years now, weather reports from the BBC have been written by machines using Natural Language Generation (NLG) algorithms to take data and turn it into words, which can even be tailored to suit different audiences with simple configuration changes. Earlier this month, The Washington Post also announced that its writing on the Rio Olympics would be carried out by robots. From a consumer standpoint it’s unlikely we’ll notice that the stories have been written by machines, and if we don’t even notice, it shouldn’t feel creepy to us at all. Internally, rather than being seen as a way to replace reporters, it’s seen as an opportunity to ‘free them up’, just like the industrial revolution before it, which freed people from repetitive manual tasks for more thought-based ones.
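At its simplest, NLG of this kind is data-to-text templating plus a few rules. A minimal sketch of the idea, with invented data and wording (nothing to do with the BBC’s actual system):

```python
def weather_report(city, temp_c, rain_mm, audience="general"):
    """Turn structured weather data into a readable sentence."""
    sky = "a wet day" if rain_mm > 1 else "a dry day"
    report = f"{city} can expect {sky} with highs of {temp_c} degrees."
    if audience == "farming":  # tailoring via a simple configuration change
        report += f" Expected rainfall: {rain_mm} mm."
    return report


print(weather_report("Leeds", 14, 3.2))
print(weather_report("Leeds", 14, 3.2, audience="farming"))
```

Production systems add grammar handling, varied phrasing and statistical choices on top, but the core move is the same: structured data in, tailored prose out.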

Platforms like IBM’s Watson begin to add a two-way flow to this, with both natural language generation and recognition, so that a person can ask a question just as they would of another person, with the machine understanding their phrasing and replying in turn without ever hinting that it’s an AI. At the stage when things become too complicated, the AI asks for a person to take action, and from there on the conversation is controlled by the human, with no obvious transition.

A gradual approach to intelligence and automated systems is also being adopted by some businesses. Tesla’s Autopilot can be seen as an example of this, continuing a story which began with ABS (anti-lock braking) decades ago and has developed in recent years into a car which, in some instances, can drive itself. In its current state, Autopilot is a combination of existing technologies like adaptive cruise control, automatic steering on a motorway and collision avoidance, but combining these with the huge amount of data the cars generate has allowed the system to learn routes and handling, carefully navigating tight turns and traffic (albeit with an alert driver ready to take over at all times!). Having seen this progression, it’s easy to imagine a time not too far from the present day when human drivers are no longer needed, with a system that learns, generates data and continually improves itself just as a human would while learning to drive – only without the road rage, fatigue or human error.

The future as I see it is massively augmented and improved by artificial intelligence and advanced automation.  Only, it’ll be designed so that we don’t see it, where the boundary between human and machine input is perceivable only if you know exactly where to look.

What do you think? Leave a reply below, or contact me by email.

Augmentation, AI and automation are just some of the topics researched by Aurora, Sopra Steria’s horizon scanning team.