AI-empowered retail roles: the new competitive advantage?

A Retailer can potentially use Artificial Intelligence (AI) to empower its people to analyse, transact and, crucially, sell faster and smarter to customers than its competitors. So, what might these jobs look like? Here are some ideas…

“Fixers” – Retailers are always looking to optimise their supply chain costs while improving the customer experience. A key pain point is last-mile logistics – the need to offer increasingly timely, flexible delivery of goods to individual customers while maintaining the right economies of scale in distribution to achieve margin. A Fixer – possibly a third-party platform service provider – bids for and delivers instant solutions to these daily challenges. Their unique ability to use AI to continually optimise delivery routes and to facilitate the sharing of local stock between Retailers (often competitors) to satisfy customer demand 24/7 places them at the heart of the Retail Sector in 2020.

“Instore Experience Trainers” – AI doesn’t innovate by itself; this advantage comes from people teaching or training it to deliver delightful and compelling customer experiences on any channel. An Instore Experience Trainer spends their working day testing different AI-driven experiences from different Sectors, then uses this emotional insight to teach an Artificial Intelligence capability new ways to better engage customers instore – rapid human innovation scaled to differentiate thousands of individual customer interactions with a specific Retailer.

“AI Scanners” – As Artificial Intelligence grows, so too does the opportunity for competitors to use it to analyse a Retailer’s offerings for strengths and weaknesses. An AI Scanner monitors daily how customers are engaging with a Retailer’s Artificial Intelligence, identifying such behaviour and its source so the Retailer can respond proactively to protect its market competitiveness.

If you would like more information about how artificial intelligence can benefit your retail business, leave a reply below or contact me by email.

The rise of the Intelligent Machine

So it’s Tuesday evening and I’m watching the BBC 10 o’clock news. A report is being aired on the impact that technology-driven automation is going to have on the labour market, suggesting that by 2035, 35% of the total UK employment market may be at risk of displacement. This is a pretty sobering statement, and gives rise to philosophical debate around the impact this will have – not just on those members of the workforce affected, but also on our education system and the nature of employment opportunity in the advent of the automation revolution. Should we be teaching our children differently, right now, to prepare them for this? How do we second-guess those jobs that are likely to become obsolete, and thus help our children to focus their energies in areas less likely to be impacted? Are we in danger, as some have prophesied, of creating an unemployable underclass?

Only time will tell and, while it’s human nature to want to predict the worst-case scenario, quite often the reverse is the more likely outcome.

Historically speaking, advances in technology, robotics and automation have not resulted in a commensurate rise in unemployment numbers but have actually increased employment.

Deloitte conducted a study on this subject using census data going back to 1871 and found that, while some jobs have certainly been made largely redundant by technology, the labour market has responded by switching to roles in the care, service and education sectors. Knowledge-based industries in particular have benefitted from the ubiquitous availability of data and the increasing ease of communication. People are generally wealthier as the cost of goods and services has dropped which, rather amusingly, has seen a 1000% rise in bar staff (so we now know where all of our extra cash is going).

But this new wave of technology – the rise of artificial intelligence and intelligent machines – will likely have as material an impact on knowledge-based industries as robotics and technology-assisted machinery has had on manual-labour-based ones. Companies such as IBM are spearheading this movement with technologies such as Watson: cognitive computing platforms that are able to ‘think’ in human-like ways. They can reason, understand context, and use previous experience to make future predictions and inform decision making. They are capable of conversing in natural language and, when used in conjunction with big data repositories, are able to present insight that would otherwise be impossible to achieve using conventional computational systems. Perhaps more importantly, when used in conjunction with process automation engines, they are able to execute tasks. Process automation is not a new technology – we’ve been achieving this to varying degrees of complexity for many years now. What cognitive technologies bring to the table, however, is the ability to deal with decisions. Theoretically, a cognitive system can execute complex processes that would normally be wholly reliant on human interaction to complete, due to the inherent need to think, to reason, and to bring knowledge into the equation. The future potential for such technologies is only now starting to be truly understood.

If, like me, you have an overactive imagination, you may be picturing a cognitive system like IBM’s Watson as some kind of huge supercomputer with flashing lights, akin to the WOPR in the seminal 1983 classic film WarGames. Indeed, the WOPR was capable of natural language processing (it could talk), it could ‘learn’ through trial and error (albeit via circa 1,000 games of tic-tac-toe), and it was capable of making informed decisions based on access to a wide range of data (Russian nuclear missile launch trajectories). But the reality is that Watson is highly scalable and not nearly so resource hungry. When it won the US TV show Jeopardy! in 2011, beating two of the show’s most prolific and successful contestants in the process, it did so on a massively parallel computing cluster of 90 IBM Power 750 servers. Since then, IBM has refined the code for enterprise use such that it can now run on a single server platform, or directly via the Cloud. The Watson algorithms are being embedded in multiple enterprise applications, tuned for different use cases, and are already being adopted in major banking and healthcare applications, to name but a few.

Other companies are also now offering enterprise solutions that have cognitive capabilities behind them, and one area that is garnering quite a bit of interest of late is the Virtual Digital Assistant, also commonly known as an (intelligent) chatbot. If you’ve ever used a customer service chat box online, you may be familiar with the concept of a ‘bot’ that can ask certain pre-canned questions or relay information prior to handing you off to a human operator. Bots are also often used in web chat applications for things like providing help on how to use the service itself.

Historically, bots have been pretty dumb. They possess no innate intelligence and simply work from a script. Go off-script, and the bot simply will not understand the question.

Chatbots that use cognitive algorithms, on the other hand, possess two unique and potentially game-changing characteristics. Firstly, they can converse using natural language, so the experience is a very close approximation of conversing with a real human. Secondly, they can go off-script – they can interpret questions or instructions and combine stored knowledge with probabilistic algorithms to provide a response that is highly likely to be appropriate and possibly even useful! Such systems need to learn over time, and can even be trained, so their true potential is not unlocked immediately. That potential, however, is huge, and the use cases are many.
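
To make the “off-script” idea concrete, here is a deliberately minimal sketch in Python. A real cognitive platform uses far richer language models and training pipelines; here a simple word-overlap score stands in for the probabilistic matching, and the intents, phrasings, responses and confidence threshold are all invented for illustration.

```python
import re

# Invented intents and canned responses - a real system would learn these.
INTENTS = {
    "opening_hours": "what time are you open today opening hours store",
    "order_status": "where is my order delivery status track parcel",
    "returns": "how do i return or refund an item",
}

RESPONSES = {
    "opening_hours": "Our stores are open 9am to 8pm, Monday to Saturday.",
    "order_status": "I can check that for you - what is your order number?",
    "returns": "You can return any item within 30 days for a full refund.",
}

CONFIDENCE_THRESHOLD = 0.2  # below this, hand off to a human operator


def tokens(text):
    """Lower-case word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))


def score(question, intent_text):
    """Word overlap (Jaccard) between the question and an intent's vocabulary."""
    q, i = tokens(question), tokens(intent_text)
    return len(q & i) / len(q | i)


def reply(question):
    best = max(INTENTS, key=lambda name: score(question, INTENTS[name]))
    if score(question, INTENTS[best]) < CONFIDENCE_THRESHOLD:
        return "Let me pass you over to a colleague who can help with that."
    return RESPONSES[best]


print(reply("Where is my parcel?"))      # confidently matched: order_status
print(reply("Do you sell gift cards?"))  # off-script: graceful human handoff
```

The design point worth noting is the threshold: rather than bluffing, a well-behaved bot hands off gracefully the moment its best guess isn’t good enough.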

So what of the impact of such technologies? For the consumer, the likes of Amazon’s Alexa or Apple’s Siri will only become more capable and increasingly useful. Integration with home automation systems and access to consumer services are the obvious starting points. At present, the vast majority of service integration is limited to vendors’ own entertainment and media services but, thinking outside the box, consider the implications of using such technology to engage with other types of service provider. Want to pay a bill? Why not ask Alexa to do it for you? Need to register a complaint with your utility provider? Why not have Siri do it for you? Need to book a taxi? …Cortana?! As consumer service organisations begin to digitise their customer engagement channels, this kind of integration opportunity opens up, paving the way for a new era in automated service fulfilment.

For the enterprise, the impact is likely to be significantly more material. Efficiency gains made via labour arbitrage, for instance, will shift to those enabled by technology arbitrage, as automation driven by cognitive platforms drives the cost of service down and the quality of service up. The impact this will have on traditional delivery models could be both rapid and significant. Service providers using cheap labour to deliver cost-effective, knowledge-centric services will likely need to re-evaluate their models to remain competitive. Junior roles within organisations, many of which are traditional routes into the industry, will need to adapt to serve those areas that support these new technology capabilities, or else see themselves replaced by them. Commercial models too will need to adapt as customers move increasingly toward consumption- or outcome-based models, rather than those dictated by headcount or traditional performance-related targets. The opportunities are there in abundance for those providers – and consumers – who choose to embrace the technology. Indeed, in this particular case, the WOPR was way off target when it philosophically announced that “…the only winning move is not to play”. Whilst that may be true of Global Thermonuclear War, it certainly isn’t true of intelligent computing platforms within the enterprise.

As for me, I’m off to play a nice game of chess…

What are your views? Leave a reply below or, if you would like to learn more about these topics, please contact me by email.

The power of NLP: when David becomes Goliath

“Perhaps the biggest threat and opportunity organisations face is Natural Language Processing (NLP), where ever-smarter robots simplify transactions for customers.”

Yet the user experience of such intelligent personal assistants can at times feel underwhelming, because they lack a sufficiently broad range of services versus other digital channels. Facebook M, for example, relies upon human trainers to complete the more complex customer service tasks requested by users, and Alexa utilises ‘skills’ – tailored apps such as Spotify. None of them appears to offer the same complete user freedom as a traditional web browser, which can access any available content.

“Any organisation regardless of its size able to master NLP can potentially compete in previously unreachable or unscalable markets.”

One way these robots could overcome these limitations is to “learn” how to use NLP to access any digital service through its front-end, without the need for any technical integration or human touchpoints. All transactions could then be consumed and simplified into one customer experience, accessed through a single AI.

The implication for competitive advantage is that any organisation, regardless of its size, that can effectively master these “platform on platforms” cloud capabilities will potentially be able to compete in previously unreachable or unscalable markets.

“In this “open season” competitive environment, NLP can enable an organisation to transform its relationships with existing customers and steal new ones from competitors.”

One such service could be an AI that searches for and buys the best-priced goods from competitors through their own customer-facing channels (without their co-operation or collaboration), empowering a customer to create their own “perfect basket” free from the constraints of shopping with only one brand. These competitors would still earn revenue from these purchases but, critically, won’t have direct access to the customer relationship or loyalty – NLP disrupts their competitive advantage by reducing their market power.
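
As a toy illustration of the “perfect basket” idea, the sketch below assumes each competitor’s channel has already been reduced to a simple price lookup; in practice the AI would drive each retailer’s customer-facing front-end directly, as described above. The retailer names, products and prices are all invented.

```python
# Invented catalogue data standing in for live retailer channels.
RETAILER_PRICES = {
    "RetailerA": {"coffee": 3.50, "olive oil": 6.20, "pasta": 1.10},
    "RetailerB": {"coffee": 3.20, "olive oil": 6.80, "pasta": 0.95},
    "RetailerC": {"coffee": 3.90, "pasta": 1.05},  # no olive oil stocked
}


def perfect_basket(shopping_list):
    """Pick the cheapest retailer for each item, across all channels."""
    basket = {}
    for item in shopping_list:
        offers = {
            retailer: prices[item]
            for retailer, prices in RETAILER_PRICES.items()
            if item in prices
        }
        if offers:
            best = min(offers, key=offers.get)
            basket[item] = (best, offers[best])
    return basket


for item, (retailer, price) in perfect_basket(["coffee", "olive oil", "pasta"]).items():
    print(f"{item}: {retailer} at £{price:.2f}")
```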

In this “open season” competitive environment, where switching costs for customers are practically nil, NLP can enable an organisation to radically transform its relationships with existing customers and steal new ones from competitors – David becomes Goliath.

If you would like more information about how digital transformation can benefit your organisation please contact the Sopra Steria Digital Practice.

How deep learning is advancing AI in leaps and bounds

by Michel Sebag, Digital Practice, Sopra Steria France

Nature has given human beings an amazing ability to learn. We learn complex tasks, like language and image recognition, from birth, and continue throughout our lives to modify and build upon these first learning experiences. It seems natural, then, to take the concept of learning – building up knowledge, and being able to model and predict outcomes – and apply it to computer-related processes and tasks. The umbrella term for the technologies involved in this computing paradigm is Artificial Intelligence (AI).

It’s just a game

In the late 1990s, a defining moment in the world of artificial intelligence happened. In 1996, chess grandmaster Garry Kasparov played IBM’s Deep Blue, a parallel computer system built to play chess, and won 4-2. A year later, Kasparov and Deep Blue played another match – this time, Deep Blue won. This win created a sea change in attitudes towards the idea of AI. Chess masters’ minds have to perform highly complex calculations, evaluating multiple moves and strategies on the fly. They can also build on their own learning to play novel moves. Being able to mimic this process, even when applied to a specific task like chess, opens up real potential for the technology.

Out of this success, new developments in AI have brought us to a point of real maturity and sophistication. DeepMind, now owned by Google, uses deep learning algorithms. These algorithms are based on the same idea that allows human beings to learn: neural pathways, or networks. Again, AI has been applied to gaming to prove a point. DeepMind has taken the idea of ‘human vs. machine’ and this time applied it to the highly complex game of Go. DeepMind, the company, describes the game as having “more possible positions in Go than there are atoms in the universe” – the perfect challenge, then, for an AI technology. DeepMind uses deep learning algorithms to train its system against known plays by expert players. The resultant system, known as AlphaGo, has a 99.8% win rate when pitted against other Go programs, and recently won 4 out of 5 games against the professional Go player Lee Sedol.

It may seem that these are just games being played but, in fact, this is proving the technology, showing it can learn how to model and predict outcomes in much the same way a human being does. In the space of barely 20 years, AI has ended up roughly a decade ahead of what was anticipated for the technology. The games have proven the capability, and the technology is now entering a stage of maturity where it is being applied to real-world problem solving. Following the AlphaGo success, Google understood the benefits of these technologies and promptly integrated the underlying AlphaGo technology into its cloud-based Google Machine Learning Platform.

Some definitions in the world of Artificial Intelligence

At this juncture, it is worth looking at some of the terminology and definitions of AI technology.

The relationship can be viewed like this: Deep Learning is a subset of Machine Learning; Machine Learning is a subset of Artificial Intelligence.

Artificial Intelligence: This is a general term for technology built to demonstrate an intelligence level similar to a human being’s when solving a problem. It may or may not use biologically inspired constructs as the underlying basis for its intelligent operations. Artificial Intelligence systems are typically trained, and learn from that training.

Machine Learning: In the case of the games used as examples earlier, a machine learning system is trained using player moves. In learning the moves and strategies of players, the system builds up knowledge in the same way a human being would. Machine learning systems can use very large datasets as training input, which they then use to predict outcomes, and can employ both classical and non-classical algorithms. One of the most valuable aspects of machine learning is the ability to adapt: adaptive learning gives better prediction accuracy, which in turn helps the system handle all the possibilities and combinations in the incoming data to provide the optimal outcome. In the case of game playing, this results in more wins for the machine.

Deep Learning: This is a subset of machine learning – one way of implementing it. The topology of the system is vital: when learning, it’s not so much about how ‘big’ the network is as about its depth. More complex problems are solved by larger numbers of neurons and layers. The network is trained using known questions and answers to a given problem, creating a feedback loop. Training produces weighted outputs; each neuron’s weighted output is passed along to the neurons in the next layer to determine their outputs, and in this way the network builds up a more accurate, probability-based outcome.
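
A stripped-down sketch may help make this mechanism concrete. It shows a single forward pass through a tiny network: each neuron forms a weighted sum of its inputs, squashes it through an activation function, and passes the result to the next layer. The weights here are fixed and invented for illustration; in a real system, training (backpropagation) would adjust them through the feedback loop described above.

```python
import math

def sigmoid(x):
    """Activation function squashing a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums, then activation."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# A tiny 2-input -> 3-hidden -> 1-output network with illustrative weights.
hidden_w = [[0.5, -0.6], [0.1, 0.9], [-0.4, 0.3]]
hidden_b = [0.0, -0.1, 0.2]
output_w = [[1.2, -0.8, 0.5]]
output_b = [0.1]

x = [0.7, 0.2]                    # an input example
h = layer(x, hidden_w, hidden_b)  # hidden layer activations
y = layer(h, output_w, output_b)  # output: a probability-like score
print(y)
```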

Real world applications of AI

We’ve seen the use of AI in gaming, but what about real-world commercial applications? Wherever there is a need to predict, forecast, recognise or cluster, AI is being used in a multitude of processes and systems.

At Sopra Steria, for example, we use AI components in industry solutions, including banking and energy. We are integrating Natural Language Processing (NLP) and voice recognition capabilities from our partners’ solutions, such as IBM Watson and Microsoft Cortana. NLP and voice recognition – and, in the near future, image recognition – are now widely used and integrated into a multitude of applications. In the banking industry, for example, text and voice recognition are used in qualification assistants for helpdesk and customer care services. More generally, some of the best-known modern applications include everyday use in our smartphones. Voice and personal assistant technologies like Siri and Google Now brought AI out of the lab and into the mainstream, using AI and predictive analytics to answer our questions and plan our days. Siri now has a more sophisticated successor named Viv. Viv is based on self-learning algorithms and its topology is much deeper than Siri’s more linear pathways. Viv is opening up major opportunities for developers by creating an AI platform that can be called upon for a multitude of tasks. Google recently announced a similar path, with its widely acclaimed assistant Google Now becoming Google Assistant.

Machine learning is also used in many back-end processes, such as the scoring required to grant things like bank loans and mortgages. In banking it is used specifically to personalise products, giving banks that adopt it a competitive edge.
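
As an illustration of what such scoring can look like under the hood, here is a minimal sketch using logistic regression – one of the classical algorithms commonly used for credit scoring. The features and data below are invented; a real model would be trained on a large historical loan book with many more variables.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [annual income (k), existing debt (k), years at current address]
X = [
    [25, 18, 1], [48, 5, 6], [62, 12, 10], [30, 22, 2],
    [55, 8, 4], [21, 15, 1], [70, 10, 12], [35, 25, 3],
]
y = [0, 1, 1, 0, 1, 0, 1, 0]  # 1 = loan repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = [[45, 9, 5]]
score = model.predict_proba(applicant)[0][1]  # probability of repayment
print(f"Predicted probability of repayment: {score:.2f}")
```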

Deep learning is being used for more complex tasks, ones where the rules are fuzzier. The era of big data is providing the tools that are driving the use cases for deep learning. We can see applications of deep learning in anything related to pattern recognition, such as facial recognition systems, voice assistants and behavioural analysis for fraud prevention.

Artificial Intelligence is entering a new era with the help of more sophisticated and improved algorithms. AI is the next disruptive technology – many of Gartner’s predictions for technology in 2016 and beyond were based on AI and machine learning. Artificial Intelligence holds the keys to those seemingly unsolvable problems, the ones we thought only human beings could crack. Ultimately, even the writing of this article may one day be done by a machine.

What are your thoughts? Leave a reply below, or contact us by email.

The Brave Little Toaster

We are currently sitting on the precipice of the fourth industrial revolution, which is set to rethink the way we live and work on a global scale. As with the first industrial revolution, we know roughly that the change is being driven by technology, but we lack any concrete knowledge of how great it will be or just how dramatically it will disrupt the world we live in.

The technologies driving the upcoming revolution are artificial intelligence and robotics – technologies which think and act as humans would, and which have been the territory of sci-fi for generations. Just as steam power, electricity and ultimately computers have replaced human labour for mechanical and often mathematical tasks, AI looks set to supplant human thinking and creativity in a way many find unsettling. If the first industrial revolution was too much for the Luddites doing their best to stamp out mechanical progress, the reaction to AI and robotics is going to be even stronger. There are several clear reasons I can see that may drive people away from AI:

  • Fear of redundancy: the first reason replicates that of the first industrial revolution. People don’t want technology to do what they do because, if a machine can do it faster, better and stronger than they can, then what will they do?
  • Fear of the singularity: this one is like our fear of nuclear bombs and fusion. There’s an intrinsic fear people hold, entrenched in stories like Pandora’s Box, that certain things should not be investigated. The singularity is the point at which a computer achieves sentience and, though we’re some way off that (and without an idea of how we’d get there), the perceived intelligence of a machine can still be very unnerving.
  • The uncanny valley: the valley is the point where machines start to become more human-like – appearing very close to, but not exactly like, a human in the way they look or interact. If you’re still wondering what this means, I’d recommend watching these Singing Androids.

Just as we’ve seen throughout history, there is resistance to this revolution. But if history is anything to go by, while it’s likely to be a bumpy road, the rewards will be huge. Although it’s the back office – the nuts and bolts – driving change behind the scenes, it’s the front end, where we interact with it, that’s being re-thought to maximize potential and minimize resistance. What we’re seeing are interfaces designed to appear dumb, or to mask their computational brains, to make us feel more comfortable – and that’s where the title of this blog comes in.

“The Brave Little Toaster” is a book from 1980 or – if you’re lazy like me – a film from about 8 years later, ‘set in a world where household appliances and other electronics come to life, pretending to be lifeless in the presence of humans’. Whilst the film focused on the appliances’ adventure to find their way back to their owner, what I’d like to focus on is how they hide their intelligence whenever humans come into sight – and this is what we’re beginning to see industry follow.

Journalism is a career typically viewed as creative and the product of human thought, but did you know that a fairly significant chunk of the news you read isn’t written by a person at all? For years now, weather reports from the BBC have been written by machines using Natural Language Generation algorithms to turn data into words, which can even be tailored to suit different audiences with simple configuration changes. Earlier this month, The Washington Post announced that some of its writing on the Rio Olympics would be carried out by robots. From a consumer standpoint it’s unlikely we’ll notice that the stories have been written by machines – and if we don’t even notice, it shouldn’t feel creepy to us at all. Internally, rather than being seen as a way to replace reporters, it’s seen as an opportunity to ‘free them up’, just like the industrial revolution before, which freed people from repetitive manual tasks to focus on more thought-based ones.
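
At its simplest, this kind of Natural Language Generation can be little more than structured data flowing through audience-specific templates – which is also how a “simple configuration change” can retune the tone. The sketch below illustrates the principle only; the data and templates are invented, and real newsroom systems are considerably more sophisticated.

```python
# Templates keyed by audience: the same data, rendered in different voices.
TEMPLATES = {
    "formal": ("{city} will see {condition} on {day}, with temperatures "
               "reaching a maximum of {high} degrees Celsius."),
    "chatty": ("Expect {condition} in {city} on {day} - highs of around "
               "{high}C, so plan accordingly!"),
}

def write_forecast(data, audience="formal"):
    """Turn a structured weather record into prose for a given audience."""
    return TEMPLATES[audience].format(**data)

forecast = {"city": "Leeds", "day": "Tuesday",
            "condition": "scattered showers", "high": 17}

print(write_forecast(forecast, "formal"))
print(write_forecast(forecast, "chatty"))
```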

Platforms like IBM’s Watson add a two-way flow to this, with both natural language generation and recognition, so that a person can ask a question just as they would of another human, and the machine understands their phrasing and replies in turn without ever hinting that it’s an AI. When things become too complicated, the AI asks a person to take over, and from there the conversation is controlled by them, with no obvious transition.

A gradual approach to intelligence and automated systems is also being adopted by some businesses. Tesla’s Autopilot can be seen as an example of this, continuing a story which began with ABS (anti-lock braking) over a decade ago and has evolved in recent years into a car which, in some instances, can drive itself. In its current state, Autopilot is a combination of existing technologies like adaptive cruise control, automatic steering on a motorway and collision avoidance, but combining these with the huge amount of data the cars generate has allowed the system to learn routes and handling, carefully navigating tight turns and traffic (albeit with an alert driver ready to take over control at all times!). Having seen this progression, it’s easy to imagine a time not too far from the present day when human drivers are no longer needed, with a system that learns, generates data and continually improves itself just as a human would as they learn to drive – only without the road rage, fatigue or human error.

The future as I see it is massively augmented and improved by artificial intelligence and advanced automation. It will simply be designed so that we don’t see it, with the boundary between human and machine input perceptible only if you know exactly where to look.

What do you think? Leave a reply below, or contact me by email.

Augmentation, AI and automation are just some of the topics researched by Aurora, Sopra Steria’s horizon scanning team.

Products with personality – the Liquid Big Data customer experience

Digital technology is driving new forms of customer engagement that are rapidly eliminating the functional silos between online and offline retail channels. As a result, many high street retailers are already experiencing falling footfall in their physical stores as customers increasingly switch to online competitors for better convenience, choice and prices.

However, these bricks and mortar (B&M) businesses could use Liquid Big Data (cloud-based analytics shared between partners, suppliers and, potentially, competitors for their mutual benefit) to integrate the physical and digital customer experience into a unique, responsive, personal customer journey that online competitors can’t imitate.

So what might the Liquid Big Data customer experience be like for, say, a global retailer selling ready-to-assemble home furniture, appliances and accessories? Here are some ideas…

An eidetic world

Traditionally, high street retailers focus on their brand as a source of differentiation to attract customers to their physical stores. Conversely, digital empowers customers to focus on their specific wants or needs regardless of provider. That’s why their online competitors invest so heavily in user experience design, continually optimising how customers use their channels to browse and buy the products they sell – choice and accessibility as a form of differentiation. To combat this challenge, B&M businesses are increasingly using digital technology (such as touch screens, beacons and virtual reality) to make the in-store experience as empowering and seamless as being online.

However, by choosing simply to replicate the online experience in-store, retailers risk ignoring a source of competitive advantage unique to B&M: a customer’s physical experience of a product and the wider environment.

Using Liquid Big Data, the retail customer experience has no beginning or end, nor is it location-specific – it’s contextual. Powered by a smartphone app provided by collaborating retailers and suppliers, wearable technology (such as a watch or glasses) could capture the people, places and objects an individual customer likes, loathes or loves throughout their entire life. Even if such encounters are fleeting, these moments are captured with photographic, eidetic clarity in the individual’s private cloud. The customer can then choose which of these experiences to share with the retailer via the app to create a unique, personalised shopping experience in-store every time they visit.

This could be the raised heartbeat of seeing Rome’s architecture for the first time – could our example global retailer offer this customer discounts on its in-store Baroque furniture range? Another customer loves the feel of velvet – could an in-store sales team member suggest some appropriate soft furnishings? One customer really liked the coffee table his girlfriend had at university three years ago – could today’s store visit be an opportunity to find something similar for their new home?

Liquid Big Data enables a high street retailer to use the eidetic physical world as a way to effectively personalise its in-store customer experience using digital technology – enhancing its existing brand as a form of differentiation that can’t be imitated by its online competitors.

Products with personality

Harnessing the power of the eidetic world may not be sufficient in the long term to differentiate the in-store customer experience versus online. Although it offers a targeted customer experience, it doesn’t necessarily make a customer’s relationship with the specific products a B&M business sells any closer or more intimate – a key driver at the heart of the competing online experience.

Yet the customer experience of an online retailer is ultimately a passive, limited engagement, typically contingent upon the specific browsing or buying history the customer has with that channel or brand, or other self-selecting activity such as social media engagement.

In response, a high street retailer, with its partners, suppliers or competitors, could use Liquid Big Data to take personalisation to a deeper level – using cloud-based artificial intelligence (AI) to create direct relationships between individual customers and the products they sell.

The idea is to personify a product using AI with a user experience similar to smartphone personal assistants or virtual customer service agents. A customer can have a text or voice conversation with the product to explore its suitability for purchase (including reviews or endorsements) and select any desired tailoring or customisation. A customer may also enable the product AI to access his or her eidetic memories or social media profile to help shape and personalise their relationship. The AI can either be used on request or continually available to provide product updates or after sales support. In addition, products may also talk to each other in the cloud as a form of machine learning to identify potential new product designs or opportunities for complements that better meet their individual customers’ needs.

Such insights are then gathered by the retailer and participating stakeholders to inform the customer experience in-store (and beyond), support product development and address any supply chain issues.

For example, our global retailer has found that people across the world keep asking the same question about the performance of a specific branded fridge freezer it sells. Could there be a quality issue with this particular product that needs investigating? Customers in one region like the way in-store staff sell the product, informed by those customers’ after-sales conversations with the AI – how could this be replicated in other regions where demand is falling? The collective cloud AI has also designed a new energy-efficient cooling feature for the next model – could this potential hot seller be delivered in collaboration with the supplier?

The potential headline benefits of a high street retailer using the Liquid Big Data customer experience include:

  • Enables new forms of personalisation and innovation, deeper than anything previously available in the market, by integrating the real-life and digital customer experience
  • Challenges the seemingly unbreakable competitive advantage of online retail competitors and other digital disruptors (such as platforms and social media channels)
  • Links in-store digital technology directly and explicitly with specific daily customer needs – materially lowering the risk of this investment and increasing its ROI

If you would like more information about how cloud-enabled big data and analytics can benefit your organisation please contact the Sopra Steria Digital Practice.

Virtual robot workers and the impact on my pension plan

Sadly, I’ve reached the age where I am beginning to count how many years it is until I can start to draw my pension. Most days it’s a number far too close as I generally still love my job, although occasionally other days do have me dreaming that it was tomorrow.

My years of experience (!) in designing and running large back offices in the banking sector have seen me live through the centralisation of these back office functions, their subsequent outsourcing, followed by panicked in-sourcing when the wind or accountable exec changed, the drive towards off-shoring and, most recently, the delight of handling an 800-seat partial on-shoring project for a client.

For each one of those, the primary business case rationale was a step-change reduction in the cost of the operating model, with CX being a nice-to-have secondary benefit when the business case needed a more politically acceptable feel to it!

What I couldn’t see was “what next” in the step change evolution of the back office.

That held true until I reluctantly deputised for my boss at a meeting last year and was formally introduced to the world of virtual workforce robots – and an epiphany happened!

At its simplest level, this is a piece of software that emulates the actions of a human in an operational process – once configured and trained, each virtual instance of an FTE is fully scalable, 100% trained, 100% accurate, and available for up to 100% of each 24-hour day.

Depending on your cost base and its location, these virtual wonders can also do the same volume of processing for as little as one ninth of the cost of a human.
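
To make the idea tangible, here is a minimal, illustrative sketch of a “virtual worker”: software stepping through an operational process item by item from a work queue, just as a human clerk would, and referring exceptions to a person. Platforms such as Blue Prism drive real application front-ends rather than in-memory data; the queue, approval rule and processing steps below are invented.

```python
# Invented work queue standing in for cases arriving from a live system.
WORK_QUEUE = [
    {"case_id": "C1001", "account": "12345678", "amount": 250.00},
    {"case_id": "C1002", "account": "87654321", "amount": 12500.00},
]

APPROVAL_LIMIT = 10000.00  # above this, refer to a human - the exception path


def process_case(case):
    """Emulate the steps a clerk would perform for one case."""
    if case["amount"] > APPROVAL_LIMIT:
        return f"{case['case_id']}: referred to a human reviewer"
    # In a real deployment these would be actions against live systems:
    # look up the account, validate the details, post the transaction, log it.
    return f"{case['case_id']}: posted {case['amount']:.2f} to account {case['account']}"


for case in WORK_QUEUE:
    print(process_case(case))
```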

With our partners at Blue Prism, Sopra Steria has developed a Lean Robotic Automation (LRA) proposition, coalescing our deep capability in Lean process management with Blue Prism’s software wizardry.

We are still at a relatively early stage of deployment, both internally and externally, but watch this space – every commentator and analyst in the marketplace expects virtual robots to play a significant part in all our clients’ thinking within 12 months.

As for my pension plans, they’re on hold for a while – I’ve a target audience in the UK alone of around 8,000,000 jobs to try and automate!

What do you think about the role virtual robots will play in operational processes? Leave a reply below or contact me by email.