How telcos are turning data into a true business asset

The good news – we have more data than ever. The bad news – we have more data than ever.

Data, and more importantly, how you harness it to create real business value, is a topic that crops up time and again in my conversations with the telecoms companies I work with. The challenge they face is how to master and govern their escalating data volumes: how do they ensure they run a data-driven business where the insights they act on are based on the right information? How can they be confident that their data is clean, up to date and accurate? And, given the sheer volumes in question, how do they master and control their data going forward, so that investments to establish the right data environment have longevity?

In the digital economy, the power of data is there for all to see. For many companies created over the last 15 years, data is not just in their DNA; it often forms their currency. Whilst we may view Amazon as a digital pioneer in retail and logistics, and Uber as its counterpart in transport, the fact is that both are data driven through every part of their business. They offer a digital customer experience that sets the standard to which other organisations now aspire, and they build their entire business around the use of data.

The challenge within the telco environment is primarily that data – customer data, product data and so on – sits in so many different systems, and often in multiple formats. A customer entity may exist multiple times, and without structured data hierarchies or unique identifiers, it's hard to deduce what the 'truth' is and how it should be mastered. Even if you can get all your data in one place, which is difficult enough, you then have to make it usable and ultimately valuable. And the value in unstructured data is often simply overlooked.
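To make the problem concrete, here is a minimal sketch of why the same customer, captured by three systems with no shared identifier, is hard to resolve to a single 'truth'. The records and the similarity threshold are illustrative assumptions, not a production matching engine:

```python
# A minimal sketch of the duplicate-customer problem described above.
from difflib import SequenceMatcher

# The "same" customer as captured by three different systems
records = [
    {"source": "CRM",     "name": "Acme Telecom Ltd",  "postcode": "RG1 8DF"},
    {"source": "Billing", "name": "ACME Telecom Ltd.", "postcode": "RG18DF"},
    {"source": "Orders",  "name": "Acme Telecoms",     "postcode": "RG1 8DF"},
]

def normalise(record):
    """Crude canonical form: lower-case, strip punctuation and spaces."""
    key = (record["name"] + record["postcode"]).lower()
    return "".join(ch for ch in key if ch.isalnum())

def similarity(a, b):
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

# Without a shared unique identifier, "truth" has to be inferred
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        score = similarity(records[i], records[j])
        verdict = "probable duplicate" if score > 0.85 else "unclear"
        print(records[i]["source"], "vs", records[j]["source"],
              f"-> {score:.2f} ({verdict})")
```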

Getting this data issue solved is not just an ambition but a necessity. It forms the bedrock of digital transformations in customer experience and process automation. In my previous blog I focussed on intelligent automation, but this will only materially occur when there is high confidence in the data. Enabling robotic process automation on questionable data is not where you want to be. Data drives Insight drives Action – but you have to trust the data if you are to make the right decision and get the right outcome.

Seeing the bigger B2B picture

Telcos really have made great strides in transforming their businesses over the last few years, and I've witnessed this especially in B2B. Cloud computing and increased IT agility have underpinned the move towards a more data-driven approach. Better quality data is starting to emerge as a result of system migrations, the associated data cleansing activities, and a continued focus on data stewardship. The next stage in this evolution, though, is to establish a scalable and lasting solution, and this is where a focus on Master Data Management (MDM) is key. Whether a data registry model or a data hub approach has been opted for, there are still issues around achieving a single version of the truth whilst avoiding repeated manual data cleansing processes.

Telcos and other large enterprises capture, store, share, secure and analyse millions of data records day in, day out. Yet in too many cases, projects introducing or upgrading MDM capabilities to solve ongoing data challenges, or to meet new regulatory requirements, have failed and continue to fail. It's time to look to a different way to turn enterprise data assets into competitive advantage and make those projects deliver on their promise.

A new approach

MarkLogic (a Sopra Steria partner) advocates a new, 'heliocentric' approach to getting on top of disconnected data silos. What it refers to as its schema-agnostic platform allows data to be stored in its original form and enriched as necessary. Instead of the big-bang approach required by traditional MDM – which demands all data be mapped before the system is useful – a schema-agnostic approach is more flexible and responsive. Iterative transformation of data after ingest allows businesses to focus on high-value tasks first, test each change for correctness, and respond to business changes quickly.
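MarkLogic's actual APIs aside, the 'ingest as-is, enrich iteratively' idea can be sketched generically. The envelope structure and field names below are my own illustrative assumptions:

```python
# A generic sketch of "ingest first, transform iteratively"
# (not MarkLogic's actual API): each source record is stored untouched
# inside an envelope, and enrichment steps add to the envelope later.
import datetime

def ingest(raw_record, source_system):
    """Store the record in its original form; no upfront global schema."""
    return {
        "headers": {"source": source_system,
                    "ingested": datetime.datetime.utcnow().isoformat()},
        "canonical": {},          # filled in by later, incremental steps
        "original": raw_record,   # never modified
    }

def enrich_phone(envelope):
    """One small, testable transformation; more can be added over time."""
    raw = envelope["original"].get("tel") or envelope["original"].get("phone")
    if raw:
        envelope["canonical"]["phone"] = raw.replace(" ", "").replace("-", "")
    return envelope

doc = ingest({"name": "Acme Telecom", "tel": "0118 496-0000"}, "CRM")
doc = enrich_phone(doc)   # high-value fields first; others can wait
print(doc["canonical"])   # {'phone': '01184960000'}
```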

That's just one aspect of this heliocentric approach. At the same time, semantics – a newer database technology – provides a fresh way of modelling data that focuses on relationships and context, making data easier to understand, search and share. Using semantics, it becomes possible to integrate disparate data faster and more easily, and to build smarter applications with richer analytic capabilities.
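As a flavour of what triple-based modelling looks like, here is a small example using the open-source rdflib library; the namespace and facts are invented purely for illustration:

```python
# Facts stored as subject-predicate-object triples: relationships and
# context are first-class, not locked into table columns.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/telco/")
g = Graph()

g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.acme, EX.subscribesTo, EX.fibre100))
g.add((EX.fibre100, EX.partOf, EX.broadbandPortfolio))

# SPARQL can then follow relationships across formerly siloed data
query = """
SELECT ?product WHERE {
    ?customer a <http://example.org/telco/Customer> ;
              <http://example.org/telco/subscribesTo> ?product .
}
"""
for row in g.query(query):
    print(row.product)  # http://example.org/telco/fibre100
```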

In line with telcos' digital strategies, and the underlying need for increased agility, the opportunity to address the 'data challenge' in an agile way – tackling high-value areas first and accelerating business value – is now a reality. Years of effort, and many millions of investment that struggle to solve the issue, could now be a thing of the past.

Get in touch

To find out more about seizing control of your data with a new approach to MDM, contact Jason Butcher on jason.butcher@soprasteria.com

Read the MarkLogic blog 'A new way to master MDM'.

Why telcos are getting smart with their process automation

If you’re a telecoms operator providing services into consumer and business markets, you will know how important customer experience is to your business. Whether you’re providing connectivity (fixed, mobile, converged), unified communications and IT services into Enterprises, or quad-play offerings to consumers, you will know the importance of an increasingly digital experience and efficient and effective processes.

It is often the case, though, that the experience customers have when working with you falls short of what they have become accustomed to from organisations born in the digital era. These companies have changed the paradigm for customer experience and set a new bar to which organisations now aspire. But they do not have to carry the weight of process complexity, or the challenges created by the myriad systems within a decades-old IT estate.

So against this backdrop, how do you make the complex simple? How do you take what your customer expects to be easy and make it so? How do you achieve significant operational cost savings, especially in back-office functions, whilst the work your customers generate still has to be done? Moreover, how do you deliver all this, stay true to a digital strategy that makes you fit for the future, AND improve today's customer experience in the process?

Turning to process automation

Can you afford the cost? In today’s increasingly commoditised telecoms industry, where every opportunity to reduce operating costs is being pursued, it’s no wonder that telcos have been investing in process automation for some time now. For heavily process-driven businesses, process automation brings significant benefits, from improved efficiency and heightened productivity levels, to reduced operational costs and assurance of compliance – not forgetting greater customer satisfaction.

Now this automation is becoming smarter and more intelligent: self-learning, self-healing process automation that leaves only genuine business exceptions requiring expert human intervention. It has the potential to deliver significant savings and productivity increases. To quantify this, let's say you took 80% of your standard customer transactions and fully automated them using Robotic Process Automation. With Intelligent Automation, your 'robotic process engine' goes further, using Artificial Intelligence to analyse data and apply continuous learning to optimise processes. In other words, it becomes progressively smarter.
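As a rough sketch of the idea (not Sopra Steria's engine, and with invented transaction types), the routing-plus-learning loop might look like this:

```python
# An illustrative sketch of the 80/20 idea: route transactions the robot
# handles confidently straight through, send the rest to a human, and
# feed the human's decision back so exceptions shrink over time.
import random

class IntelligentAutomation:
    def __init__(self):
        # Known transaction types start automated (the "80%")
        self.automatable = {"address_change", "tariff_upgrade", "sim_swap"}

    def handle(self, transaction):
        if transaction["type"] in self.automatable:
            return "processed automatically"
        # Exception: a human handles it, and the outcome is learned
        resolution = self.ask_human(transaction)
        if resolution == "routine after all":
            self.automatable.add(transaction["type"])  # fewer exceptions next time
        return "escalated to human agent"

    def ask_human(self, transaction):
        # Stand-in for expert review; in reality, learned from outcomes
        return random.choice(["routine after all", "genuinely complex"])

engine = IntelligentAutomation()
print(engine.handle({"type": "address_change"}))   # straight-through
print(engine.handle({"type": "billing_dispute"}))  # exception, may be learned
```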

Adding up the gains

What you're left with is the 20% of more complex customer requirements better handled by human agents, with these 'exceptions' reducing as the intelligent automation learns. This not only cuts costs by enabling significant people savings in back-office functions (you get the work done, but with a fraction of the resources), it also has a positive impact on customer experience. How? Expert human decisions remain integral to the end-to-end process for personal contact where required, but customers can quickly and easily carry out standard transactions in a digital and frictionless way. With the option of presenting customers with a choice of interfaces, including voice integration into the same intelligent automation engine, your customer experience improves whilst your operational processing costs fall.

Using intelligent automation to reduce manual processing also increases accuracy by removing errors and data quality issues. The less human interference, the less human error. Customers are now achieving more than 5x processing speed with 99.5% accuracy from customer service processes using intelligent automation – all with a robotic engine that is available 24x7 at vastly reduced operating cost.

The other benefit is systems integration. The intelligent automation platform becomes the means by which different systems are accessed and data is transferred between them in real time, with quality assurance occurring on the fly.
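A toy illustration of that integration role, with invented system names and validation rules, might look like this:

```python
# The automation layer moves a record between two systems and
# validates it in flight; bad data never reaches the target system.

def validate(record):
    """Quality assurance 'on the fly': reject obviously bad data."""
    errors = []
    if not record.get("account_id"):
        errors.append("missing account_id")
    if "@" not in record.get("email", ""):
        errors.append("invalid email")
    return errors

def transfer(record, source="ordering", target="billing"):
    errors = validate(record)
    if errors:
        return {"status": "rejected", "from": source, "errors": errors}
    return {"status": "delivered", "from": source, "to": target}

print(transfer({"account_id": "A-1001", "email": "ops@example.com"}))
print(transfer({"email": "not-an-email"}))
```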

A platform for rapid adoption

At Sopra Steria, our Intelligent Automation Platform (IAP) offers a way for clients to fast-track their adoption of intelligent automation. IAP learns ‘on the job’, fine-tuning people-less processes to continuously improve the level of straight-through processing available, and reducing the requirement for human intervention. For organisations with global entities, such as telcos, it is fully scalable and geographically independent. This means businesses can link up processes from previously disparate parts of their organisation.

Intelligent automation really is a game changer. It enables automated and intelligent decision making, using real-time business insight. Whether the process is to on-board new customers or suppliers, transact for services or enable change management, or deal with in-life operational support, the ability for intelligent automation to underpin a digital customer experience and transform front and back-office processes is incredible. It may seem like the stuff of science fiction, but organisations, especially those heavily dependent on systems and processes such as telcos, will be increasingly looking at intelligent automation as part of their digital strategy.

Get in touch

To find out more about our Intelligent Automation Platform, please contact me via jason.butcher@soprasteria.com

Confronting the M&S challenge – why data is the solution

The impact of digital on the retail sector hit home at the end of May when M&S announced that it was accelerating its digital transformation following plunging profits. That one of the UK’s best-known retail brands had clearly failed to keep up with digital consumer trends may have come as a shock to many. I wasn’t surprised, however. I’ve recently written a paper on this very topic. In ‘Why data is the new retail battleground’ I look at one of the key reasons why traditional retailers are struggling to compete with their digital competitors – data.

For me, the challenge is not that these retailers have failed to invest in online commerce channels. Indeed, many are doing well in this respect. What's holding them back is that they're still using decades-old back-office systems and processes governing Product Lifecycle Management (PLM), Product Information Management (PIM) and Product Master Data Management (MDM). These retailers, especially those with a catalogue heritage, retain a large legacy of systems, processes and cultural norms that are not aligned to the expectations of today's customer. They've typically expanded into digital channels to meet consumer appetite, but they're being hindered by operating models that remain wedded to legacy data management principles.

The Amazon effect

To compete with the likes of Amazon and other digital retailers, traditional companies must transform – and they need to do it fast. Critical to this is the ability to capture the right product information quickly and accurately, then push it out to the relevant operational (Finance, Warehouse, Transport, Order Management, etc.) and commercial (Merchandising, Marketing, Pricing, etc.) systems. However, these processes are typically not well managed, let alone automated, in many traditional retail organisations today. Everything from data input, data cleansing and data matching, through data enrichment and data profiling, to data syndication and data analytics is still dependent on disconnected and largely manual operations.
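To make the target state concrete, here is a compressed, hypothetical sketch of that capture-cleanse-enrich-syndicate flow; the field names, VAT rate and target systems are illustrative only:

```python
# One master product record, cleansed and enriched once, then pushed
# to every downstream operational and commercial system.

OPERATIONAL = ["finance", "warehouse", "transport", "order_management"]
COMMERCIAL = ["merchandising", "marketing", "pricing"]

def cleanse(product):
    """Data input + cleansing: normalise the captured attributes."""
    product["name"] = product["name"].strip().title()
    product["ean"] = product["ean"].replace(" ", "")
    return product

def enrich(product):
    """Data enrichment: derive attributes downstream systems need
    (20% VAT rate assumed purely for illustration)."""
    product["price_incl_vat"] = round(product["price"] * 1.2, 2)
    return product

def syndicate(product):
    """Data syndication: one master record pushed to every consumer."""
    return {system: product for system in OPERATIONAL + COMMERCIAL}

item = {"name": "  wool jumper ", "ean": "5012345 678900", "price": 49.0}
feeds = syndicate(enrich(cleanse(item)))
print(sorted(feeds))  # every system works from the same master record
```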

It's clearly time to automate those areas of data management that tie up valuable human resources in manual, repetitive tasks. Trying to do what they do now without automation will not work for traditional retailers. In my paper, I describe a set of automation best practices that all retailers should be considering in this respect.

A strategic choice

I also point out that this isn't just an IT challenge. It is a strategic choice to build a single source of data truth on which product decisions can be made. Underpinning it is an understanding that to remain competitive, with responsive and agile operations every day, organisations need to bring about both technological and cultural change.

Like many traditional retailers, M&S clearly has a number of digital challenges to confront, such as those described above. After announcing its 62% drop in pre-tax profits, the retailer declared it would be modernising its business through ‘accelerated change’ to cater for an increasingly online customer base. I hope it puts data at the heart of this transformation.

Read my paper for more on how to move to a new data-led operating model in today’s fast-moving retail environment.

The Promise of Platforms – Joined up outcomes and better value

One of the often-quoted benefits of digital transformation is the improvement in the ways government departments interact with citizens and businesses. Departments aim to use the same systems and shared data to avoid time-consuming and repetitive tasks. But the reality often falls short of expectations. So, in this blog I take a look at why cross-cutting activity is rare and how digital platforms might help.

Why are public services so siloed?

The current departmental structure brings together and manages most areas of government business through a top down, vertical management structure. This approach is highly effective in delivering many of the government’s key priorities. It provides a single, clear line of accountability and keeps tight control over resources.

However, vertical structures also have many disadvantages:

First, issues or problems which straddle departmental boundaries are neglected. Budgets tend to be allocated on a departmental basis rather than to policies that cross boundaries. And mechanisms for reconciling conflicting priorities are weak.

The result is that policy makers can take too narrow a view of the issues. They fail to look at problems from the perspective of the user. And they focus on what is easiest for government to supply, not what makes sense to the service user.

Second, departments also fail to recognise that local authorities have separate lines of accountability to local voters and may not share their priorities. So, departments tend to be overly prescriptive, specifying the means of delivery as well as the ends.

And third, there are real obstacles to effective cross-cutting working on the ground. It involves complex relationships and lines of accountability. Costs tend to fall on one budget while the benefits accrue to another. If appraisal systems are incapable of identifying and rewarding a contribution to a successful cross-cutting project, the risks are one-way.

So how do we join up government?

My experience is that cross-cutting interventions work best when government makes its priorities clear and when champions at Ministerial and Permanent Secretary level (and/or chief executive and senior management team level in local government) have a lasting effect on behaviour.

With these supportive conditions in place, the adoption of digital technologies will enable cross-cutting work. Emphasis in UK government is now quite rightly focussed on how digital can support business transformation through, for example, the creation of shared components (such as Verify, Pay and Notify) and common workplace tools. The common link is, of course, information technology: co-ordination involving multiple providers depends on compatible IT systems and common data collection and architectures.
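GOV.UK Notify illustrates the point: a department consumes the shared component through a small client rather than building its own messaging stack. A minimal example using the official notifications-python-client, with a placeholder API key and template ID:

```python
# Sending a message via the shared Notify component; the key and
# template ID below are placeholders, not real credentials.
from notifications_python_client.notifications import NotificationsAPIClient

client = NotificationsAPIClient("your-notify-api-key")  # placeholder

response = client.send_email_notification(
    email_address="citizen@example.com",
    template_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # set up in Notify
    personalisation={"reference": "APP-12345"},
)
print(response["id"])  # Notify returns a notification id for tracking
```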

But perhaps just as significantly, digital approaches can promote dialogue with citizens and service users.

There are two aspects to this dialogue. First, government needs to provide digital channels for information and views to reach it, channels that are not constrained by departmental silos – people and organisations should not have to tailor their views to fit Whitehall's structure. People often want to be involved in shaping services, particularly at a local level, not just choosing between them. Open-source methods that involve users in designing services have become commonplace in business and have always been common in civil society.

And second, government needs to shift the quality of the relationship between citizens and the state, so services are shaped around the individual's needs rather than being over-standardised. The commitment to make services more personal can mean little more than having someone – a teacher or a doctor – to talk to face to face. But it can mean a different curriculum and programme for every pupil, or a different pattern or modular options of care for every patient.

Where are we seeing the benefits of joined-up government?

The harbingers of the future can be found where governments face the most intense pressures. This includes the increasing incidence of chronic conditions, as an ageing population and changes in societal behaviour contribute to a steady increase in common and costly long-term health problems. Mental illness is equally significant, accounting for over 30% of all GP consultations and 50% of follow-up consultations.

As a result, in the UK we now spend over £24 billion on disability and incapacity benefits for over 3.5 million working age people.

Chronic and other complex conditions are not easily administered or treated through a traditional clinical lens or by prescription alone. Much of the care is provided by families and friends because it is too expensive to provide through formal structures and highly paid doctors. And much of the most important knowledge about how to handle these long-term conditions resides with other patients, not just doctors.

So, part of the answer lies in giving people control over how money is spent and how support is structured to meet their needs. This means giving service users direct power over money and new structures of advice, often through simple but powerful online platforms.

At their best, these approaches bridge the bottom-up and the top-down, paying attention to the world of daily experience rather than seeing people as abstract categories. Networks and platforms can help the state track behaviours, highlighting 'what works', and make it easier for people to band together and take control of their care.

Blockchain in a post-GDPR World

Blockchain’s explosive growth has had businesses all over the globe scrambling to invest. But with GDPR fast approaching, how will an unchangeable database cope with the right to be forgotten?

How do you inflate your share price by 400% in a day? The answer is simple: add the word blockchain to your company's name. As absurd as it seems, this is exactly what happened last October to venture capital firm On-line Plc, following its decision to change its name to On-line Blockchain Plc.


This shocking story is an accurate reflection of the current level of hype surrounding this new technology, with companies left, right and centre moving to adopt blockchain. A reported 57% of large UK corporations now have immediate plans to implement blockchain in their infrastructure by the end of 2018, while demand for blockchain specialists has nearly tripled in the last year alone. But while organisations have been avidly investing in this new phenomenon, they have also (rather more reluctantly) been preparing for an equally important, but slightly less exciting, development in the tech world: GDPR.

Much of the hype surrounding blockchain stems from the fact that it is an immutable method of storing information – meaning that once information is loaded onto the blockchain, it cannot be edited or deleted. However, come May 2018, this unique feature may bring more pain than joy to businesses, as one of the most significant clauses in GDPR comes into effect: the right to be forgotten. This stipulates that individuals can insist that organisations erase any personal information held on them. Apply this clause to blockchain, and the result is a non-compliant system and a potential £17 million fine. So what options do businesses have?

Edit the uneditable

One answer is to change blockchain itself. Accenture, for example, has recently patented an 'editable' version of blockchain, which can be altered under certain circumstances by pre-ordained parties – a modification that could be moulded into GDPR compliance and which, at first sight, looks like an appealingly easy solution.

However, there are some problems with this approach. As critics have pointed out, one of blockchain’s key (and unique) values is its immutability. It is this feature, making it immune to certain kinds of malicious interference such as misappropriation of assets or fraudulent financial reporting, that gives it so much appeal. By allowing even the possibility of interference, its trustworthiness as an absolute source of information is diminished. For organisations such as banks and other financial institutions, who are anxious to utilise the power of blockchain to build trust and protect against this kind of interference, an “editable blockchain” is unlikely to be a satisfactory solution.

Legal loopholes

For those who are either unwilling or unable to adopt an editable model, legal solutions may suffice. GDPR itself offers no explanation of what 'erasure' actually constitutes, and while this might seem obvious at first sight, it could be an opportunity. In the past, for example, some authorities have ruled that encryption can legally be equivalent to deletion – that is, if data is irreversibly encrypted, it is considered erased. It is possible to apply mechanisms like this to data stored on blockchain, by encrypting pieces of data and then 'losing' the decryption key – effectively meaning that the information can never be read.
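A minimal sketch of this 'crypto-shredding' idea, using the open-source cryptography library; whether it satisfies GDPR is precisely the open legal question:

```python
# Encrypt personal data before it touches the chain; "erase" it later
# by destroying the key rather than the immutable block.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # held OFF the chain
ciphertext = Fernet(key).encrypt(b"Jane Doe, 12 High St, Reading")

# Only the ciphertext is written to the immutable ledger
block_payload = ciphertext

# "Right to be forgotten": destroy the key, not the block
del key  # with the key gone, the on-chain data is unreadable...

# ...but the ciphertext still exists; a future break of the cipher
# would expose it, which is the risk flagged below.
print(block_payload[:16], b"...")
```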

However, this is a risky solution for organisations. As the data is not actually deleted, but simply rendered inaccessible, it may be vulnerable to future technological developments able to break its encryption (quantum computing, for example). With this in mind, it is likely that European authorities will insist on a strictly all-or-nothing interpretation of data deletion – meaning that relying on mechanisms such as encryption to achieve compliance would be dangerous.

Going off-grid

If neither of these options suffices, businesses can take a more extreme route: remove personal data from the blockchain completely. This does not necessarily mean disposing of blockchain too. One possible workaround, described in more depth here, reduces blockchain to a simple 'access control medium': instead of storing personal information on the chain, links to external databases containing that information are placed in blocks. As the rules of blockchain no longer apply in these external databases, any information stored this way can be freely deleted or changed at will. The benefits of this approach are clear: it allows for full, uncontested erasure of data while still retaining some of the functionality of blockchain.
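In miniature, and leaving aside the access control and encryption a real implementation would add, the pattern looks like this:

```python
# A toy version of the off-chain pattern: the chain stores only a hash
# pointer; the personal data lives in an ordinary, deletable store.
import hashlib

off_chain_db = {}  # mutable store under the organisation's control

def store_personal_data(record: bytes):
    digest = hashlib.sha256(record).hexdigest()
    off_chain_db[digest] = record
    return digest  # only this pointer goes into the block

def forget(digest):
    """Right to be forgotten: delete off-chain; the on-chain hash
    remains but no longer resolves to any personal data."""
    off_chain_db.pop(digest, None)

pointer = store_personal_data(b"Jane Doe, 12 High St, Reading")
forget(pointer)
print(pointer in off_chain_db)  # False: data gone, chain untouched
```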

However, as with the other options, this is still not a wholly satisfying solution. It creates an inefficient, complex process and reduces transparency over who is accessing personal data and how – paradoxically creating even more hurdles to GDPR compliance, which also requires that organisations have accessible and transparent processes for data management. Additionally, removing data from the immutable environment of blockchain gives rise to the same problems faced by Accenture's 'editable blockchain': external databases can be altered or subjected to fraudulent interference, and so the trustworthiness of the system is undermined.

An uncertain future

So where does this leave organisations that use blockchain? The answer, at this stage, is frustratingly unclear. Every solution detailed above involves either sacrificing the functionality (and benefits) of blockchain or risking the security of personal data. The latter is hardly an attractive option, and if organisations must transform blockchain beyond recognition to become GDPR-compliant, it raises the question: what is the point of using blockchain at all? Yet it is hardly practical for authorities to demand that organisations simply stop using blockchain, given its soaring popularity, proven benefits and widespread adoption.

In ethical terms, blockchain's immutability is a paradox: on the one hand, it helps to prevent corruption, fraud and theft; on the other, it removes the individual's rights over his or her personal information. This paradox makes it a complicated system to legislate for effectively, and the current tensions are symptomatic of lawmakers' struggles to keep up with the fast-paced and ever-changing world of technology. In this case, it may not just be businesses that need to adapt; legislators too may need to take an iterative and flexible approach to GDPR.

Come May 2018, reconciling GDPR and blockchain will likely be just one challenge among many for both corporations and legislators. Yet as blockchain becomes ever more tightly wound into the infrastructure of major organisations around the globe, it is not a challenge that either can afford to ignore.

The Apple of my AI – GDPR for Good

Artwork by @aga_banach

Our common perception of machine learning and AI is that it needs an immense amount of data to work. That data is collected and annotated by humans or IoT-type sensors to ensure the AI has access to all the information it needs to make the correct decisions. With new regulations like GDPR protecting stored personal data, will AI be put at a disadvantage by the restrictions on IoT and data collection? Maybe not!

What is GDPR and why does it matter?

For those outside the European Union, GDPR (the General Data Protection Regulation) is designed to 'protect and empower all EU citizens' data privacy'. Intending to return control of personal data to individual citizens, it grants rights such as requesting all the data a business holds on you, a right to explanation for decisions made, and even a right to be forgotten. Great for starting a new life in Mexico, but will this limit how much an AI can learn?

What’s the solution?

A new type of black-box learning means we may not need human data at all. Falling into the category of 'deep reinforcement learning', we are now able to create systems which achieve superhuman performance across a fairly broad spread of domains, with AIs generating all their training data themselves from simulated worlds. The poster-boy of this type of machine learning is AlphaZero and its derivatives from Google's DeepMind. In 2015 we saw the release of AlphaGo, which demonstrated a machine's ability to surpass a human in a 5-0 victory against former European Go champion Fan Hui. AlphaGo reached this level by using human-generated data: recordings of professional and amateur games of Go. The evolution, however, was to remove the human data. AlphaGo Zero beat its predecessor AlphaGo Lee 100:0 using a twelfth of the processing power and a fraction of the training time, without any human training data; instead, it generated its own data by playing games against itself. While GDPR could force a drought of machine learning data in the EU, simulated data from this kind of deep reinforcement learning could re-open the flood gates.
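To show how little human input this style of learning needs, here is a heavily simplified self-play sketch: tabular Q-learning on the game of Nim rather than anything like AlphaGo Zero's neural networks and tree search, but with the same defining property that every training example is generated by the agent playing itself:

```python
# Self-play with zero human-generated data: the agent's only teacher
# is itself. (Nim: take 1-3 stones; whoever takes the last stone wins.)
import random
from collections import defaultdict

Q = defaultdict(float)     # (pile, move) -> value estimate
ACTIONS = [1, 2, 3]
ALPHA, EPSILON = 0.5, 0.1

def choose(pile, greedy=False):
    legal = [a for a in ACTIONS if a <= pile]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                  # explore
    return max(legal, key=lambda a: Q[(pile, a)])    # exploit

for episode in range(20000):
    pile, history = 10, []                 # history of (state, action)
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # The player who made the last move won; alternate +1/-1 backwards
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                   # opponent's perspective

# With a pile of 10, the winning move is to take 2 (leave a multiple of 4)
print(choose(10, greedy=True))  # typically prints 2 after training
```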

Playing Go is a pretty limited domain (though AlphaZero can play other board games!) and is defined by very clear rules. We want machine learning which can cover a broad spread of tasks, often in far more dynamic environments. Enter Google… again… or rather Alphabet, Google's parent company, and its self-driving car spinoff Waymo. Level 4 and 5 autonomous driving presents a much more challenging goal for AI: in real time, the AI needs to categorise huge numbers of objects, predict their future paths and translate that into the right control inputs, all to get the car and its passengers where they need to be on time and in one piece. This level of autonomy is being pursued by both Waymo and Tesla, but seemingly Tesla gets the majority of the press. This has a lot to do with Tesla's physical presence.

Tesla has around 150,000 equipped cars on the road and boasted over 100 million miles driven by AutoPilot by 2016. This doesn't even include data gathered while the feature is not active, or more recent data (which I am struggling to find – if you know, please comment below!). Meanwhile, Waymo has covered a comparatively tiny 3.5 million real-world miles, perhaps explaining its smaller public exposure. Google thinks it has the answer: again, deep reinforcement learning, meaning that its vehicles have driven billions of miles in their own simulated worlds without using any human-generated data. Only time will tell whether we can build a self-driving car that is safe and confident on our roads alongside human drivers without human data and guidance in the training process, but the early signs for deep reinforcement learning look promising. If we can do this for driving, what's to say it can't work in many other areas?

Beyond being a tick in the GDPR box, there are other benefits to this type of learning. DeepMind describes human data as 'too expensive, unreliable or simply unavailable'; the second of these points (with a little artistic licence) is critical. Human data will always carry some level of bias, making it unreliable. On a very obvious level, Oakland Police Department's 'PredPol', a system designed to predict areas of crime so police could be dispatched, was trained on historical, biased crime data. The result was a system which dispatched police to the same historical hotspots. It's entirely possible that just as much crime was going on in other areas, but by focusing its attention on the same old areas and turning a blind eye to others, the machine struggled to break human bias. Even when we think we're not working from an unhealthy bias, our lives are surrounded by unconscious bias and assumptions. I make an assumption each time I sit down on this chair that it will support my weight. I no doubt have a bias towards people similar to me, believing that we could work towards a common goal. Think you hold no bias? Try this implicit association test from Harvard. AlphaGo learned according to human bias, whereas AlphaGo Zero had none and performed better. Looking at the moves the machine made, we tend to see creativity, a seemingly human attribute, when in reality its thought processes may have been entirely unlike human experience. By removing human data, and therefore our bias, machine learning could find solutions in almost any domain that we might never have thought of, but which in hindsight appear a stroke of creative brilliance.

Personally, I still don't think this type of deep reinforcement learning is perfect – or at least the environment it is implemented in isn't. Though the learning itself may be free from bias, the rules and the playing board – be that a physical game board, a road layout, a factory, an energy grid or anything else we ask the AI to work on – are still designed by a human, meaning they will include some human bias. With Waymo, the highway code and road layouts are still built by humans. We could possibly add another layer of abstraction, allowing the AI to develop new road rules or games for us, but then perhaps they would lose their relevance to us lowly humans who intend to make some use of the AI.

For AI, perhaps we’re beginning to see GDPR as an Apple in the market, throwing out the old CD drive, USB-A ports or even (and it still stings a little) headphone jacks, initially with consumer uproar. GDPR pushing us towards black box learning might feel like we’re losing the headphone jack a few generations before the market is ready, but perhaps it’s just this kind of thing that creates a market leader.

Data, consumers and trust: the quiet crisis

Building trust-based relationships with clients has always been important for successful business practice.  As the global data pool grows and consumer fears over personal privacy increase, it may become make-or-break.  Digilab’s Olivia Green investigates.

In the last two years, we have created 90% of the total data in the world today. In a day, we spit out an average of 2.5 quintillion bytes – and counting. From smart watches that monitor our heartrates to chat-bot therapists who manage our anxiety, nearly every aspect of our lives can be digitised. This undoubtedly provides us with immense benefits – increased speed, convenience and personalisation to name a few. Yet it also gives rise to a challenge: how do we protect our right to privacy?

Anxieties over internet privacy are nothing new. As the data pool continues to expand however, they have been picking up steam. Hacks and other tech-related scare stories are now a daily occurrence on our newsfeeds – and they are increasingly hitting closer to home. Back in May, the credit card details and passwords of nearly 700,000 UK citizens were compromised when Equifax fell victim to a hack. Even our private conversations don’t feel safe, as it emerged last month that Google’s new Home Mini had been accidentally recording its users without their knowledge.

Corporations themselves are also a target of consumer fear, and they are beginning to pay the price. According to recent research, US organisations alone lost $756 billion last year to lack of trust and poor personalisation, as consumers sought out alternatives. UK consumers share similar anxieties; nearly 80% cite lack of confidence in the way companies handle their information as an extreme source of concern, while just under half now view data sharing as a 'necessary evil' – something they will do reluctantly if they deem the reward high enough.

These findings aren’t an anomaly. Statistics gathered last year by the ICO show that only 22% of UK consumers trust internet brands with their personal data; more shockingly, they highlight that while over 50% of consumers trust High Street banks, only 36% have confidence in Governmental bodies to manage their data properly.

The price of complacency

So far, companies have largely managed to side-step the more serious consequences of consumer mistrust and data mismanagement. Not all have been so lucky, though. The notorious Ashley Madison hack in 2015 is a prime example of just how damaging loss of trust can be. The website, which provided an online platform enabling married people to conduct affairs, fell victim to hackers who published a digital 'name and shame' list of its clients. For a business whose model was so dependent on trust and confidentiality, this proved disastrous. Despite the organisation's insistent claims otherwise, analysis by SimilarWeb revealed that monthly site traffic had plunged by nearly 140 million a mere four months after the attack.

For some, the fallout is less dramatic – but still worrying. Take Uber's recent breach, which dragged its already battered corporate reputation through the mud once again after it was revealed that the ride-sharing company had tried to cover up a 2016 data hack affecting 57 million customers. The immediate furore has raised pressing problems for the firm, including the threat of prosecution and impending investigations in multiple countries. Even more problematic for Uber are the wider-ranging consequences of the cover-up. In combination with the potential loss of the London market and recent workplace scandals, this disastrous year has materialised into real financial impact: at the close of this quarter, Uber logged record losses of $1.5 billion, a $400 million increase on the previous quarter and a far cry from its triumphant predictions of growth at the beginning of 2017. In a particularly telling sign, Uber's investors also appear to be hedging their bets. Fidelity, which already has a significant stake in Uber, announced last week that it had participated in a funding round for Uber's closest competitor, Lyft, pushing the latter's valuation up to $11.5 billion.

Unlike Ashley Madison, Uber's problems arose not so much from the hack itself as from the attempt to cover it up. But despite the evident lesson here, this is a scenario we could see again: over two-thirds of UK boards currently have no training to deal with a cyber incident, and estimates suggest that only 20% of companies have appropriate response plans in place. For Uber, the ultimate consequences of its misconduct remain to be seen; for the moment, it is protected by a largely unique offering, which gives consumers limited alternatives. Should the same happen to a business without Uber's dominance, it could prove fatal.

Monetising trust

How can organisations move forward from here? In the current climate, it is unlikely that consumers will ever wholly withhold their data, as they place real value on the services that sharing it provides – as shown by the fact that risky 'data trade-offs' like Uber manage to survive. However, as awareness of the risks and the stakes of losing data to a hacker increases, consumers are becoming increasingly selective about who they share their information with. As more and more information shifts from physical to digital, businesses must be prepared for change. We may be heading towards a future where access to data is no longer a handout but a privilege, hard won by effective risk management and transparent, secure systems that hand sovereignty back to the customer.

Yet it is this data that may ultimately decide who wins and who loses in our future digital economy. Consumer data is the lifeblood of capabilities like AI and predictive analytics, and is essential for providing the personalised services, such as smart home devices, that are becoming increasingly popular. Businesses that are cut off from this valuable information source will inevitably find themselves undercut by better-placed competitors.

To protect themselves against this eventuality, businesses in crowded markets should make effective data strategies a top priority. Companies like Uber may be shielded for the time being; nevertheless, even they can't afford to breathe easy. As the surging interest in Lyft demonstrates, rivals are never far behind.

Look out for my next blog about how GDPR can help your business build a future-proof data strategy.

What do you think? Leave a response below or contact me by email.