The Apple of my AI – GDPR for Good


Our common perception of machine learning and AI is that they need an immense amount of data to work. That data is collected and annotated by humans or IoT-type sensors so that the AI has access to all the information it needs to make the right decisions. With new regulations like GDPR protecting stored personal data, will AI be put at a disadvantage by the restrictions on IoT and data collection? Maybe not!

What is GDPR and why does it matter?

For those outside the European Union, the GDPR (General Data Protection Regulation) is designed to “protect and empower all EU citizens data privacy”. Intended to return control of personal data to individual citizens, it grants rights such as requesting all the data a business holds on you, a right to an explanation of decisions made about you, and even a right to be forgotten. Great for starting a new life in Mexico, but will limiting information like this restrict how much an AI can learn?

What’s the solution?

A new type of black-box learning means we may not need human data at all. Falling into the category of ‘deep reinforcement learning’, systems of this kind now achieve superhuman performance across a fairly broad spread of domains by generating all their training data themselves from simulated worlds. The poster boy of this type of machine learning is AlphaZero and its relatives from Google’s DeepMind. In 2015 we saw the release of AlphaGo, which demonstrated that a machine could become better than a human with a 5–0 victory over the (former) European Go champion Fan Hui. AlphaGo reached this level using human-generated data: records of professional and amateur games of Go. The evolution of this, however, was to remove the human data entirely with AlphaGo Zero, which beat its predecessor AlphaGo Lee 100–0 using a twelfth of the processing power over a fraction of the time, and without any human training data. Instead, AlphaGo Zero generated its own data by playing games against itself. While GDPR could force a drought of machine learning data in the EU, simulated data from this kind of deep reinforcement learning could re-open the flood gates.
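To make the self-play idea concrete, here is a minimal sketch, assuming tabular Q-learning on the toy game of Nim (one pile, take one to three stones, whoever takes the last stone wins). It illustrates an agent generating all of its own training data by playing itself; it is emphatically not how AlphaGo Zero is built, which combines deep neural networks with Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Self-play with zero human data: tabular Q-learning on single-pile Nim.
# Both sides share one Q-table, so every game the agent plays against
# itself becomes training data.
ALPHA, EPSILON, GAMES = 0.5, 0.2, 50_000
Q = defaultdict(float)  # Q[(stones_left, take)] -> value for the player to move

def legal(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def best(stones):
    return max(legal(stones), key=lambda t: Q[(stones, t)])

for _ in range(GAMES):
    stones = random.randint(1, 20)
    while stones:
        take = random.choice(legal(stones)) if random.random() < EPSILON else best(stones)
        nxt = stones - take
        # Zero-sum game with alternating turns: emptying the pile wins (+1);
        # otherwise our value is minus the opponent's best value from nxt.
        target = 1.0 if nxt == 0 else -max(Q[(nxt, t)] for t in legal(nxt))
        Q[(stones, take)] += ALPHA * (target - Q[(stones, take)])
        stones = nxt

# With no human input, the greedy policy rediscovers the classic strategy:
# always leave the opponent a multiple of four stones where possible.
print([best(s) for s in range(1, 10)])
```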

Playing Go is a pretty limited domain (though AlphaZero can play other board games!) and is defined by very clear rules. We want machine learning that can cover a broad spread of tasks, often in far more dynamic environments. Enter Google… again… or rather Alphabet, Google’s parent company, and its self-driving car spinoff Waymo. Level 4 and 5 autonomous driving presents a much more challenging goal for AI: in real time, the system must categorise huge numbers of objects, predict their future paths and translate all of that into the right control inputs, all to get the car and its passengers where they need to be on time and in one piece. This level of autonomy is being pursued by both Waymo and Tesla, yet Tesla seems to get the majority of the press. That has a lot to do with Tesla’s physical presence.

Tesla had around 150,000 suitably equipped cars on the road and boasted over 100 million miles driven on Autopilot by 2016. That doesn’t even include data gathered while the feature is inactive, or more recent figures (which I am struggling to find — if you know, please comment below!). Meanwhile, Waymo has covered a comparatively tiny 3.5 million real-world miles, perhaps explaining its smaller public exposure. Waymo thinks it has the answer to this, again using deep reinforcement learning: its vehicles have driven billions of miles in their own simulated worlds, without using any human-generated data. Only time will tell whether we can build a self-driving car that is safe and confident on our roads alongside human drivers without human data and guidance in the training process, but the early signs for deep reinforcement learning look promising. If we can do this for driving, what’s to say it can’t work in many other areas?

Beyond ticking the GDPR box, there are other benefits to this type of learning. DeepMind describes human data as ‘too expensive, unreliable or simply unavailable’, and the second of these points (with a little artistic licence) is critical: human data will always carry some level of bias, making it unreliable. On a very obvious level, PredPol, a system designed to predict crime hotspots so that police can be dispatched there, was trained on Oakland’s historical (and biased) crime data. The result was a system that kept dispatching police to those same historical hotspots. It’s entirely possible that just as much crime was going on in other areas, but by focusing its attention on the same old areas and turning a blind eye to the rest, the machine struggled to break human bias. Even when we think we’re being objective, our lives are surrounded by unconscious bias and assumptions. I make an assumption each time I sit down on this chair that it will support my weight; I no doubt have a bias towards people similar to me, believing that we could work towards a common goal. Think you hold no bias? Try this implicit association test from Harvard. AlphaGo learned from human data and inherited human bias; AlphaGo Zero had none and performed better. Looking at the moves these machines make, we tend to see creativity, a seemingly human attribute, when in reality their ‘thought processes’ may be entirely unlike human experience. By removing human data, and therefore our bias, machine learning could find solutions in almost any domain that we would never have thought of, but which in hindsight look like strokes of creative brilliance.

Personally, I still don’t think this type of deep reinforcement learning is perfect, or at least the environments it is deployed in aren’t. Though the learning itself may be free from bias, the rules and the ‘board’ (be that a physical game board, a road layout, a factory, an energy grid or anything else we ask the AI to work on) are still designed by humans, so some human bias will creep in. With Waymo, the highway code and road layouts are still built by humans. We could add another layer of abstraction and let the AI develop new road rules or games for us, but then perhaps they would lose their relevance to us lowly humans who intend to make some use of the AI.

For AI, perhaps we’re beginning to see GDPR play the role Apple has played in the hardware market: throwing out the old CD drive, USB-A ports or even (and it still stings a little) headphone jacks, initially to consumer uproar. GDPR pushing us towards black-box learning might feel like losing the headphone jack a few generations before the market is ready, but perhaps it’s just this kind of thing that creates a market leader.

Data, consumers and trust: the quiet crisis

Building trust-based relationships with clients has always been important for successful business practice. As the global data pool grows and consumer fears over personal privacy increase, it may become make-or-break. DigiLab’s Olivia Green investigates.

In the last two years, we have created 90% of the total data in the world today. In a day, we spit out an average of 2.5 quintillion bytes – and counting. From smart watches that monitor our heart rates to chat-bot therapists who manage our anxiety, nearly every aspect of our lives can be digitised. This undoubtedly provides us with immense benefits – increased speed, convenience and personalisation to name a few. Yet it also gives rise to a challenge: how do we protect our right to privacy?

Anxieties over internet privacy are nothing new; as the data pool continues to expand, however, they have been picking up steam. Hacks and other tech-related scare stories are now a daily occurrence on our newsfeeds – and they are increasingly hitting closer to home. Back in May, the credit card details and passwords of nearly 700,000 UK citizens were compromised when Equifax fell victim to a hack. Even our private conversations don’t feel safe: it emerged last month that Google’s new Home Mini had been accidentally recording its users without their knowledge.

Corporations themselves are also a target of consumer fear, and they are beginning to pay the price. According to recent research, US organisations alone lost $756 billion last year to lack of trust and poor personalisation, as consumers sought out alternatives. UK consumers share similar anxieties; nearly 80% cite lack of confidence in the way companies handle their information as an extreme source of concern, while just under half now view data sharing as a “necessary evil” – something they will do reluctantly if they deem the reward high enough.

These findings aren’t an anomaly. Statistics gathered last year by the ICO show that only 22% of UK consumers trust internet brands with their personal data; more shockingly, they highlight that while over 50% of consumers trust High Street banks, only 36% have confidence in Governmental bodies to manage their data properly.

The price of complacency

So far, companies have largely managed to side-step the more serious consequences of consumer mistrust and data mismanagement. Not all have been so lucky, though. The notorious Ashley Madison hack in 2015 is a prime example of just how damaging loss of trust can be. The website, an online platform enabling married people to conduct affairs, fell victim to hackers who published a digital “name and shame” list of its clients. For a business whose model was so dependent on trust and confidentiality, this proved disastrous. Despite the organisation’s insistent claims otherwise, analysis by SimilarWeb revealed that monthly site traffic had plunged, dropping by nearly 140 million within four months of the attack.

For some, the fallout is less dramatic – but still worrying. Take Uber’s recent breach, which dragged its already battered corporate reputation through the mud once again when it was revealed that the ride-sharing company had tried to cover up a 2016 data hack affecting 57 million customers. The immediate furore has raised some immediate problems for the firm, including the threat of prosecution and impending investigations in multiple countries worldwide. Even more problematic for Uber are the wider-ranging consequences of the cover-up. In combination with the potential loss of the London market and recent workplace scandals, this disastrous year has materialised into real financial impact: at the close of this quarter, Uber logged record losses of $1.5 billion, a $400 million increase on the previous quarter and a far cry from its triumphant predictions of growth at the beginning of 2017. In a particularly telling sign, Uber’s investors also appear to be hedging their bets. Fidelity, which already holds a significant stake in Uber, announced last week that it had participated in a funding round for Uber’s closest competitor, Lyft, pushing the latter’s valuation up to $11.5 billion.

Unlike Ashley Madison, Uber’s problems arose not so much from the hack itself as from the attempt to cover it up. Despite the evident lesson here, this is a scenario we could well see again: over two-thirds of UK boards currently have no training for dealing with a cyber-incident, and estimates suggest that only 20% of companies have appropriate response plans in place. For Uber, the ultimate consequences of its misconduct remain to be seen; for the moment, it is protected by a largely unique offering that gives consumers limited alternatives. Should the same happen to a business without Uber’s dominance, it could prove fatal.

Monetising trust

How can organisations move forward from here? In the current climate, it is unlikely that consumers will ever wholly withhold their data: they value the services that sharing it provides, as shown by the survival of risky “data trade-offs” like Uber. However, as awareness grows of the risks and the stakes of losing data to a hacker, consumers are becoming increasingly selective about who they share their information with. As more and more information shifts from physical to digital, businesses must be prepared for change. We may be heading towards a future where access to data is no longer a handout but a privilege, hard won by effective risk management and transparent, secure systems that hand sovereignty back to the customer.

Yet it is this data that may ultimately decide who wins and who loses in our future digital economy. Consumer data is the lifeblood of capabilities like AI and predictive analytics, and is essential for providing the personalised services, such as smart home devices, that are becoming increasingly popular. Businesses cut off from this valuable information source will inevitably find themselves undercut by better-placed competitors.

To protect themselves against this eventuality, businesses in crowded markets should make effective data strategies an utmost priority. Companies like Uber may be shielded for the time being; nevertheless, even they can’t afford to breathe easy. As the surging interest in Lyft demonstrates, rivals are never far behind.

Look out for my next blog about how GDPR can help your business build a future-proof data strategy.

What do you think? Leave a response below or contact me by email.

Have you heard the latest buzz from our DigiLab Hackathon winners?

The innovative LiveHive project was crowned winner of the Sopra Steria UK “Hack the Thing” competition which took place last month.

Sopra Steria DigiLab hosts quarterly hackathons, each with a specific challenge; the most recent was named Hack the Thing. While the aim of the hack was sensor- and IoT-focused, the solution had to address a known sustainability issue. The LiveHive team chose to focus their efforts on monitoring and improving honey bee health and husbandry, and on supporting new beekeepers.

A Sustainable Solution 

Bees play an important role in sustainable agriculture. Their pollinating services are worth around £600 million a year to the UK in boosted yields and better quality seeds and fruits[1]. The UK had approximately 100,000 beekeepers in 1943; by 2010 that number had dropped to 44,000[2]. Fortunately, recent years have seen a resurgence of interest in beekeeping, which has highlighted the need for a product that lets beekeepers explore and extend their knowledge and capabilities through modern, accessible technology.

LiveHive lets beekeepers view important information about the state of their hives and receive alerts, all on their smartphone or mobile device. The social and sharing side of LiveHive is designed to engage and support new beekeepers and give them a platform for more meaningful help from their mentors. The product also allows data to be recorded and analysed, aiding national and international research and furthering education on the subject.

The LiveHive Model

The LiveHive solution integrates three services – hive monitoring, hive inspection and a beekeeping forum – offering access to integrated data and enabling its exchange.

“As a novice beekeeper I’ve observed firsthand how complicated it is to look after a colony of bees. When asking my mentor questions I find myself having to reiterate the details of the particular hive and the history of the colony being discussed. The mentoring would be much more effective and valuable if they had access to the background and context of the hive’s scenario.”

LiveHive integrates the following components:

  • Technology Sensors: to monitor conditions such as temperature and humidity in a bee hive, transmitting the data to Azure cloud for reporting.
  • Human Sensors: a smartphone app that enables the beekeeper to record inspections and receive alerts.
  • Sharing Platform: to allow the novice beekeeper to share information with their mentors and connect to a forum where beekeepers exchange knowledge, ideas and experience. They can also share the specific colony history to help members to understand the context of any question.

How does it actually work?

A Raspberry Pi measures temperature, humidity and light levels in the hive and transmits the measurements to the Microsoft Azure cloud through its IoT Hub.
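For the curious, the device side of a setup like this is only a few lines. Below is a minimal sketch using the azure-iot-device Python SDK; the connection string is a placeholder and read_sensors() is a stub standing in for whatever GPIO/I2C sensor code the team actually uses.

```python
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

# Placeholder connection string: real deployments keep this out of source code.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=hive01;SharedAccessKey=<key>"

def read_sensors():
    # On a real Pi this would query the attached temperature/humidity/light
    # sensors over GPIO or I2C; stubbed here with fixed values.
    return {"temperature_c": 34.5, "humidity_pct": 60.2, "light_lux": 12.0}

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
try:
    while True:
        msg = Message(json.dumps(read_sensors()))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)  # lands in IoT Hub, ready for dashboards and alerts
        time.sleep(60)            # one reading a minute
finally:
    client.shutdown()
```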

Sustainable Innovation

On a larger scale, the hive sensor data and beekeepers’ inspection records together create a large, unique source of primary beekeeping data. This aids research and education into the effects of beekeeping practice on yields and bee health, presenting opportunities to collaborate with research facilities and institutions.

The LiveHive roadmap plans to also put beekeepers in touch with the local community through the website allowing members of the public to report swarms, offer apiary sites and even find out who may be offering local honey!

What’s next? 

The team have already created a buzz among fellow bee projects and beekeepers within Sopra Steria by forming the Sopra Steria International Beekeepers Association, which will be the beta test group for LiveHive. Further opportunities will also be explored by applying the same service design principles to other species, which could aid Government inspection work. The team are also looking at ways to collaborate with Government directorates in Scotland.

It’s just the start for this lot of busy bees but a great example of some of the innovation created in Sopra Steria’s DigiLab!

[1] Mirror, 2016. Why are bee numbers dropping so dramatically in the UK?  

[2] Sustain, 2010. UK bee keeping in decline

If you’re not assessing you’re guessing: the value of an evidence based approach to strategic resource allocation

There are signs at my gym that say ‘If you’re not assessing you’re guessing’. That’s easy to ignore in your personal life, but in a business context measurement is becoming mission critical. At the Police Superintendents’ Association of England and Wales (PSAEW) Annual Conference last week there was considerable talk about stretched resources, starting with the opening speech from the President of the Association, Gareth Thomas.

“I suggest we have a perfect storm developing, comprised of fewer resources, reduced public services, new threats, and a worrying increase in some types of traditional crime. If the model for delivering policing services in the future is fewer people, working longer, each doing ever more, then I suggest that model is fundamentally flawed.”

Other presentations and conversations also highlighted the fatigue officers are feeling from heavy workloads; indeed, 72.2% of respondents to the 2017 Police Federation Pay and Morale Survey said that their workload had increased in the last year.

With talk of fewer resources and overworked officers and teams, measurement takes on another dimension: forces need access to evidence that not only lets them clearly understand the impact of changing demand and resource levels for budgeting purposes, but also helps them balance the welfare of officers.

For the team at Cleveland Police, this ‘evidence-based approach to strategic resource allocation’ is something they’ve been working on for some time. In one of the breakout sessions at the PSAEW Conference, Brian Thomas, Assistant Chief Officer at Cleveland Police, shared his force’s story about the great strides they’ve taken in organisational planning, and how this has helped teams across the force take some of the stress out of resource decision making.

Supported by PrediKt, a new tool developed in conjunction with Sopra Steria, Brian and his team are able to operate in a more informed way.

He shared three areas where the force is now regularly using PrediKt:

Reality testing – validating actual performance against planned performance. PrediKt provides an evidence base to quickly identify what teams are busy doing, and its dashboard automatically highlights when a team’s actual workload is outstripping its resource; for example, when Neighbourhood teams are recording a greater percentage of response work and less time on preventative activities. The force is now able to investigate the reasons behind the inconsistency and put action plans in place to resolve the issue. (The core comparison is sketched in code after these three examples.)

Evidence-based resource planning – moving beyond performance at the individual team level, Cleveland Police are now able to examine resourcing at an organisational level and model different scenarios based on the changing shape of crime; for example, the impact of an increase in domestic burglary, and how resources could be reallocated across the force so that the workload stays balanced across all teams and crime types.

Futures planning – the final example was examining a change to the resource profile: what future resourcing could look like if, say, training days per annum need to increase to comply with new statutory course requirements, or what the impact of reducing officer numbers would be.
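PrediKt itself is proprietary, but the core of the ‘reality testing’ comparison can be sketched in a few lines of pandas. The column names, figures and the 10% tolerance below are illustrative assumptions, not PrediKt’s actual model.

```python
import pandas as pd

# Illustrative reality test: compare actual workload against planned
# resource per team and flag the overstretched ones for investigation.
teams = pd.DataFrame({
    "team":          ["Neighbourhood A", "Neighbourhood B", "Response C"],
    "planned_hours": [1600, 1500, 2000],
    "actual_hours":  [1950, 1480, 2100],
})

teams["utilisation"] = teams["actual_hours"] / teams["planned_hours"]
teams["over_plan"] = teams["utilisation"] > 1.10  # >10% over plan warrants a look

print(teams.sort_values("utilisation", ascending=False))
```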

It’s clear that workload isn’t decreasing. As NPCC Chair Sara Thornton told the conference, ‘everybody knows what police should do more of; few say what we could do less of’. The final presentations also brought home the reality of cyber crime and the changing nature of crime, which will have a huge impact on policing and resourcing in the future.

It’s a world where forces really should be ‘assessing and not guessing’.

Getting a formal evidence base will transform resourcing so forces can truly assess the impact of changes to demand and resource levels, as well as helping to balance the welfare of officers.

Find out more about PrediKt, Sopra Steria’s Police Resource and Demand Modelling Tool, or contact me by email.

Regulation and compliance: the new certainties in life

by Miles Elliott, Director of Credit Risk

Benjamin Franklin once wrote that ‘in this world nothing can be said to be certain, except death and taxes’. In more modern times, especially for financial services organisations, we should perhaps add ‘regulation and compliance’ to the list. In 2018 a wave of new regulation is being introduced, and one of the most far-reaching is the General Data Protection Regulation (GDPR).

GDPR: are you ready…?

From 25 May 2018, organisations across Europe will have to strengthen the controls associated with collecting, managing and using personal data. The resulting activity will mean significant changes to IT systems, as well as to the way organisations engage with their customers.

There’s less than a year to go until GDPR becomes a way of life, but a survey in May 2017 suggested that only 10% of organisations have mature GDPR plans in place – with a further 40% at an intermediate phase.

That leaves half of organisations at the beginning of their compliance journey – and the clock is ticking!

GDPR: the cost of non-compliance…

Becoming fully GDPR compliant will be challenging and will require a holistic approach to data management and governance. Organisations risk underestimating the scope of the activity involved and the amount of time needed to ensure compliance. Another common issue is a lack of the skills and experience needed to deliver such a comprehensive change to governance controls across a business. To put this into context, in 2016 alone almost 1.4 billion data records were compromised across the industry.

Fines for failing to comply with GDPR are expected to be highly punitive, and non-compliance also risks material reputational damage.

Don’t go it alone – work with an expert in assured compliance

So what should today’s hard-pressed organisations do, especially if they don’t understand the full extent of GDPR? The answer is to work with an organisation like Sopra Steria that has a track record in complex data management AND offers a comprehensive approach to GDPR compliance. Our pragmatic ‘think, build and run’ approach empowers organisations to pick and choose the path to GDPR compliance that is right for them. As experts in data, analytics and technology, we can help you quickly identify data gaps and risks, work with you to develop remediation solutions, and support you going forward with ongoing compliance monitoring.

The clock is ticking…

So don’t get caught out! Make sure you aren’t one of the 50% of companies still asking “What is GDPR?” Take your first steps to compliance today and get fully prepared for the 2018 deadline. Remember, 2018 is the year of new regulation – make sure it’s a happy one!

See more information about how we can help you get compliant.

Get in touch to discuss how to meet your GDPR challenge and support your journey to assured compliance.

Information Chaos: the next big business challenge

“Every budget is an IT budget.  Every company is an IT company.  Every business leader is becoming a digital leader. Every person is becoming a technology company. We are entering the era of the Digital Industrial Economy.” – Peter Sondergaard, Gartner.

Most organisations now recognise that managing their information assets is just as important as managing their physical, human and financial assets. So why are so many still drowning in a flood of unmanaged content and information chaos? The symptoms are plain to see: servers overflowing and multiplying, making it hard to find anything; sensitive information leaking, losing competitive advantage and exposing the organisation to litigation risk; information silos developing, frustrating secure collaborative working; and, thanks to cheap cloud storage accessible from personal smartphones and tablets, knowledge assets migrating to places beyond the reach of the company’s information governance processes – if indeed it has any!

Meanwhile new information continues to pour in, in an ever-changing array of formats, through multiple channels and on multiple devices. Organisations face rising costs for maintaining their legacy systems of record, and struggle to keep control of new systems.

No wonder many leaders in Knowledge Management believe that Information Chaos is the next big business challenge.

The core of all these difficulties is a lack of Information Governance. With no rules, users can put their stuff wherever they like: the ‘C’ drive of their laptop, flash drives, Dropbox, etc. Shared network drives, intended to support collaboration, bring irritating access issues – and if no governance process is in place, users can create a folder anywhere and give it any name. So no one knows where to look for things, and people mostly share files with colleagues as email attachments, leading to an increased risk of data breaches, massive duplication, loss of version control and excessive network traffic.

Information governance means:

  • identifying what information classes make up the knowledge assets of the organisation;
  • appointing someone to be the owner (and custodian) of each class of information – this will usually be the appropriate head of function; and
  • establishing rules for naming, storing, protecting and sharing knowledge assets.

The objectives of rationalising document management and introducing proper governance are:

  • To enable full exploitation of information assets, based on:
    • A business-led file plan and document management system (“A place for everything and everything in its place”)
    • Full Enterprise Search to improve productivity and consistency
    • No more repeating work (“re-inventing the wheel”)
  • To rationalise data storage and make savings, by:
    • Keeping one master copy of everything (wherever possible)
    • Maintaining clear version control (because sometimes it’s necessary to keep earlier drafts)
    • Eliminating duplication
    • Deleting ephemeral and superseded documents
  • To ensure the security and integrity of information, by:
    • Applying appropriate access control to all information
    • Ensuring that sensitive information is classified and labelled correctly
    • Ensuring that approved and published information cannot be changed or deleted until the proper time

1.    Developing the Taxonomy

Information Governance requires a clear understanding of the kinds of information the organisation needs in order to function. At Sopra Steria I’ve worked with several clients on this problem using both top-down and bottom-up methods.  In a top-down approach, we help subject matter experts in the business to build a hierarchical taxonomy of their areas of expertise. The classes in the taxonomy will eventually correspond to folders in the idealised corporate file plan.

2.    Knowledge Audit

I supplement this top-down analysis with a bottom-up review of existing file structures, on the basis that frequently occurring document and folder names are likely to signify knowledge classes that need to be represented at the lower levels in the file plan hierarchy. I make use of a disk space analyser tool for this information discovery exercise, or knowledge audit. The more sophisticated tools not only keep track of the most commonly-used terms but also assess the scope and severity of the Information Chaos problem. They can identify where the duplicate, redundant and corrupt files are, together with their volumes. This information can also later support the cleansing and migration stage; i.e. partially automating the process of deleting “bad” files, and moving “useful” information to a new home in the revised corporate file plan.
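A rough sense of how such a bottom-up audit works can be had from a short script. The sketch below counts the most common folder names as candidate knowledge classes and spots byte-identical duplicates by content hash; the root path is a placeholder, and the commercial analysers mentioned above do considerably more.

```python
import hashlib
from collections import Counter
from pathlib import Path

ROOT = Path("/mnt/shared-drive")  # placeholder for the share being audited

# Frequently recurring folder names hint at knowledge classes for the file plan.
folder_names = Counter(p.name.lower() for p in ROOT.rglob("*") if p.is_dir())
print("Most common folder names:", folder_names.most_common(20))

# Byte-identical files in different places are duplication candidates.
# (A production tool would hash large files in chunks rather than whole.)
by_hash, duplicates = {}, []
for f in ROOT.rglob("*"):
    if f.is_file():
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if digest in by_hash:
            duplicates.append((f, by_hash[digest]))
        else:
            by_hash[digest] = f

print(f"{len(duplicates)} duplicate files found")
```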

In summary, an Information Governance project might consist of the following phases:

[Flow diagram: the phases of an Information Governance project, following the numbered headings in this post]

Experience has shown that developing a taxonomy across an entire business (of any size) is very difficult. In fact, the first two (parallel) steps in this process are best carried out piecemeal: team by team, business unit by business unit, project by project, joining the models together later and eliminating any duplicate classes en route. This has the added advantage of delivering early benefits and demonstrating steady progress to management.

3.    Information Architecture

In stage three, the results of the top-down taxonomy work and the bottom-up knowledge audit are combined to develop a new Information Architecture for the business. At its core is a hierarchical folder structure similar to the familiar Windows Explorer layout, but with important differences. In the Information Architecture hierarchy the nodes are classes of information, that is, generic terms such as Project or Supplier, while a File Plan has a specific folder for each real-world instance of a class. So the class Project spawns Project Alpha, Project Bravo, Project Charlie, etc; the Supplier class creates GoliathCo, Bloggs & Sons, and so on.

The other important difference is the association of metadata with each class, and with the corresponding folders in the File Plan.  This is likely to include the standard maintenance metadata (author, owner, creation date, last modified date, etc); plus the document type; any access constraints; and retention schedules and disposal triggers.

Carefully selected business metadata is an invaluable support to Enterprise Search, but can be seen as a nuisance when saving documents. For this reason, metadata should be set as high up in the hierarchy as possible so that content placed in lower level folders can “inherit” the correct values without the need for additional data entry by the user.
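That inheritance rule is simple to state precisely. Here is a minimal sketch, assuming illustrative class and field names: metadata set high in the hierarchy flows down to every descendant folder unless a lower level overrides it.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Folder:
    name: str
    metadata: dict = field(default_factory=dict)
    parent: Optional["Folder"] = None

    def effective_metadata(self) -> dict:
        # Walk up the tree; values set nearer the folder override inherited ones.
        inherited = self.parent.effective_metadata() if self.parent else {}
        return {**inherited, **self.metadata}

projects = Folder("Projects", {"owner": "Head of Delivery", "retention": "7 years"})
alpha = Folder("Project Alpha", parent=projects)
contracts = Folder("Contracts", {"access": "restricted"}, parent=alpha)

print(contracts.effective_metadata())
# {'owner': 'Head of Delivery', 'retention': '7 years', 'access': 'restricted'}
```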

4.    Set up the new File Plan

The next step in the project will be to implement the Information Architecture in a File Plan. How this is done will depend on the selected platform; for example, an Electronic Document and Records Management (EDRM) system, SharePoint, or network shared drives (although the latter will not be able to support a rich metadata schema such as is described above).

5.    Cleansing and Migration

With the target File Plan in place the last stage of the project can begin. Owners sort through their holdings, deleting the documents they no longer need and moving the valuable content to the proper places in the File Plan. This is a “housekeeping” exercise, an inevitable chore for many, and management must be careful to allow their staff sufficient time to complete it.

With an agreed Information Architecture, and a File Plan based on it that all staff can use, proper Information Governance can be introduced.

HINTS AND TIPS

  1. Solving your Information Chaos problem will mean an unavoidable “housekeeping” exercise to identify your useful content and delete the rubbish.
  2. You can reduce the pain, and avoid a future recurrence, by developing a new File Plan to move your cleansed content into.
  3. Develop the File Plan by a combination of “top-down” and “bottom-up” – but do it in small bites, joining all the pieces up later.

Conclusions

Addressing the Information Chaos problem requires: first, the development of a target Information Architecture; and second, an extensive “housekeeping” exercise to eliminate the dross and migrate the organisation’s vital knowledge assets. The benefits of such a project will be:

  • Reduction of business risk by ensuring:
    • full traceability of decision making
    • an increased ability to respond to enquiries (legal, regulatory, FoI, audit, etc)
    • a reduced risk of litigation
  • Boosted user productivity by:
    • minimising the admin burden on end users
    • providing secure collaborative working through a shared Information Architecture
    • better re-use of existing knowledge assets
  • Cost reduction
  • Enhanced information quality
  • Streamlined document and records management processes


Share with me any experiences you have of successful information cleansing and migration, and any tips on how you’ve made the process work in your organisation. Leave a reply below or contact me by email.

Why regulatory compliance offers a win-win situation

by Tej Sembi, Business Development Sopra Steria

A number of scandals in recent years, such as the flawed reporting of hip replacement devices that led to huge compensation payouts and fines, suggest that the medical device industry has a problem. Do the big players really care? Well, the work we have been doing shows that all concerned in this industry do care: patient safety is their number one concern.

The world of regulation is changing and catching up with technology. New standards and medical device directives are being introduced worldwide, from the US to the UK, Europe and beyond, and they make it clear that the industry must behave more responsibly. For example, ISO 13485:2016 extends the previous edition of the quality management system requirements for medical devices, with a greater emphasis on risk.

A driver for differentiation

While this is clearly great news for the end user, there is also another positive outcome from these changes. I believe new regulatory regimes present a fantastic opportunity for medical device and implant companies to radically change the way they use and interpret product data to provide business benefit. In fact, with the right mindset, they represent a driver for differentiation and increased competitiveness.

Let me explain. Companies have to comply with the legislation, which means they are committed to spending in this area, so does it not make sense to maximise that investment? The data will need to be collated and managed anyway, so why not look at how it could also be used by other business areas, tapping into this much underused resource?

On average, companies are said to base decisions on around 20% of the data available to them, so what could be achieved by harnessing more? These untapped sources contain a wealth of information. Complying with the new regulations gives companies the chance to gain better visibility and control over clinical outcomes and their supporting data, which could be used across the organisation to enhance patient safety, improve portfolio management, and strengthen sales and marketing, alongside its vital role in compliance.

Reducing exposure to risk

Ultimately, the right solution to the compliance challenge should deliver a better understanding of customer and patient needs and outcomes, bring clarity to validation, verification and design activities, and support the prediction of product lifecycles in terms of maintenance, performance, end-of-life and potential usage-based issues or damage.

The more an organisation knows about each of these areas of its business, the better able it will be to reduce the company’s exposure to litigation, improve operational efficiencies and sales opportunities and, crucially, enhance product development and patient outcomes.

Thus, regulatory compliance becomes a win-win situation all round: healthcare providers have confidence in the efficacy of the medical devices they procure, patients trust that the devices they depend on are safe and robust, and manufacturers gain the customer and product insight they need to differentiate and protect their brand reputation.

What do you think, am I mad to suggest compliance is really an opportunity? Leave a reply below, or contact me by email, I’d love to hear your thoughts.