Quantum Computers: A Beginner’s Guide

What they are, what they do, and what they mean for you

What if you could make a computer powerful enough to process all the information in the universe?

This might seem like something torn straight from fiction, and until recently, it was. However, with the arrival of quantum computing, we are about to make it reality. Recent breakthroughs by Intel and Google have catapulted the technology into the news. We now have lab prototypes, Silicon Valley start-ups and a multi-billion dollar research industry. Hype is on the rise, and we are seemingly on the cusp of a quantum revolution so powerful that it will completely transform our world.

On the back of this sensationalism trails confusion. What exactly are these machines and how do they work? And, most importantly, how will they change the world in which we live?


At the most basic level, the difference between a standard computer and a quantum computer boils down to one thing: information storage. Information on standard computers is represented as bits – values of either 0 or 1 – and these provide operational instructions for the computer.

This differs on quantum computers, which store information at a physical level so microscopic that the familiar laws of classical physics no longer apply. At this minuscule scale, the laws of quantum mechanics take over and particles begin to behave in bizarre and unpredictable ways. As a result, these devices have an entirely different system of storing information: qubits, or quantum bits.

Unlike the standard computer's bit, which can have the value of either 0 or 1, a qubit can have the value of 0, 1, or both 0 and 1 at the same time. It can do this because of one of the fundamental (and most baffling) principles of quantum mechanics: quantum superposition, the idea that one particle can exist in multiple states at the same time. Put another way: imagine flipping a coin. In the world as we know it (and therefore the world of standard computing), you can only have one of two results: heads or tails. In the quantum world, the result can be heads and tails.

What does all of this mean in practice? In short, the answer is speed. Because qubits can exist in multiple states at the same time, they are capable of running multiple calculations simultaneously. For example, a 1-qubit computer can conduct 2 calculations at the same time, a 2-qubit computer can conduct 4, and a 3-qubit computer can conduct 8 – the capacity doubles with every qubit added. Operating under these rules, quantum computers bypass the "one-at-a-time" sequence of calculation that a classical computer is bound by. In the process, they become the ultimate multi-taskers.
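To make that doubling concrete, here is a toy sketch in plain Python (not a real quantum SDK – the function name is my own) of how a simulator represents an n-qubit register: a list of 2^n amplitudes, one for every classical bit pattern the register can hold at once.

```python
import math

def uniform_superposition(n_qubits):
    """Toy state vector for n qubits in an equal superposition:
    one amplitude per classical bit pattern, 2**n in total."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)  # equal weight on every basis state
    return [amp] * dim

for n in (1, 2, 3, 10):
    print(n, "qubit(s) ->", len(uniform_superposition(n)), "basis states")
```

Even at 10 qubits the register already spans 1,024 states at once, which is why the numbers climb so quickly.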

To give you a taste of what that kind of speed might look like in real terms, we can look back to 2015, when Google and NASA partnered to test an early prototype of a quantum computer called the D-Wave 2X. Taking on a complex optimisation problem, the D-Wave worked roughly 100 million times faster than a single-core classical computer and produced a solution in seconds. Given the same problem, a standard laptop would have taken 10,000 years.


Given their potential for speed, it is easy to imagine a staggering range of possibilities and use cases for these machines. The current reality is slightly less glamorous. It is inaccurate to think of quantum computers as simply "better" versions of classical computers. They won't speed up just any task run through them (although they may do that in some instances). They are, in fact, only suited to solving highly specific problems in certain contexts – but there's still a lot to be excited about.

One possibility that has attracted a lot of fanfare lies in the field of medicine. Last year, IBM made headlines when they used their quantum computer to successfully simulate the molecular structure of beryllium hydride, the most complex molecule ever simulated on a quantum machine. This is a field of research which classical computers usually have extreme difficulty with, and even supercomputers struggle to cope with the vast range of atomic (and sometimes quantum) complexities presented by complex molecular structures. Quantum computers, on the other hand, are able to read and predict the behaviour of such molecules with ease, even at a minuscule level. This ability is significant not just in an academic context; it is precisely this process of simulating molecules that is currently used to produce new drugs and treatments for disease. Harnessing the power of quantum computing for this kind of research could lead to a revolution in the development of new medicines.

But while quantum computers might set in motion a new wave of scientific innovation, they may also give rise to significant challenges. One potentially hazardous use case is the quantum computer's ability to factorise extremely large numbers. While this might seem harmless at first sight, it is already stirring up anxieties in banks and governments around the world. Modern cryptography, which secures the majority of data worldwide, relies on complex mathematical problems – tied to factorisation – that classical computers lack the power to solve. Such problems, however, are no match for quantum computers. The arrival of these machines could render modern methods of cryptography meaningless, leaving everything from our passwords and bank details to state secrets vulnerable to being hacked, stolen or misused in the blink of an eye.
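To see why factorisation matters so much for security, consider the classical brute-force approach. The sketch below is my own illustration, not a cryptographic attack: it factors a number by trial division, and the work grows roughly with the square root of the number – hopeless for the hundreds-of-digits moduli used in real cryptography, yet exactly the kind of problem a quantum computer running Shor's algorithm could, in principle, solve efficiently.

```python
def trial_division(n):
    """Naive classical factoring: try divisors up to sqrt(n).
    The cost grows exponentially in the number of digits of n,
    which is what keeps today's encryption keys safe."""
    factor, factors = 2, []
    while factor * factor <= n:
        while n % factor == 0:
            factors.append(factor)
            n //= factor
        factor += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))    # [3, 5]
print(trial_division(3233))  # [53, 61] -- a toy two-prime modulus
```

For a toy four-digit number this is instant; for a 600-digit key the same loop would outlive the universe, which is the asymmetry modern cryptography depends on.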


Despite the rapid progress made over the last few years, an extensive list of obstacles remains, with hardware right at the top. Quantum computers are extremely delicate machines, and a highly specialised environment is required to produce the quantum state that gives qubits their special properties. For example, they must be cooled to near absolute zero (roughly the temperature of outer space) and shielded from even the slightest electrical or thermal interference. As a result, today's machines are highly unstable, often maintaining their quantum states for just a few milliseconds before collapsing back into normality – hardly practical for regular use.

Alongside these hardware challenges marches an additional problem: a software deficit. Like classical computers, quantum computers need software to function, and this software has proved extremely challenging to create. We currently have very few effective quantum algorithms, and without the right algorithms these machines are essentially useless – like having a Mac without a power button or keyboard. Strides are being made in this area (at QuSoft, for example), but we would need to see vast advances before widespread adoption becomes plausible. In other words, don't expect to start "quoogling" any time soon.

So despite all the hype that has recently surrounded quantum computers, the reality is that now (and for the foreseeable future) they are nothing more than expensive corporate toys: glossy, futuristic and fascinating, but with limited practical applications and a hefty price tag attached. Is the quantum revolution just around the corner? Probably not. Does that mean you should forget about them? Absolutely not.

The Geek Shall Inherit

AI has the potential to be the greatest invention in humanity's history. It should benefit all of humanity equally, but instead we are heading towards a world in which one particular group – the geeks – benefits most. AI is fundamentally more likely to favour the values of its designers, and whether we train our AI on a dataset gathered from humans or on purely simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer: humans are already riddled with bias. Be it confirmation bias, selection bias or any of the others, we constantly create unfair systems and draw inaccurate conclusions, which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it's not perfect. AI can analyse dramatically more data than a human and thereby form a more rounded point of view. More rounded, however, is not completely rounded, and that matters enormously for any AI which carries out a task orders of magnitude faster than a human.

To retain our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but run it 1,000x faster, we end up with 100x more injustice in the world. To hold today's levels steady, that same system would need to make only 0.1% as many unethical decisions per transaction.
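The arithmetic above can be sketched in a couple of lines; `relative_injustice` is a name of my own for the article's back-of-envelope measure.

```python
def relative_injustice(speedup, error_ratio):
    """Unethical decisions per unit time, relative to a human baseline:
    (throughput multiplier) x (unethical decisions per transaction,
    expressed as a fraction of the human rate)."""
    return speedup * error_ratio

# The article's example: 1,000x faster at 10% of the human error rate
print(relative_injustice(1000, 0.10))   # ~100: a hundred times more injustice
# Break-even for that same system: 0.1% of the human error rate
print(relative_injustice(1000, 0.001))  # ~1: today's baseline, merely retained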

For the sake of rhyme, I've titled this blog 'The Geek Shall Inherit'. I am using a stereotype myself, but I want to identify the people who are building AI today. Though I firmly believe that anyone can and should be involved in building these systems, that's not a reflection of our world today. Our society and culture have told certain people – women, for instance – from a young age that boys work on computers and girls do not. This is wrong, damaging and needs remedying, but that's a problem to tackle in a different blog. For now, let's simply accept that the people building AI tend to be a certain type of person: geeks. And if we are to stereotype a geek, we're picturing someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

The problem is at its greatest with more manual forms of AI creation. Though we may use a dataset gathered from a more diverse group of people, there will still be selection bias in that data, as well as bias introduced directly by the developers if they are tasked with annotating it. Whether intentionally or not, humans will always favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men like its creators more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join their board and vote on the firm's investments. VITAL shared a bias with its creators – nepotism – showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference for algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning, the method employed by Google's DeepMind in the AlphaZero project. The significant leap from AlphaGo to AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it is not constrained by human knowledge of the game. A notable precursor is 'move 37' against Lee Sedol, played by AlphaGo Lee before the shift to pure self-play. That move was seen as a stroke of creative brilliance no human would ever have played, even though the system behind it was trained on human data.

Humans also use proxies to determine success in these games. An example is AlphaZero playing chess. Where humans use a points system on pieces as a proxy for performance in a game, AlphaZero doesn't care about its score. It will sacrifice valuable pieces for cheap ones when moves which appear more beneficial are available, because it cares only about winning. And win it does, if only by a narrow margin.
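The 'points system' proxy is easy to sketch. This is a hand-rolled illustration of the human heuristic, not DeepMind's code: it scores a position purely by material, which is precisely the signal a win-only optimiser is free to ignore.

```python
# The classic piece values humans use as a proxy for who is winning
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(own_pieces, opp_pieces):
    """Human proxy metric: my material minus my opponent's."""
    own = sum(PIECE_VALUES[p] for p in own_pieces)
    opp = sum(PIECE_VALUES[p] for p in opp_pieces)
    return own - opp

# Being down a queen (9) against a knight (3) looks disastrous by the proxy...
print(material_score(["R", "N"], ["R", "Q"]))  # -6
# ...but the proxy says nothing about whether the position actually wins.
```

The gap between the proxy (material) and the true objective (winning) is exactly where a machine's play starts to look alien to us.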

So where is the bias in this system? Though the system may train in a simulated world, two areas for bias remain. First, the layers of the artificial neural network are decided upon by those same biased developers. Second, it is simulating a game designed by humans, on a human-designed board with human-written rules. Go, for instance, gives the first move – and a first-move advantage – to black, while chess gives it to white. Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over others in life.

The same issue remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and human-maintained roads, and the human-written rules of the highway code. We might shift here from 'the geek shall inherit' to 'the lawyer shall inherit'. Less catchy, but making a system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity. Terminator scenarios aside, we should pursue the technology. I would propose tackling this issue on two fronts.


The first front is diversity: fair and equal representation among the people who build AI. This would be hugely beneficial to the technology industry as a whole, but it is of paramount concern in the creation of thinking machines. We want our AI to think in a way that suits everyone, and our best chance of success is fair and equal representation throughout its development. We don't know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem before then, but we should do everything we can.


The second front is holding AI to a far higher ethical standard than we hold ourselves. Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than we can, and may simply accelerate an unfair system. If we want more equality in the world, a system must weight equality more heavily as a metric than speed, and at the very least reduce inequality by as much as it increases processing speed, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we create a system 1,000x faster, we must reduce unequal actions in that system by at least 99.9%.

Doing this only retains our current baseline. To make progress in this area we need to go a step further, reducing inequality before increasing the speed.

AI, VR and the societal impact of technology: our takeaways from Web Summit 2017

Together with my Digital Innovation colleague Morgan Korchia, I was lucky enough to go to Web Summit 2017 in Lisbon – getting together with 60,000 other nerds, inventors, investors, writers and more. Now that a few weeks have passed, we’ve had time to collect our thoughts and reflect on what turned out to be a truly brilliant week.

We had three goals in mind when we set out:

  1. Investigate the most influential and disruptive technologies of today, so that we can identify those which we should begin using in our business
  2. Sense where our market is going so that we can place the right bets now to benefit our business within a 5-year timeframe
  3. Meet the start-ups and innovators who are driving this change, and identify scope for collaboration with them

Web Summit proved useful on all fronts – but it wasn't without surprises. It's almost impossible to go to an event like this without preconceptions about the types of technologies we are going to hear about. On the surface, there seemed to be a fairly even spread between robotics, data, social media, automation, health, finance, society and gaming (calculated from the accurate science of 'what topic each stage focused on'). However, after attending the speeches themselves, we detected some overarching themes which permeated all topics. Here are my findings:

  • As many as 1/3rd of all presentations strongly focus on AI – be that in the gaming, finance, automotive or health stage
  • Around 20% of presentations primarily concern themselves with society, or the societal impact of technology
  • Augmented and virtual reality feature in just over 10% of presentations, which is significantly less than we have seen in previous years

This reflects my own experience at Web Summit, although I perhaps directed myself more towards AI, spending much of my time between the 'autotech / talkrobot' stage and the main stage. From Brian Krzanich, CEO of Intel, to Bryan Johnson, CEO of Kernel and previously Braintree, we can see that AI is so prevalent today that a return to an AI winter is unimaginable. It's not just hype; it's now woven too closely into the fabric of our businesses for that. What's more, too many people are implementing AI and machine learning in a scalable and profitable way for it to be dispensable. It's even approaching the point of ubiquity where AI just becomes software: it works, and we don't even consider the incredible intelligence sitting behind it.

An important sub-topic within AI is also picking up steam: AI ethics. A surprise keynote from Stephen Hawking reminded us that while successful AI could be the most valuable achievement in our species' history, it could also be our end if we get it wrong. Elsewhere, Max Tegmark, author of Life 3.0 (recommended by Elon Musk… and me!), provided an interesting exploration of the risks and ethical dilemmas that face us as we develop increasingly intelligent machines.

Society was also a theme visited by many stages. This started with an eye-opening performance from Margrethe Vestager, who spoke about how competition law clears the path for innovation. She used Google as an example: while highly innovative themselves, they abuse their position of power, pushing competitors down their search rankings and hampering other innovations' chances of success. The Web Summit closed with an impassioned speech from Al Gore, who called on us all to use whatever ability, creativity and funding we have to save our environment and protect society as a whole, for everyone's benefit.

As for AR and VR, we saw far less exposure this year than at previous events (although it was still the third most presented-on theme). I don't necessarily think this means it's going away for good, although it may mean that in the immediate term it will have a smaller impact on our world than we thought it might. As a result, rather than shouting about it today, we are looking for cases where it provides genuine value beyond a proof of concept.

I also take some interest in the topics which were missing, or at least presented less frequently: among them voice interfaces, cyber security and smart cities. I don't think this is because any of these topics have become less relevant. Cyber security is more important now than ever, and voice interfaces are gaining huge traction in consumer and professional markets. However, an event like Web Summit doesn't need to add much to those conversations. We now regard cyber security as intrinsic to everything we do, and, a few presentations aside (including one from Amazon's own Werner Vogels), we know that voice is here and that we need to find viable implementations. Rather than simply affirming our beliefs, I think a decision was made to put the focus elsewhere, on the things we need to know more about to broaden our horizons over the week.

We also took the time to speak to the start-ups dotted around the event space. Some caught our interest, like Nam.r, who are using AI in a way that drives GDPR compliance rather than causing the headache many of us assume it will. Others, like Mapwize.io and Skylab.global, are making use of primary technological developments that were formative and unscalable a year ago. We also took note of the start-ups spun out of bigger businesses, like Waymo, part of Google's Alphabet business, which is acting as a bellwether on which many of the big players are placing their bets.

The priority for us now is to build some of these findings into our own strategy – a much taller order than spending a week in Lisbon absorbing it all. If you're wondering what events to attend next year, Web Summit should be high up on your list, and I hope to see you there!

What are your thoughts on these topics? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Have you heard the latest buzz from our DigiLab Hackathon winners?

The innovative LiveHive project was crowned winner of the Sopra Steria UK “Hack the Thing” competition which took place last month.

Sopra Steria DigiLab hosts quarterly Hackathons, each with a specific challenge; the most recent was named 'Hack the Thing'. While the aim of the hack was sensor- and IoT-focused, the solution had to address a known sustainability issue. The LiveHive team chose to focus their efforts on monitoring and improving honey bee health and husbandry, and on supporting new beekeepers.

A Sustainable Solution 

Bees play an important role in sustainability within agriculture. Their pollinating services are worth around £600 million a year in the UK in boosting yields and the quality of seeds and fruits[1]. The UK had approximately 100,000 beekeepers in 1943, but this number had dropped to 44,000 by 2010[2]. Fortunately, in recent years there has been a resurgence of interest in beekeeping, which has highlighted the need for a product that allows beekeepers to explore and extend their knowledge and capabilities through modern, accessible technology.

LiveHive allows beekeepers to view important information about the state of their hives, and to receive alerts, all on their smartphone or mobile device. The social and sharing side of LiveHive is designed to engage and support new beekeepers and give them a platform for more meaningful help from their mentors. The product also allows data to be recorded and analysed, aiding national and international research and furthering education on the subject.

The LiveHive Model

The LiveHive solution integrates three services – hive monitoring, hive inspection and a beekeeping forum – offering access to integrated data and enabling its exchange.

“As a novice beekeeper I’ve observed firsthand how complicated it is to look after a colony of bees. When asking my mentor questions I find myself having to reiterate the details of the particular hive and the history of the colony being discussed. The mentoring would be much more effective and valuable if they had access to the background and context of the hive’s scenario.”

LiveHive integrates the following components:

  • Technology Sensors: to monitor conditions such as temperature and humidity in a bee hive, transmitting the data to Azure cloud for reporting.
  • Human Sensors: a Smartphone app that enables the beekeeper to record inspections and receive alerts.
  • Sharing Platform: to allow the novice beekeeper to share information with their mentors and connect to a forum where beekeepers exchange knowledge, ideas and experience. They can also share the specific colony history to help members to understand the context of any question.

How does it actually work?

A Raspberry Pi measures temperature, humidity and light levels in the hive and transmits the measurements to the Microsoft Azure cloud through its IoT Hub.
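In outline, the device-side code is straightforward. The sketch below is illustrative only – the field names and function are my own, as the actual LiveHive schema isn't published: it packages one sensor reading as a JSON telemetry message, which the Pi would then hand to the Azure IoT device SDK for delivery to the IoT Hub.

```python
import json
import time

def hive_telemetry(hive_id, temp_c, humidity_pct, light_lux):
    """Package one sensor reading as a JSON telemetry message."""
    return json.dumps({
        "hiveId": hive_id,
        "timestamp": int(time.time()),  # seconds since epoch
        "temperatureC": temp_c,
        "humidityPct": humidity_pct,
        "lightLux": light_lux,
    })

msg = hive_telemetry("hive-01", 34.5, 62.0, 120)
# On the Pi, this string would then be sent with the Azure IoT device
# SDK, e.g. via IoTHubDeviceClient's send_message(msg).
```

Keeping the payload as plain JSON means the same message can feed the reporting dashboard, the alerting rules and the research dataset without translation.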

Sustainable Innovation

On a larger scale, the hive sensor data and beekeepers' inspection records create a large, unique source of primary beekeeping data. This aids research and education into the effects of beekeeping practice on yields and bee health, presenting opportunities to collaborate with research facilities and institutions.

The LiveHive roadmap also plans to put beekeepers in touch with the local community through the website, allowing members of the public to report swarms, offer apiary sites and even find out who may be offering local honey!

What’s next? 

The team have already created a buzz with fellow bee projects and beekeepers within Sopra Steria by forming the Sopra Steria International Beekeepers Association, which will be the beta test group for LiveHive. Further opportunities will also be explored by applying the same service design principles to other species, which could aid Government inspection work. The team are also looking at ways to collaborate with Government directorates in Scotland.

It’s just the start for this lot of busy bees but a great example of some of the innovation created in Sopra Steria’s DigiLab!

[1] Mirror, 2016. Why are bee numbers dropping so dramatically in the UK?  

[2] Sustain, 2010. UK bee keeping in decline

Using digital technologies to address complex problems – what can we learn from other governments?

It goes without saying that governments face incredibly complex challenges. Sustaining cohesive communities in the face of demographic, economic, security and other challenges will test the ingenuity of politicians and civil servants.

In recent blogs I’ve questioned the industrial-age organisation of government and highlighted how the private sector is improving services through digital technologies. Now I would like to shift the emphasis and highlight how governments around the world employ digital technology to drive problem solving.

And I will start by looking at one of the most significant problems facing individuals, families and communities – mental health.

Nearly one fifth of the UK population have a mental health condition

Mental health conditions cover a wide range of disorders and vary from mild to severe problems. The most common types are anxiety and depressive disorders (9% of all adults). More severe and psychotic disorders are much less common.

Recent research has found that a third of fit notes (they used to be called sick notes) issued by GPs are for psychiatric problems. The employment rate for people with mental health conditions is 21% compared with 49% for all disabled people and over 80% for non-disabled people.

Almost half of benefits claimants of Employment and Support Allowance in England are receiving payments as the result of mental and behavioural disorders. Recent independent studies estimate that cash benefits paid to those with mental health conditions are around £9.5 billion a year and administrative costs are £240 million.

This illustrates the financial costs of mental health conditions. But it fails to address the personal impact on individuals, their families and the wider community. That is why the NHS is putting mental health front and centre, in what was recently described as ‘the world’s most ambitious effort to treat depression, anxiety and other common mental illnesses’.

Using technology to create community solutions

Although overall spending on mental health will rise by over 4% in 2017/18, many areas of the country are under pressure to provide enough high quality services.

We also know that mental health is a very complex problem that goes beyond the capacity of any one organisation to understand and respond to. There is disagreement about the causes of the problems and the best way to tackle them.

Which is why Creating Community Solutions is such an exciting project.

In the US, following the Sandy Hook tragedy, the Obama administration launched a national dialogue on mental health. It soon became clear that, while mental illness affects nearly every family, there is a continued struggle to have an open and honest conversation around the issue. Misperceptions, discrimination, fears of social consequences, and the discomfort associated with talking about such illnesses all tend to keep people silent.

The challenge facing the administration was how to convene a national participation process that would help Americans to learn more about mental health issues, assess how mental health problems affect their communities and younger populations, and decide what actions to take to improve mental health in their families, schools, and communities.

Officials from across the administration collaborated under the umbrella of Creating Community Solutions. They designed an online platform and process that integrated online / offline and national / local levels of collaboration. The platform has promoted a nationwide discussion on mental health. It has given Americans a chance to learn more about mental health issues – from each other and from research. For example, in December last year, and all over the country, hundreds of thousands of people used their mobile phones to get together in small groups for one-hour discussions on mental health.

What can we learn from the US?

Creating Community Solutions is an amazing example of how technology can be used to overcome barriers, give access to relevant information and promote participation and mutual support. As a platform, rather than a conventionally structured project, it straddles traditional administrative boundaries and provides support in a distributed way.

I’d like to see our government adopting a similar approach, using technology to break down hierarchical barriers and using platforms to promote collaboration across public services and with communities.

I’ll be writing about other innovative ideas in future blogs. In the meantime, do you know of other innovative solutions to complex public problems? What are the exciting ideas informing your own work – particularly if you are working in the public sector – and how are you implementing them?

Let me know in the comments below or contact me by email.

The #DigiInventorsChallenge finalists face the Dragons: rather than breathing fire, we were blown away!

“The difference between ordinary and extraordinary is that little extra.” Jimmy Johnson – American Football Coach

In the #DigiInventorsChallenge, a Scottish competition run in association with Andy Murray and the Digital Health & Care Institute (DHI) and sponsored by Sopra Steria, six teams involving more than 30 teenagers across Scotland were shortlisted to compete in the 2017 final.

I was honoured to be part of the #DigiInventorsBootcamp and the judging panel evaluating the six talented finalist teams' ideas to transform health, fitness and wellbeing amongst Scotland's young people. The teams all oozed confidence, passion and flair for their inventions, and we really wished we could take all six from idea to invention!

I harnessed my inner 'Dragon' and took my seat in the judges' den alongside fellow judges from DHI, Vodafone, Microsoft, Toshiba and Aberlour Children's Trust. I was not prepared to be as blown away as I was by the innovation, insight, planning and forward thinking these young Scots had put into their pitches. It was very clear to me that the finalists had learned a great deal from the masterclasses – which included 'Idea to Invention', 'Developing your idea with users in mind' and 'Marketing you and your product' – and the meet-the-expert salons. I couldn't help thinking how impressive this whole experience will look on their CVs and personal statements, and how much older I was than them before I gave my first pitch, which was nowhere near as glossy or polished!

The 77 Group presented, including a video message from Andy in which he asked the teens to take on board all they'd heard and learned over the two days. It was great to hear one of them quote Sopra Steria's keynote speaker, Head of Regional Government Alison McLaughlin, by repeating her mantra:

“Work Hard – Have fun – Make a Difference”

We all recognised how powerful it was to deliver strong messages to the teens, giving them the drive and passion needed to make the most of their experience.

What’s next?

There can only be one winner, and the winning team will be announced at Andy Murray Live on 7 November 2017, where they will also get the chance to meet Andy himself. The winning team will receive iPads, a cheque for £2,000 and the opportunity to see their design developed into a prototype by DHI and Sopra Steria. I can’t wait to blog after 7 November to share the winning idea and photos from the event.

Find out more about the inaugural #DigiInventorsChallenge and the six shortlisted teams.

When fast gets very fast: the dizzying pace of technology in the private sector and what this means for the public sector

In recent blogs I described why I think intense competition compels organisations to introduce new business models, and why that competition is accelerating because of global markets and the introduction of new technology.

Contrast this with the system that is supposed to drive innovation and service improvement in public services.  Innovation in a global market does not – and cannot – rely upon a best practice circular. Yet our mindset in government and across the public sector is that this is precisely how we expect innovation and continuous improvement to be stimulated and reproduced.

We still have a distinctly top-down system based on sucking in best practice to some central agency. There it is checked, audited and inspected. Then it is spat out over the next five years to a reluctant audience on the front line. The manager in the local hospital or council has neither the incentive nor the inclination to accept what a ‘colleague’ down the road is doing because, as you will have heard many times, ‘it might work there but we are different’.

This mechanism is clumsy and ineffectual. Yet in the private sector, we appear to have found a different way to share best practice – we pinch it.

The intense pressure from competition forces the best companies to copy and refine whatever they can from their competitors to become best in class.  And the rate of innovation and adoption will continue to accelerate. Take, for example, the smartphone technology that gave rise to Uber (despite their recent problems in London) and how, before the world figures out how to regulate ride-sharing, self-driving cars will have made those regulations obsolete.

It is in that vein that I am increasingly struck by the dichotomy of language that describes the difference between the public and private spheres. It is not uncommon to hear the Government, when talking about the economy, constantly emphasising the challenge of improving private sector productivity and creating a more entrepreneurial society.

Yet, when it comes to reforming the public sector, the emphasis tends to default to centralised controls.  There is unease and opposition in some quarters to flexibility and change, with insistence on preserving structures and centralised systems.  These two worlds, public and private, which you and I inhabit daily, cannot remain artificially divided forever because, contrary to popular belief, these two worlds are not made up of fundamentally different people.

Nor are the pressures on the public and private sectors completely different.

Both face the challenge of becoming more responsive and accountable to their customers or service users, their employees and wider society.  Also, if we are to remain true to concepts of the welfare state, universal provision, social justice and equity in the delivery of public services, we need to address the pressures of global markets and the challenge to representative government.

Why?  Because these pressures are calling into question the ability of traditional tools and levers – such as the way the Government exercises legitimacy, ownership and control – to respond to modern needs and pressures.

Our challenge is to construct new tools and levers that stimulate public services to find a way of promoting practitioners whose experience and reputation give them the self-confidence to lead others to innovate. And for the system to develop a set of incentives, and the institutions a set of capacities, to continuously reinvent themselves in ways that align individual interest with the wider public realm. I am not saying the private sector has all the answers, but it is certainly worth exchanging ideas.

If you enjoyed this, you might also enjoy another recent post inspired by the innovation demonstrated by Apple.

In future blogs I plan to dig deeper into how public services can be reformed, and the role of competition and choice in public service supply chains. As always, I’d be grateful for your thoughts and comments – please get in touch.