The Geek Shall Inherit

AI has the potential to be the greatest invention in humanity's history. It should benefit all of humanity equally, but instead we're heading towards a world in which one particular group, the geeks, benefits most. AI is fundamentally more likely to favour the values of its designers, and whether we train it on a dataset gathered from humans or on purely simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer: humans are already riddled with bias. Be it confirmation bias, selection bias or in-group bias, we constantly create unfair systems and draw inaccurate conclusions, which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it's not perfect. AI can analyse dramatically more data than a human and, by doing so, generate a more rounded point of view. More rounded, however, is not completely rounded, and that shortfall becomes significant in any AI which carries out a task orders of magnitude faster than a human.

To retain our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1,000x faster, we end up with 100x more injustice in the world. To retain today's levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
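
To put numbers on that, here's a quick back-of-the-envelope Python sketch (the figures are the illustrative ones above, not measurements of any real system):

# Net injustice scales with (unethical decisions per transaction) x (transactions per unit time)
human_rate = 1.0      # unethical decisions per transaction, normalised to 1 for a human
human_speed = 1.0     # transactions per unit time, normalised to 1 for a human

ai_rate = 0.10        # "only 10% as many unethical decisions per transaction"
ai_speed = 1000.0     # "1000x faster"

print((ai_rate * ai_speed) / (human_rate * human_speed))  # 100.0 -> 100x more injustice overall
print(human_rate / ai_speed)                              # 0.001 -> 0.1% needed just to break even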

For the sake of rhyme, I've titled this blog 'The Geek Shall Inherit'. I am myself using a stereotype, but I want to identify the people who are building AI today. Though I firmly support the idea that anyone can and should be involved in building these systems, that's not a reflection of our world today. Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not. This is wrong, damaging and needs remedying, but that's a problem to tackle in a different blog. For now, let's simply accept that the people building AI tend to be a certain type of person: geeks. And if we are to stereotype a geek, we're thinking about someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation the problem is at its greatest. Though we may be using a dataset gathered from a more diverse group of people, there will still be selection bias in that data, as well as bias introduced directly by the developers if they are tasked with annotating it. Intentionally or not, humans will always favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join their board and vote on investments for the firm. VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference for algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning. This is the method employed by Google's DeepMind in the AlphaZero project. The significant leap between AlphaGo and AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it is not constrained by human knowledge of the game. The notable exception here is 'move 37' against Lee Sedol, played by the earlier AlphaGo Lee, which was still trained on human games. That move was seen as a stroke of creative brilliance that no human would ever have played, even though the system behind it learned from human data.

Humans also use proxies to determine success in these games. An example of this is AlphaZero playing chess. Where humans use a points system on pieces as a proxy for how well a game is going, AlphaZero doesn't care about its material score. It will sacrifice valuable pieces for cheap ones when moves that appear more beneficial are available, because its only concern is winning. And win it does, even if only by a narrow margin.

So where is the bias in this system? Though the system may be training in a simulated world, two areas for bias remain. First, the architecture of the artificial neural network – the number and shape of its layers – is decided upon by those same biased developers. Second, it is simulating a game designed by humans: the board and rules of Go were designed by people. Go, for instance, gives the first-move advantage to Black, while chess gives it to White. Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over the other in life.

The same issue remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and maintained roads, and the human-written rules of the highway code. We might shift here from 'the geek shall inherit' to 'the lawyer shall inherit'. Less catchy, but making a system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity.  Terminator scenarios permitting, we should pursue the technology.  I would propose tackling this issue from two fronts.


The first front is fair representation: ensuring the people building AI reflect everyone it will affect. This would be hugely beneficial to the technology industry as a whole, but it's of paramount concern in the creation of thinking machines. We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development. We don't know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can to fix it.


The second front is prioritising equality over speed. Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than we can and may simply accelerate an unfair system. If we want more equality in the world, a system must treat equality as a more important metric than speed, and must at the very least reduce inequality by as much as the process speed is increased, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we make a system 1,000x faster, we must reduce unequal actions by at least 99.9%.

Doing this only retains our current baseline. To make progress in this area we need to go a step further, reducing inequality before increasing the speed.
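
As a rule of thumb, the minimum reduction needed just to hold the baseline is 1 - 1/speed-up. A tiny Python sketch to check the two figures above:

def required_reduction(speedup):
    # Minimum reduction in unequal actions needed just to hold today's baseline
    return 1 - 1 / speedup

for s in (10, 1000):
    print(f"{s}x faster -> at least {required_reduction(s):.1%} fewer unequal actions")
# 10x faster -> at least 90.0% fewer unequal actions
# 1000x faster -> at least 99.9% fewer unequal actions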

Programmed Perspective; Empathy > Emotion for Digital Assistants

Personal assistants are anything but personal.  When I ask Alexa what the weather is, I receive an answer about the weather in my location.  When someone on the other side of the world asks Alexa that same question, they too will find out what the weather is like in their location. Neither of us will find Alexa answering with a different personality or the interaction further cementing our friendship. It is an impersonal experience.

When we talk about personal assistants, we want them to know when we need pure expediency from a conversation, when we want expansion on details, and the different way each one of us likes to be spoken to.

I would like to consider two possible solutions to this problem – emotion and empathy – and to explain why empathy is the path we should be taking.



An emotional assistant would be personal. It would require either a genuine internal experience of emotion (which is simply not possible today) or an accurate emulation of emotion. We build up relationships with people over time, starting with niceties and formality and gradually developing a rapport unique to the two parties that guides all their interactions; an emotional assistant could do the same. Sounds great, but it's not all plain sailing. I'm sure everyone has experienced a time when we've inadvertently offended a friend in a way that made it harder to communicate for some time afterwards, or even damaged a relationship beyond repair.

We really don’t want this with a personal assistant.  If you were a bit short with Alexa yesterday because you were tired, you still want it to set off your alarm the next morning.  You don’t want Alexa to tell you that it can’t be in the same room as you and to refuse to answer your questions until it gets a heartfelt apology.


Empathy does not need to be emotional. Empathy requires that we put ourselves in the place of others to imagine how they feel, and to act appropriately. This, ideally, is what doctors do. A doctor must empathise with a patient, putting themselves in the patient's shoes to understand how they will react to difficult news and how best to describe the treatment so that they feel as comfortable as possible. Importantly, though, the doctor should remain emotionally removed from the situation; if they personally felt the emotion of every appointment, it could become unbearable. Empathy adds a layer of abstraction, allowing them to shed as much of the emotion as possible when they return home.

This idea is echoed in Jean-Paul Sartre's 'Being and Nothingness'. Sartre describes two types of being:

  • Beings in themselves – unconscious objects, like a mug or a pen.
  • Beings for themselves – people and other conscious things.

In our everyday lives we are a hybrid of the two. Though we are people, and so naturally beings for ourselves, we adopt roles like doctor, manager, parent and more. These roles are objects, like a pen or a mug, in that they have an unspoken definition and structure. We use these roles to guide how we interact in different situations in life. In a new role we ask ourselves 'what should a manager do in this situation?' or 'what would a good doctor say?'. It may become less obvious as we grow into a role, but it's still there.

When we go into a store, we have an accepted code of conduct and a type of conversation we expect to have with the retailer. We naturally expect them to be polite, to ask us how we are, to share knowledge of different products and services, and to be aiming to sell us something. We believe that we can approach them, a stranger, and ask question after question with barely any preamble.

Sartre states, to paraphrase, that 'a grocer who dreams is less a grocer'. Though the dreaming grocer may be more honest to themselves as a person, they're reducing their utility as a grocer. It's easy to imagine stopping to buy some vegetables and getting stuck in an irrelevant conversation for half an hour. It might be a nice break from the norm, and a funny story to tell when you get home, but in general we want our grocers to be… grocers.

If we apply this to personal assistants, it all comes together. We want the kind of personal service we would get from someone who is really great at customer service, and we want information communicated to us in the way which works best for us. By building an empathetic assistant instead of what we have today, we gain both personalisation and utility.

If we go fully emotional we gain more personalisation, but the trade-off is utility. What we don't want is an emotional assistant which becomes depressed and gets angry at us, or, at the other extreme, becomes so giddy with emotion that it struggles to string together a coherent sentence because of the digital butterflies in its stomach. That's both deeply unsettling and unproductive.
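
To make that trade-off concrete, here's a toy Python sketch (hypothetical classes, not any real assistant's API). The empathetic assistant models the user's preferences but keeps no mood of its own, so its utility never degrades; the emotional assistant carries an internal mood that can get in the way of the task.

from dataclasses import dataclass

@dataclass
class UserProfile:
    prefers_brevity: bool = False      # learned preference: pure expediency vs expanded detail
    was_short_yesterday: bool = False  # the user snapped at the assistant last night

class EmpatheticAssistant:
    # Models the user's state, holds none of its own
    def weather(self, user: UserProfile) -> str:
        if user.prefers_brevity:
            return "19C, dry all day."
        return "It should reach about 19C today with no rain expected, so no coat needed."

@dataclass
class EmotionalAssistant:
    mood: float = 0.0                  # internal emotional state: the source of the problem

    def weather(self, user: UserProfile) -> str:
        if user.was_short_yesterday:
            self.mood -= 1.0           # it takes yesterday's snappiness personally
        if self.mood < 0:
            return "Ask me again once you've apologised."  # personalised, but useless
        return "It should reach about 19C today with no rain expected."

user = UserProfile(prefers_brevity=True, was_short_yesterday=True)
print(EmpatheticAssistant().weather(user))  # still does its job
print(EmotionalAssistant().weather(user))   # utility lost to its own feelings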

So, let’s build empathetic assistants.

The Apple of my AI – GDPR for Good


Our common perception of machine learning and AI is that it needs an immense amount of data to work. That data is collected and annotated by humans or IoT-type sensors to ensure the AI has access to all the information it needs to make the correct decisions. With new regulations like GDPR protecting stored personal data, will AI be put at a disadvantage by the restrictions on IoT and data collection? Maybe not!

What is GDPR and why does it matter?

For those outside the European Union, GDPR (the General Data Protection Regulation) is designed to "protect and empower all EU citizens' data privacy". Intending to return control of personal data to individual citizens, it grants powers such as the right to request all data a business holds on you, a right to explanation for automated decisions, and even a right to be forgotten. Great for starting a new life in Mexico, but will it limit how much an AI can learn?

What’s the solution?

A new type of black-box learning means we may not need human data at all. Falling into the category of 'deep reinforcement learning', we are now able to create systems which achieve superhuman performance across a fairly broad spread of domains, generating all of their training data themselves from simulated worlds. The poster-boy of this type of machine learning is AlphaZero and its relatives from Google's DeepMind. In 2015 we saw the release of AlphaGo, which demonstrated that a machine could beat a professional human player with a 5–0 victory over the (now former) European Go champion Fan Hui. AlphaGo reached this level by using human-generated data: recordings of professional and amateur games of Go. The evolution of this, however, was to remove the human data. AlphaGo Zero beat its predecessor AlphaGo Lee 100:0, using a twelfth of the processing power and a fraction of the training time, without any human training data; instead, it generated its own data by playing games against itself. While GDPR could force a drought of machine learning data in the EU, simulated data from this kind of deep reinforcement learning could re-open the flood gates.
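
To show what 'generating its own data by playing against itself' means in practice, here is a heavily simplified Python sketch of the self-play idea: noughts and crosses with a tabular value function standing in for AlphaGo Zero's deep network and tree search. Every training example comes from the agent playing itself; not a single human game is used.

import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

value = defaultdict(float)   # board position -> estimated value from X's point of view
ALPHA, EPSILON = 0.2, 0.1    # learning rate and exploration rate (assumed values)

def self_play_game():
    board, player, visited = list("........."), "X", []
    while winner(board) is None and "." in board:
        moves = [i for i, c in enumerate(board) if c == "."]
        def after(m):
            nxt = board[:]
            nxt[m] = player
            return "".join(nxt)
        if random.random() < EPSILON:
            move = random.choice(moves)           # explore
        else:
            best = max if player == "X" else min  # X maximises the learned value, O minimises it
            move = best(moves, key=lambda m: value[after(m)])
        board[move] = player
        visited.append("".join(board))
        player = "O" if player == "X" else "X"
    w = winner(board)
    outcome = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
    for state in visited:                         # nudge every visited position towards the result
        value[state] += ALPHA * (outcome - value[state])

for _ in range(20000):                            # all of the training data comes from self-play
    self_play_game()
print(len(value), "positions evaluated without a single human game record")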

Playing Go is a pretty narrow domain (though AlphaZero can play other board games!) and one defined by very clear rules. We want machine learning which can cover a broad spread of tasks, often in far more dynamic environments. Enter Google… again… or rather Alphabet, the parent company of Google, and its self-driving car spinoff Waymo. Level 4 and 5 autonomous driving presents a much more challenging goal for AI. In real time the AI needs to categorise huge numbers of objects, predict their future paths and translate all of that into the right control inputs, all to get the car and its passengers where they need to be on time and in one piece. This level of autonomy is being pursued by both Waymo and Tesla, but seemingly Tesla gets the majority of the press. That has a lot to do with Tesla's physical presence.

Tesla has around 150,000 suitably equipped cars on the road and boasted over 100 million miles driven on Autopilot by 2016. This doesn't even include data gathered while the feature is not active, or more recent figures (which I am struggling to find – if you know, please comment below!). Meanwhile Waymo has covered a comparatively tiny 3.5 million real-world miles, perhaps explaining its smaller public exposure. Google thinks it has the answer to this, again using deep reinforcement learning: its vehicles have driven billions of miles in their own simulated worlds, without using any human-generated data. Only time will tell whether we can build a self-driving car which is safe and confident on our roads alongside human drivers without human data and guidance in the training process, but the early signs for deep reinforcement learning look promising. If we can do this for driving, what's to say it can't work in many other areas?

Beyond being a tick in the GDPR box, there are other benefits to this type of learning. DeepMind describes human data as 'too expensive, unreliable or simply unavailable', and the second of these points (with a little artistic license) is critical: human data will always carry some level of bias, making it unreliable. At a very obvious level, Oakland Police Department's 'PredPol', a system designed to predict areas of crime and dispatch police accordingly, was trained on historical, biased crime data. The result was a system which dispatched police to those same historical hotspots. It's entirely possible that just as much crime was going on in other areas, but by focusing its attention on the same old places and turning a blind eye to the rest, the machine struggled to break human bias. Even when we think we're not acting on an unhealthy bias, our lives are full of unconscious bias and assumptions. I make an assumption each time I sit down on this chair that it will support my weight. I no doubt have a bias towards people similar to me, believing that we could work towards a common goal. Think you hold no bias? Try this implicit association test from Harvard. AlphaGo learned according to human bias, whereas AlphaGo Zero carried none of it and performed better. Looking at the moves the machine made we tend to see creativity, a seemingly human attribute, when in reality its thought processes may have been entirely unlike human experience. By removing human data, and therefore our bias, machine learning could find solutions in almost any domain which we might never have thought of, but which in hindsight appear a stroke of creative brilliance.
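
The PredPol-style feedback loop is easy to reproduce in a toy Python simulation (entirely made-up numbers): two districts with identical true crime rates, one of which starts with more recorded crime. If patrols follow recorded crime, and recording follows patrols, the initial imbalance never corrects itself.

true_crime = {"A": 100, "B": 100}  # actual offences per period, equal by design
recorded = {"A": 60, "B": 30}      # historical records skewed towards district A

for period in range(10):
    total = sum(recorded.values())
    patrol_share = {d: recorded[d] / total for d in recorded}
    # you can only record the crime you are present to see
    recorded = {d: true_crime[d] * patrol_share[d] for d in recorded}

print(patrol_share)  # stays at roughly {'A': 0.67, 'B': 0.33}: the bias sustains itself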

Personally, I still don't think this type of deep reinforcement learning is perfect, or at least the environments it is implemented in are not. Though the learning itself may be free from bias, the rules and the playing board, be that a physical game board or a road layout, factory, energy grid or anything else we ask the AI to work on, are still designed by humans, meaning some human bias will be included. With Waymo, the highway code and road layouts are still built by humans. We could add another layer of abstraction, allowing the AI to develop new road rules or games for us, but then perhaps they would lose their relevance to us lowly humans who intend to make some use of the AI.

For AI, perhaps we're beginning to see GDPR as the Apple of the market: throwing out the old CD drive, USB-A ports or even (and it still stings a little) the headphone jack, initially to consumer uproar. GDPR pushing us towards black-box learning might feel like losing the headphone jack a few generations before the market is ready, but perhaps it's just this kind of thing that creates a market leader.

AI, VR and the societal impact of technology: our takeaways from Web Summit 2017

Together with my Digital Innovation colleague Morgan Korchia, I was lucky enough to go to Web Summit 2017 in Lisbon – getting together with 60,000 other nerds, inventors, investors, writers and more. Now that a few weeks have passed, we’ve had time to collect our thoughts and reflect on what turned out to be a truly brilliant week.

We had three goals in mind when we set out:

  1. Investigate the most influential and disruptive technologies of today, so that we can identify those which we should begin using in our business
  2. Sense where our market is going so that we can place the right bets now to benefit our business within a 5-year timeframe
  3. Meet the start-ups and innovators who are driving this change and identify scope for collaboration with them

Web Summit proved useful for this on all fronts – but it wasn’t without surprises.  It’s almost impossible to go to an event like this without some preconceptions about the types of technologies we are going to be hearing about. On the surface, it seemed like there was a fairly even spread between robotics, data, social media, automation, health, finance, society and gaming (calculated from the accurate science of ‘what topic each stage focused on’). However, after attending the speeches themselves, we detected some overarching themes which seemed to permeate through all topics. Here are my findings:

  • As many as 1/3rd of all presentations strongly focus on AI – be that in the gaming, finance, automotive or health stage
  • Around 20% of presentations primarily concern themselves with society, or the societal impact of technology
  • Augmented and virtual reality feature in just over 10% of presentations, which is significantly less than we have seen in previous years

This reflects my own experience at Web Summit, although I perhaps directed myself more towards the AI topic, spending much of my time between the 'autotech / talkrobot' stage and the main stage. From Brian Krzanich, the CEO of Intel, to Bryan Johnson, CEO of Kernel and previously Braintree, we can see that AI is so prevalent today that a return to an AI winter is unimaginable. It's not just hype; it's now woven too closely into the fabric of our businesses for that. What's more, too many people are implementing AI and machine learning in a scalable and profitable way for it to be dispensable. It's even approaching the point of ubiquity where AI just becomes software: it works, and we don't even consider the incredible intelligence sitting behind it.

An important sub-topic within AI is also picking up steam: AI ethics. A surprise keynote from Stephen Hawking reminded us that while successful AI could be the most valuable achievement in our species' history, it could also be our end if we get it wrong. Elsewhere, Max Tegmark, author of Life 3.0 (recommended by Elon Musk… and me!), provided an interesting exploration of the risks and ethical dilemmas that face us as we develop increasingly intelligent machines.

Society was also a theme visited by many stages. This started with an eye-opening performance from Margrethe Vestager, who spoke about how competition law clears the path for innovation. She used Google as an example: while highly innovative itself, it has abused its position of power, pushing competitors down its search rankings and hampering other innovations' chances of success. The Web Summit closed with an impassioned speech from Al Gore, who gave us all a call to action to use whatever ability, creativity and funding we have to save our environment and protect society as a whole for everyone's benefit.

As for AR and VR, we saw far less exposure this year than at previous events (although it was still the third most presented-on theme). I don't necessarily think this means it's going away for good, although it may mean that in the immediate term it will have a smaller impact on our world than we thought it might. As a result, rather than shouting about it today, we are looking for cases where it provides genuine value beyond a proof of concept.

I also take some interest in the topics which were missing, or at least presented less frequently. Amongst these I'd put voice interfaces, cyber security and smart cities. I don't think this is because any of these topics have become less relevant. Cyber security is more important now than ever, and voice interfaces are gaining huge traction in consumer and professional markets. However, an event like Web Summit doesn't need to add much to that conversation. We now regard cyber security as intrinsic to everything we do, and aside from a few presentations, including one from Amazon's own Werner Vogels, we know that voice is here and that we need to be finding viable implementations for it. Rather than simply affirming our beliefs, I think a decision was made to put the focus elsewhere, on the things we need to know more about to broaden our horizons over the week.

We also took the time to speak to the start-ups dotted around the event space. Some, like Nam.r, caught our interest by using AI in a way which drives GDPR compliance, rather than causing the headache many of us assume it will. Others are making use of technological developments which were formative and un-scalable only a year ago. We also took note of the start-ups spun out of bigger businesses, like Waymo, part of Google's Alphabet, which act as a bellwether for where many of the big players are placing their bets.

The priority for us now is to build some of these findings into our own strategy – a much taller order than spending a week in Lisbon absorbing it all. If you're wondering what events to attend next year, Web Summit should be high up on your list, and I hope to see you there!

What are your thoughts on these topics? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Journey interrupted

Contrary to what you might expect, this blog isn’t a reflection of my experience with Southern Rail. Instead it’s a look to the future, inspired by the Sopra Steria Horizon Scanning Team’s trip to Wired 2016.

In our horizon scanning programme, Aurora, we try to look beyond the technologies that are shaping our future and include the behavioural and social changes that are also making an impact, and this is where Wired’s annual event fits in so perfectly with our interests. Though Wired 2016 takes no shame in celebrating the advancements we’ve seen in technology and imagining what may come next, it also takes account of wider sociological and environmental changes, such as mass migration, climate change and global conflict.

The running theme throughout was 'journey interrupted', which seemed to reflect both the individual journeys of many speakers, who had set out with what seemed like a clear direction but ended up somewhere entirely different from what they had planned, and the inevitable interruption of our unsustainable way of living, which needs to change more urgently than ever.

In the technology content, an overwhelmingly strong theme was data. Now, data is nothing new at these kinds of events, and has not been for years. Data in this instance was framed most clearly in machine learning and AI, which again isn't anything new to us, but what we're beginning to see is how accessible it is becoming. Historically the privilege of huge, well-funded projects, machine learning is now in the hands of start-ups and individual people, who are able to apply the same technology to problems which receive little or no funding but are important nonetheless. Applications ranged from health – limiting the spread of Ebola and the Zika virus, and improving cancer detection and treatment – to migrant demographics, through to the future of AI and the singularity.

The most poignant moments of Wired 2016, however, did not focus wholly on technology. They were about the big shifts happening to people and our environment. Predictions on climate change are looking more devastating than ever, with even fairly conservative scientists predicting that we may go well beyond the 2°C limit for warming above pre-industrial levels, perhaps as far as 7°C, which could wreak untold havoc on our world. The speakers at Wired 2016 were looking both at how we can change the way we live in the developed world to reduce our environmental impact, and at how the developing world can curtail its impact, ideally skipping straight to renewable power in much the same way as it largely skipped the PC era of the internet, experiencing it for the first time on a mobile device. The refugee crisis was also a recurring theme: people whose life journeys have been completely torn apart, and communities around the world finding ways to get those journeys back on track by enabling work and encouraging entrepreneurship.

The story from Wired 2016 is that if we stay on this express train, we're heading to a bad place. We need instead to rethink the route we're taking, or better still find an entirely new mode of transport. The technology is there, but the community and widespread adoption are not, and if we want to succeed this is going to need to be a journey we take together.

What do you think? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

The Brave Little Toaster

We are currently sitting on the cusp of the fourth industrial revolution, which is set to rethink the way we live and work on a global scale. As with the first industrial revolution, we know roughly that the change is being driven by technology, but we lack any concrete knowledge of how great it will be or just how dramatically it will disrupt the world we live in.

The technologies driving the upcoming revolution are artificial intelligence and robotics: technologies which have been the territory of sci-fi for generations, machines which think and act as humans would. Just as steam power, electricity and ultimately computers replaced human labour for mechanical and often mathematical tasks, AI looks set to supplant human thinking and creativity in a way which many find unsettling. If the first industrial revolution was too much for the 'Luddites' doing their best to stamp out mechanical progress, the reaction to AI and robotics could be stronger still. There are several clear reasons I can see that may drive people away from AI:

  • Fear of redundancy: the first reason replicates that of the first industrial revolution. People don't want technology to do what they do, because if a machine can do it faster, better and stronger than they can, then what will they do?
  • Fear of the singularity: this one is like our fear of nuclear bombs and fusion. There's an intrinsic fear people hold, entrenched in stories like Pandora's Box, where we believe certain things should not be investigated. The singularity is the point at which a computer achieves sentience, and though we're some way off that (and have no clear idea of how we'd get there), the perceived intelligence of a machine can still be very unnerving.
  • The uncanny valley: the valley is the point where machines start to become more human-like, appearing very close to, but not exactly like, a human in the way they look or interact. If you're still wondering what it is, I'd recommend watching these Singing Androids.

Just as we've seen throughout history, there is resistance to this revolution. But if history is anything to go by, while it's likely to be a bumpy road, the rewards will be huge. Although it's the back-office nuts and bolts driving change behind the scenes, it's the front end, where we interact with it, that's being rethought to maximise potential and minimise resistance. What we're seeing are interfaces designed to appear dumb, or to mask their computational brains to make us feel more comfortable, and that's where the title of this blog comes in.

'The Brave Little Toaster' is a book from 1980 or, if you're lazy like me, a film from about eight years later, 'set in a world where household appliances and other electronics come to life, pretending to be lifeless in the presence of humans'. Whilst the film focused on the appliances' adventure to find their way back to their owner, what I'd like to focus on is how they hide their intelligence whenever humans are in sight, because this is exactly the pattern we're beginning to see industry follow.

Journalism is a career typically viewed as creative, the product of human thought, but did you know that a fairly significant chunk of the news you read isn't written by a person at all? For years now, weather reports from the BBC have been written by machines, using natural language generation algorithms to take data and turn it into words, which can even be tailored to suit different audiences with simple configuration changes. Earlier this month The Washington Post also announced that its writing on the Rio Olympics would be carried out by robots. From a consumer standpoint it's unlikely that we'll notice the stories have been written by machines, and if we don't even notice, it shouldn't feel creepy at all. Internally, rather than being seen as a way to replace reporters, it's being treated as an opportunity to 'free them up', just like the industrial revolution before it, which freed people from repetitive manual tasks for more thought-based ones.
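
For a sense of how the simplest form of this works, here's a heavily simplified, template-based Python sketch (not the BBC's or the Post's actual system): structured data in, prose out, with a configuration switch that tailors the tone for different audiences.

forecast = {"city": "Lisbon", "high_c": 19, "low_c": 11, "rain_chance": 0.2}

STYLES = {
    "formal": "In {city}, expect a high of {high_c}C, a low of {low_c}C "
              "and a {rain_pct}% chance of rain.",
    "chatty": "{city} looks decent today - up to {high_c}C, dipping to {low_c}C, "
              "with only a {rain_pct}% chance you'll need a brolly.",
}

def generate(data, audience="formal"):
    # Turn structured data into words; the audience switch is the 'simple configuration change'
    return STYLES[audience].format(rain_pct=int(data["rain_chance"] * 100), **data)

print(generate(forecast))
print(generate(forecast, audience="chatty"))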

Platforms like IBM's Watson begin to add a two-way flow to this, with both natural language generation and recognition, so that a person can ask a question just as they would of another person, with the machine understanding their phrasing and replying in turn without ever hinting that it's an AI. When things become too complicated, the AI hands over to a person, and from there on the conversation is controlled by them, with no obvious transition.

A gradual approach to intelligence and automated systems is also being adopted by some businesses. Tesla's Autopilot can be seen as an example of this, continuing a story which began with ABS (anti-lock braking) over a decade ago and has evolved in recent years into a car which, in some instances, can drive itself. In its current state, Autopilot is a combination of existing technologies like adaptive cruise control, automatic steering on a motorway and collision avoidance, but combining these with the huge amount of data the cars generate has allowed the system to learn routes and handling, carefully navigating tight turns and traffic (albeit with an alert driver ready to take over at all times!). Having seen this progression, it's easy to imagine a time not too far from the present day when human drivers are no longer needed, with a system that learns, generates data and continually improves itself just as a human would while learning to drive, only without the road rage, fatigue or human error.

The future as I see it is massively augmented and improved by artificial intelligence and advanced automation.  Only, it’ll be designed so that we don’t see it, where the boundary between human and machine input is perceivable only if you know exactly where to look.

What do you think? Leave a reply below, or contact me by email.

Augmentation, AI and automation are just some of the topics researched by Aurora, Sopra Steria’s horizon scanning team.

Is Blockchain in the MASH for Local Government?

In their latest insight briefing, SOCITM pose the question, Blockchain technology: could it transform digital-enabled councils?

They urge councils and wider public sector authorities to follow developments around blockchain Distributed Ledger technologies with a view to experimenting with their potential use in the development of future service transformation plans.

It is safe to say that blockchain is currently one of the hot technology topics, trying to establish itself as a new way of handling trusted transactions. The rise of, and publicity surrounding, Bitcoin has driven the current hype, and whilst the underlying blockchain technology is very well suited to financial systems, it is still unclear what viable (and practical) uses there will be across other sectors.

The UK Government has issued a number of articles and papers on this topic and is actively investigating the potential of the technology to support a number of public-facing services. But the challenge remains: what is the use case that can exploit the capabilities of blockchain?

As an organisation, Sopra Steria sees the potential of this technology to provide immutable, chain-of-evidence based systems, and we are actively working on potential use cases across a number of sectors.
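
As a rough illustration of what an 'immutable chain of evidence' gives you, here's a minimal Python sketch (a toy under stated assumptions, not our implementation or any particular ledger product): each record carries a hash of the one before it, so any later tampering breaks every subsequent link. Real distributed ledgers add consensus and replication between parties on top of this.

import hashlib, json, time

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        prev_ok = rec["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if rec["hash"] != expected or not prev_ok:
            return False
    return True

chain = []
add_record(chain, {"agency": "council", "note": "referral received"})
add_record(chain, {"agency": "police", "note": "record matched to master identity"})
print(verify(chain))                    # True
chain[0]["payload"]["note"] = "edited"  # any tampering with an earlier record...
print(verify(chain))                    # ...is immediately detectable: False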

The opportunities for Local Government need further investigation to consider how blockchain could be used to improve services, reduce costs, or help tackle fraud. As the SOCITM article suggests, these opportunities have yet to be clearly defined and articulated. Whilst G-Cloud 8 now shows services related to blockchain, there are only two of any real substance – one from a leading provider of blockchain Distributed Ledger Technologies, and the second a consultative service on what, and how, to use blockchain.  The others simply make reference to blockchain – so there is still a substantial way to go before there are pre-defined services available for Local Government.

Should Local Government be investigating the opportunities for blockchain/Distributed Ledger technology?  Absolutely!

There are a number of potential areas where chain-of-evidence capabilities could be used, but the challenge for Local Government is to define the business and application processes needed to use blockchain. One of the areas in which we see major opportunities is coordinating MASH (Multi-Agency Safeguarding Hubs) by providing a means of identifying master records across different agencies. Establishing a clear data-level trust relationship is going to be critical to delivering successful MASH services.

Sopra Steria supports SOCITM’s call to identify the appropriate uses and applications of blockchain which will stand the test of time. As an integral part of their design process, councils should now be considering the advantages of using both blockchain, and other emerging technologies, when shaping future transformation programmes.

Take a look at our paper, “Blockchain: harnessing the power of distributed ledgers”, earlier posts on this topic on our blog or leave your thoughts on this subject below.