Keeping clients one step ahead – the DigiLab story (Part #2)

Immersion. Inspiration. Ideation. Implementation.

Digital transformation projects are often mission-critical, and therefore usually urgent. There’s a need to quickly unearth and interrogate challenges, sift the solution options and get things into test and development. To kick-start this process powerfully, we begin by immersing our customers in a rich universe of use cases, the latest technologies and sector insight, helping them bounce off best-practice learning and quickly leap-frog ahead.

Why cross-fertilize between sectors?

When the genetic material of two parents is recombined in nature, it delivers greater variability on which natural selection can act. This increases a species’ ability to adapt to its changing environment and boosts its chances of survival. The same is true of transformation projects: the greater the importing, mixing and cross-fertilization of ideas from other sectors, processes and initiatives, the stronger and more adaptable products and services become.

Goodbye tail-chasing and closed loops

Silicon Valley is a great example of how cross-fertilization leads to innovation. One of the most innovative ecosystems in the world, it nurtures a culture that is open to new people and thinking, promoting the healthy circulation of fresh ideas and the profitable exploration of approaches from outside one’s own industry and business practices. In the same spirit, cross-fertilization lets us proactively free ourselves from cognitive immobility, so we stop going around in circles, locked in our own stale and habitual thinking.

By transposing proven use cases and adapting systems already developed by another industry, we can:

  • Invent a platform for true market disruption, breaking away from locked-in patterns
  • Implement faster because preparatory work is already in place
  • Greatly reduce time to market and increase the possibility of competitive advantage
  • Co-create innovation with other client organizations to reduce costs
  • Connect with like-minded leaders around results and user experience to aid buy-in

Keeping clients one step ahead – the DigiLab story (Part #1)

Stasis is the enemy of success

Sopra Steria is lucky to work with some of the world’s most exciting companies. It’s our job to help them transform digitally, across sectors as diverse as education, hospitality and aerospace. That means our organization is packed with know-how, experience and progressive thinking. Clients trust us to roll out integrated IT platforms and modernize their application stacks, but they are not always aware of how innovative, disruptive and forward-thinking our organization can be. This is why DigiLab exists.

In 2014, Eric Maman — one of our senior innovation consultants — decided to create a dedicated hub ruthlessly focused on innovation: cross-fertilized, federated, multi-disciplinary. A way for clients to immerse themselves in the wealth of Sopra Steria insight across our areas of expertise, sectors and technologies and turbo-charge their own digital transformation projects to rapidly eliminate waste and create new value.

He created the first DigiLab, based at our Paris HQ — today we have 24 innovation hubs around the world working as one seamless network, from France, Spain and the UK to Germany, Norway, India and Singapore. This series of blogs tells their story and explains how public and private sector organizations are working with DigiLabs right now to foster creativity, strengthen idea generation and transform perennial operational problems into feasible and profitable new ways of working. Because in today’s fast-moving world, standing still is a dangerous strategy.

Shaping smarter thinking, together

Delivering tech for tech’s sake is not the DigiLab way. Instead we shape innovation around our customers’ most urgent use-cases, asking ourselves: can we harness the best of what’s out there to craft robust new approaches and think in exciting new ways about their challenges, audiences and stakeholders?

Through the DigiLab experience, customers work with our expert teams to:

  • Brainstorm creatively around technology, people and process
  • Identify pains and weaknesses through field observation and interviews
  • Anticipate new uses of performance-enhancing technologies
  • Create robust use-cases for innovation, supported by best-practice learning
  • Cross-fertilize insight from sectors to adapt and optimize solution design
  • Roll out innovation enterprise-wide and keep it current as the world changes

Vern Davis wins Business Leader of the Year

The grand ballroom at the Hilton Park Lane was the venue for last night’s Ex-Forces in Business Awards, and we are delighted to announce that Vern Davis, Managing Director of the UK Aerospace, Defence and Security (ADS) sector, was named Business Leader of the Year. The awards are the largest celebration and recognition of ex-military personnel in the UK workforce, and of the employers that support current and former members of the British Armed Forces. In line with our pledge to the Armed Forces Covenant, Sopra Steria actively seeks to provide career opportunities for this community and has a number of veterans working in all areas of our business. Vern was recognised for his transformation of the ADS business since joining in 2017.

Vern commented: ‘I am delighted to have been recognised in these awards. Sopra Steria has a fantastic culture that really values different backgrounds and experiences, including those of the armed forces community. It is an honour to have been named Business Leader of the Year and I thank my team for all their hard work and support throughout our transformation of the ADS business.’

‘Ethics Guidelines for Trustworthy AI’ Summarised

On the 8th of April 2019, the EU’s High-Level Expert Group (HLEG) on AI released its Ethics Guidelines for Trustworthy AI, building on over 500 recommendations received on the ‘Draft Ethics Guidelines’ released in December 2018.

In this blog, I want to help you understand what this document is, why it matters to us and how we may make use of it.

What is it?

The ‘Ethics Guidelines for Trustworthy AI’ is an advisory document, describing the components of ‘Trustworthy AI’: a brand for AI which is lawful, ethical and robust. As the title suggests, the document focuses on the ethical aspect of Trustworthy AI. It makes some reference to the requirements for robust AI and, to a lesser extent, the law that surrounds AI, but clearly states that it is not a policy document and does not attempt to offer advice on legal compliance for AI. The HLEG is separately tasked with creating a second document advising the European Commission on AI policy, due later in 2019.

The document is split into three chapters:

  1. Ethical principles, the related values and their application to AI
  2. Seven requirements that Trustworthy AI should meet
  3. A non-exhaustive assessment list to operationalise Trustworthy AI

This structure begins with the most abstract information and ends with the most concrete. There is also an opportunity to pilot and give feedback on the assessment list, to help shape a future version of the document due in 2020. Register your interest here.

Why does this matter?

I am writing this article as a UK national, working for a business in London. Considering Brexit and the UK’s (potential) withdrawal from the European Union, it’s fair to ask whether this document is still relevant to us. TL;DR: yes. But why?

Trustworthy AI must display three characteristics, being lawful, ethical and robust.

Ethical AI extends beyond the law, and as such is no more legally enforceable for EU member states than for independent ones. The ethical component of Trustworthy AI means that the system is aligned with our values, and the UK’s values are in turn closely aligned with the rest of Europe’s as a result of our physical proximity and decades of cultural sharing. The same may be true, to an extent, of the USA, which shares much of its film, music and literature with Europe. The ethical values listed in this document still resonate with the British public, and the document stands as the best and most useful guide to operationalising those values.

Lawful AI isn’t the focus of this document but is an essential component of Trustworthy AI. The document refers to several EU instruments such as the EU Charter and the European Convention on Human Rights, but it doesn’t explicitly say that Lawful AI needs to be compliant with EU law; Trustworthy AI could instead plug locally relevant laws into the framework. Arguably, compliance with EU law is the most sensible route to take, given that 45% of the UK’s trade in Q4 2018 was with the EU[1]. If people and businesses in EU member states only want to buy Trustworthy AI that is compliant with EU law, that compliance becomes an economic force rather than a legal requirement. We can see the same pattern in the USA, with businesses building services compliant with GDPR, a law they do not have to follow, to capture a market that matters to them.

The final component, Robust AI, describes systems which continue to operate in the desired way across the broad spectrum of situations they could face throughout their operational life, including adversarial attacks. If we agree in principle with the lawful and ethical components of Trustworthy AI, and accept that unpredictable situations or adversarial attacks may challenge either, then the third component, Robust AI, becomes logically necessary.

What is Trustworthy AI?

Trustworthy AI is built from three components: it’s lawful, ethical and robust.

[Diagram: the three components of Trustworthy AI – lawful, ethical and robust]

Lawful AI may not be ethical where our values extend beyond policy.  Ethical AI may not be robust where, even with the best intentions, undesirable actions result unexpectedly or as the result of an adversarial attack. Robust AI may be neither ethical nor legal, for instance, if it were designed to discriminate, robustness would only ensure that it discriminates reliably, and resists attempts to take it down.

This document focuses on the ethical aspect of Trustworthy AI, and so shall I in this summary.

What is Ethical AI?

The document outlines four ethical principles in Chapter I (pp. 12-13), which are:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

These four principles are expanded in Chapter II, Realising Trustworthy AI, which translates them into seven requirements that also make some reference to the robustness and lawful aspects. They are:

  1. Human agency and oversight

AI systems have the potential to support or erode fundamental rights. Where there is a risk of erosion, a ‘fundamental rights impact assessment’ should be carried out before development, identifying whether risks can be mitigated and determining whether the risk is justifiable given any benefits. Human agency must be preserved, allowing people to make ‘informed autonomous decisions regarding AI systems [free from] various forms of unfair manipulation, deception, herding and conditioning’ (p.16). For greater safety and protection of autonomy, human oversight is required, and may be present at every step of the process (human-in-the-loop, HITL), at the design cycle (human-on-the-loop, HOTL) or in a holistic overall position (human-in-command, HIC), allowing the human to override the system, establish levels of discretion and enforce public oversight (p.16).

  2. Technical robustness and safety

Fulfilling the requirements for robust AI, a system must have resilience to attack and security, taking account of additional requirements unique to AI systems that extend beyond traditional software: hardware and software vulnerabilities, dual-use, misuse and abuse of systems. It must achieve a level of accuracy appropriate to its implementation and criticality, assessing the risks from incorrect judgements, the system’s ability to make correct judgements and its ability to indicate how likely errors are. Reliability and reproducibility are required to ensure the system performs as expected across a broad range of situations and inputs, with repeatable behaviour enabling greater scientific and policy oversight and interrogation.

  3. Privacy and data governance

This links to the ‘prevention of harm’ ethical principle and the fundamental right to privacy. Privacy and data protection require that both are protected throughout the whole system lifecycle, covering data provided by the user and additional data generated through their continued interactions with the system; none of this data may be used unlawfully or to unfairly discriminate. Both in-house developed and procured AI systems must consider the quality and integrity of data prior to training, as ‘it may contain socially constructed biases, inaccuracies, errors and mistakes’ (p.17) or malicious data that may influence behaviour. Processes must be implemented to provide individuals with access to data concerning them, administered only by people with the correct qualifications and competence.

  4. Transparency

The system must be documented to enable traceability, for instance identifying the reasons for a decision the system made, with a level of explainability that uses the right timing and tone to communicate effectively with the relevant human stakeholder. The system should employ clear communication to inform humans when they are interacting with an AI rather than a human, and allow them to opt for human interaction when required by fundamental rights.

  5. Diversity, non-discrimination and fairness

Avoidance of unfair bias is essential, as AI has the potential to introduce new unfair biases and amplify existing historical ones, leading to prejudice and discrimination. Trustworthy AI instead advocates accessible and universal design, building and implementing systems which are inclusive of all regardless of ‘age, gender, abilities or characteristics’ (p.18), mindful that one size does not fit all and that particular attention may need to be given to vulnerable persons. This is best achieved through regular stakeholder participation, including all those who may directly or indirectly interact with the system.

  6. Societal and environmental wellbeing

Considered in wider society, sustainable and environmentally friendly AI may offer a solution to urgent global concerns such as reaching the UN’s Sustainable Development Goals. It may also have a social impact, and should ‘enhance social skills’ while taking care to ensure it does not cause them to deteriorate (p.19). Its impact on society and democracy should also be considered, where it has the potential to influence ‘institutions, democracy and society at large’ (p.19).

  7. Accountability

‘Algorithms, data and design processes’ (p.19) must be designed for internal and external auditability, without needing to give away intellectual property or the business model, but rather to enhance trustworthiness. Minimisation and reporting of negative impacts should work proportionally to the risks associated with the AI system, documenting and reporting the potential negative impacts of AI systems (p.20) and protecting those who report legitimate concerns. Where the two points above conflict, trade-offs may be made, based on evidence and logical reasoning; where there is no acceptable trade-off, the AI system should not be used. When a negative impact occurs, adequate redress should be provided to the individual.

Assessing Trustworthy AI

Moving to the most concrete guidance, Chapter III offers an assessment list for realising Trustworthy AI. This is a non-exhaustive list of questions, some of which will not be appropriate to the context of certain AI applications, while other questions need to be extended for the same reason. None of the questions in the list should be answered by gut instinct, but rather through substantive evidence-based research and logical reasoning.
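To make ‘operationalise’ a little more concrete, here is a minimal sketch of how a delivery team might track its answers against such a list. It is illustrative only: the questions are my own loose paraphrases of the list’s themes, not the HLEG’s wording, and the structure is simply one way of forcing evidence to sit alongside every answer.

```python
# Illustrative only: paraphrased questions in the spirit of the Chapter III
# assessment list, not the HLEG's official wording or any official tooling.
# The point it demonstrates: every answer needs cited evidence, not gut instinct.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AssessmentItem:
    requirement: str  # one of the seven requirements
    question: str     # paraphrased self-assessment question
    answer: str = ""  # the team's answer
    evidence: List[str] = field(default_factory=list)  # research, tests, documents

    def is_substantiated(self) -> bool:
        # An answer with no supporting evidence counts as unanswered.
        return bool(self.answer) and bool(self.evidence)


checklist = [
    AssessmentItem(
        requirement="Human agency and oversight",
        question="Have you assessed the system's possible effects on fundamental rights?",
    ),
    AssessmentItem(
        requirement="Transparency",
        question="Are users told when they are interacting with an AI rather than a human?",
    ),
]

open_items = [item for item in checklist if not item.is_substantiated()]
print(f"{len(open_items)} item(s) still need an evidence-based answer")
```

Questions that do not fit a particular application would simply be dropped or extended, mirroring the non-exhaustive, context-dependent nature of the official list.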

The guidelines expect there will be moments of tension between ethical principles, where trade-offs need to be made, for instance where predictive policing may, on the one hand, keep people from harm, but on the other infringe on privacy and liberty. The same evidence-based reasoning is required at these points to understand where the benefits outweigh the costs and where it is not appropriate to employ the AI system.

In summary

This is not the end of the HLEG’s project. We can expect policy recommendations later in 2019 from the same group, which will likely give us a strong indication of the future requirements for lawful AI, and we will also see a new iteration of the assessment framework for Trustworthy AI in 2020.

This document represents the most comprehensive and concrete guideline yet towards building Ethical AI, expanding on what this means by complementing it with the overlapping lawful and robustness aspects. Its usefulness extends beyond nations bound by EU law: it summarises ethical values shared by nations outside the European Union, and offers a framework where location-specific laws can be switched in and out where necessary.

[1] Source: ONS – total UK exports were £165,752m, of which £74,568m went to the EU; 44.98% (rounded to 45%) of UK exports were to the EU.

Sopra Steria’s Vern Davis and Mohammed Ahmed finalists in the British Ex-Forces in Business Awards

Sopra Steria is delighted to announce that two colleagues have been named finalists in the British Ex-Forces in Business Awards. The awards celebrate the outstanding business achievements of service leavers, demonstrating transferable skills gained in the military. This year, the awards attracted over 400 nominations across 18 categories.

Vern Davis, Managing Director of the Aerospace, Defence and Security sector, is a finalist in the Business Leader of the Year category. Vern started his career in 1990 as an officer in the British Army. The role took him on operational tours across Northern Ireland, Bosnia and Iraq while he built his skillset in operational communications, SATCOM, systems training, operational planning, real estate management and budget control. Today, Vern helps organisations on their digital transformation journeys, driven by delivering exceptional customer service. His wealth of knowledge, experience and expertise ensures Sopra Steria’s customers receive bespoke services that fit their needs, as well as the best return on investment.

Mohammed Ahmed recently retired from the Royal Air Force as a Wing Commander. During his military career he specialised as an Aero Systems and Communications Electronics engineer and, for operational tours during the second Gulf War, was awarded the MBE by Her Majesty the Queen. In August 2018 he joined Sopra Steria as Head of the Acquisition Support Partner for MOD Corsham, where he has run a profitable multi-million-pound programme and a team of over 60 staff. Within weeks he achieved the highest level of customer satisfaction and a perfect 100% NPS score. Mohammed is a finalist for the Service Leaver of the Year award.

Sopra Steria is committed to supporting the Armed Forces community and demonstrates that through our covenant pledge. We are delighted to also be sponsoring these awards and the category of Innovator of the Year.

What I learned using GPT-2 to write a novel

The story

On February 14th 2019, OpenAI posted their peculiar love-letter to the AI community. They shared a 21-minute-long blog post covering their new language model, named GPT-2, examples of the text it had generated, and a slight warning. The blog ends with a series of possible policy implications and a release strategy.

“…we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research” OpenAI Charter

We have grown accustomed to OpenAI sharing their full code bases alongside announcements, but OpenAI is committed to making AI safe, and on this occasion releasing the full code was deemed unsafe, citing concerns around impersonation, misleading news, fake content, and spam/phishing attacks. As a compromise, OpenAI shared a small model with us. While less impressive than the full GPT-2 model, it did give us something to test.

So that’s exactly what I did! Last week, I set up the small model of GPT-2 on my laptop to run a few experiments.

First, for a bit of fun, I thought I’d test its skill at creative writing. I didn’t hold great expectations with only the small model to hand, but I thought I could learn something about the capabilities of the model, and perhaps start a few interesting conversations about technology while I was at it.

I joined a popular online writing forum with an account named GPT2 and wrote a short disclaimer, which said:

** This is computer generated text created using the OpenAI GPT-2 ‘small model’. The full model is not currently available to the public due to safety concerns (e.g. fake news and impersonation). I am not affiliated with OpenAI. Click this link to find out more >> https://openai.com/blog/better-language-models/ **

The setup seemed perfect. I had ready-made prompts to feed into GPT-2, and the model’s output is exactly the length expected for submissions. I could even get feedback from other users on the quality of the submission. I chose a few specific blurbs and fed them into GPT-2 as prompts, running the model multiple times before it created a plausible output.
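For the curious, here is a minimal sketch of this kind of experiment. It is an illustration rather than my exact setup: it uses the Hugging Face transformers library as a convenient stand-in (OpenAI’s release was TensorFlow code), and the prompt below is a placeholder rather than one of the real blurbs.

```python
# Minimal sketch: sampling several completions from the small GPT-2 model.
# Assumes the Hugging Face `transformers` (and `torch`) packages; this is a
# stand-in for OpenAI's original TensorFlow release, and the prompt is a
# placeholder, not one of the real blurbs I used.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # "gpt2" is the small model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "It was a dark and stormy night when the letter arrived."
inputs = tokenizer(prompt, return_tensors="pt")

# Draw several samples in one call, mirroring the 'run it multiple times and
# keep the most plausible output' step described above.
outputs = model.generate(
    **inputs,
    do_sample=True,          # random sampling rather than greedy decoding
    top_k=40,                # truncated sampling, as used in OpenAI's examples
    max_length=300,          # cap the length of each completion (in tokens)
    num_return_sequences=5,  # five candidate continuations
    pad_token_id=tokenizer.eos_token_id,
)

for i, sample in enumerate(outputs):
    print(f"--- Sample {i + 1} ---")
    print(tokenizer.decode(sample, skip_special_tokens=True))
```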

I pasted the story into the platform with my disclaimer at the top, excited to see what sort of questions I would receive from the community. I hit enter, and within seconds:

‘You have been banned.’

I was confused. I had been abundantly transparent about my use of computer-generated text and had not attempted to submit a large number of posts, just one. This was where I learned my first lesson.

Lesson 1 — Being transparent might not always be enough

I had made a strong conscious effort to be as transparent as possible. I didn’t want to deceive anyone into believing this was anything other than computer generated text. Far from it, I wanted people to know it was created by GPT-2 to engage them in a conversation around AI safety. I naively thought I would avoid a negative response through my honesty, but that was not enough for this community.

I messaged the moderators. This is the reply I received:

[Image: GPT-2 response]

This is how the conversation began, but know that it ended happily!

Lesson 2 — It’s not GPT-2 that’s the risk, it’s how we use it

Shortly after the limited release of GPT-2, I saw two primary reactions. Parts of the mainstream media dusted off their favourite Terminator photos, while some people in the AI community took the view that it was a marketing ploy: any technology too dangerous to release must be very impressive indeed.

I only had access to the severely limited ‘small model’ of GPT-2. You need only use it for a few minutes to know just how far it is from being a Terminator-style risk, yet it still highlighted the need for a thought-through release strategy. Poor implementations of technology can have a negative impact on public sentiment, and in this instance it was my choice of forum and application of the technology that raised the alarm.

Lesson 3 — Authenticity matters

It’s possible that GPT-2 could write a charming story, but it won’t hold the same place in our hearts if it’s not both charming and authentic. Max Tegmark makes this point in Life 3.0, suggesting that AI could create new drugs or virtual experiences for us in a world where there are no jobs left for humans. These drugs could allow us to feel the same kind of achievement that we would get from winning a Nobel prize. But it’d be artificial. Tegmark argues that no matter how real it feels, or how addictive the adrenaline rush is, knowing that you’ve not actually put in the groundwork and knowing that you’ve effectively cheated your way to that achievement will mean it’s never the same.

“Let’s say it produces great work”

For whatever reason, people desire the ‘real product’ even if it’s functionally worse in every way than an artificial version. Some people insist on putting ivory keytops on a piano because it’s the real thing — even though they go yellow, break easily and rely on a material harmful to animals. The plastic alternative is stronger and longer lasting, but it’s not the real thing. As the message (from the forum) says, even if ‘it produces great work’, possibly something functionally better than any story a human could have written, you don’t have the authentic product of ‘real people, who put time and effort into writing things.’

Lesson 4 — We don’t just care about the story — we care about the story behind the story

The message also highlights two things: that a human submission takes effort and creativity to produce, and that this matters, even if the actual output is functionally no better than computer-generated text. I think I agree. I have always found that a great book means so much more to me when I discover the story behind it: the tale of the writer’s own conscious experience that led them to create the work.

Fahrenheit 451

Ray Bradbury’s magnum opus, Fahrenheit 451, is a brilliant book in itself, but it was made a little bit more special to me by the story behind its creation. Bradbury had a young child when the novel was conceived and couldn’t find a quiet place at home to write. He happened across an underground room full of typewriters hired out at 10c an hour. Bradbury wrote the whole book in that room, surrounded by others typing things he knew nothing about. Nine days and $9.80 later, we had Fahrenheit 451.

Vulfpeck — Sleepify

This doesn’t only apply to generated text. I recently spent far too much money importing a vinyl copy of Vulfpeck’s ‘Sleepify’ album: a record with 10 evenly spaced tracks and completely smooth grooves. Why? It’s just pure silence! While it is an awful record on its musical merit, and even the most basic music-generation algorithm could have created something better, I love it for its story.

The band Vulfpeck put the album on Spotify in 2014 and asked their fans to play it overnight while they slept. After about two months the album was pulled from Spotify, but not before the band had made a little over $20,000 in royalty payments, which they used to run the ‘Sleepify Tour’ entirely for free.

As an aside, I think an AI like GPT-2 could also do a great job of creating a charming back-story behind the story. To the earlier point though, if it didn’t actually happen and there wasn’t conscious human effort involved, it lacks authenticity. As soon as I know that, it won’t mean the same thing to me.

Lesson 5 — Sometimes it’s about the writing process, not about being read

[Image: GPT-2 response 2]

One thing that came out of my conversation with the moderators that I’d not even considered was that it’s not all about the people reading the content, sometimes there’s more pleasure and personal development to be gained from writing, and that’s something the forum actively wanted to promote.

In Netflix’s new show ‘After Life’ (no real spoilers!), the main character, Tony, works for a local newspaper. Throughout the series, Tony pokes fun at the newspaper, from the mundane selection of stories their town has to report on, to the man delivering the papers, who it turns out just dumps them in a nearby skip. Nobody actually reads the paper, and Tony takes that to mean their work is meaningless, up until the very end of the series, when he realises that it doesn’t matter who reads the paper, or if anyone reads it at all. What’s important instead is being in the paper. Everyone should have the chance to have their story heard, no matter how mundane it might sound to others. If it makes them feel special and part of something bigger, even just for a moment, then it’s doing good.

I’ve been writing for a few years now, and aside from the 18 blogs I now have on Medium, I have a vast amount of half-written thoughts, mostly garbage, and an ever-growing list of concepts that I’d like to expand on one day. Sometimes my writing is part of a bigger mission, to communicate around the uses, safety and societal impact of AI to a wide audience, and at those times I do care that my writing is read, and even better commented on and discussed. At other times, I use it as a tool to get my thoughts in order — to know the narrative and message behind a presentation that I need to make or a proposal that I need to submit. Sometimes, I write just because it’s fun.

If AI is capable of writing faster and better than humans, and it very much seems like it can (no doubt within some narrow parameters), it doesn’t mean that we can’t keep writing for these reasons. AI might capture the mindshare of readers, but I can still write for pleasure even if nobody is reading. However, I think it’ll mean people write a whole lot less.

I began writing because I was asked to for work. It was a chore at the start, difficult, time-consuming and something I’d only apply myself to because there was a real and definite need. Gradually it became easier, until suddenly I found myself enjoying it. If it weren’t for that concrete demand years ago, I don’t know if I ever would have begun, and if I’d be here now writing for pleasure.

Not a conclusion

It’s clear that language models like GPT-2 can have a positive impact on society. OpenAI has identified a handful of examples, like better speech recognition systems, more capable dialogue agents, writing assistants and unsupervised translation between languages.

None of this will be possible though unless we get the release strategy right, and have robust safety processes and policy to support it. Creative writing might not be the right application, so we need to ensure we identify the applications that society can agree are good uses of these language models. Codifying these applications that we approve and those that we want to protect in policy will help others to make the right decisions. Good communication will ensure that people understand what’s being used, why, and keep them onside. Robust security will prevent nefarious parties from circumventing policy and best practice.

It’s important to note that these lessons are anecdotal, derived from a single interaction. No sound policy is based on anecdotal evidence, but rather academic research, drawing in a wide range of opinions with an unbiased methodology before boiling down values into concrete rules for everyone to follow.

This isn’t a conclusion.

This story is just beginning.

[Image: GPT-2 response 3]

A sneak peek inside a hothouse sprint week extravaganza

Most public and private sector leaders are acutely aware that they are supposed to be living and breathing digital: working smarter, serving people better, collaborating more intuitively. So why do front-line realities so often make a state of digital nirvana feel like just that: an unattainable dream? The world is much messier and more complex for most organisations than they dare to admit, even internally. Achieving meaningful digital transformation, with my staff/ customers/ deadlines/ management structure/ budgets? It’s just not realistic.

That’s where the Innovation Practice at Sopra Steria steps in.

I count myself lucky to be one of our global network of DigiLab Managers. My job is not just to help our clients re-imagine the future; anyone can do that. It’s to define and take practical steps towards realising that new reality in meaningful ways, through the innovative use of integrated digital technologies, no matter what obstacles seem to bar the path ahead.

This is not innovation for the sake of it. Instead, our obsession is with delivering deep business performance, employee and customer experience transformation that really does make that living and breathing digital difference. Innovation for the sake of transformation, taking clients from the land of make-believe to the tried and tested, in the here and now.

The beautiful bit? The only essentials for this process are qualities that we all have to hand: the ability to ask awkward questions, self-scrutinise and allow ourselves to be inquisitive and hopeful, fearlessly asking “What If?”.

Welcome to five days of relentless focus, scrutiny and radical thinking

The practical approach we adopt to achieving all this takes the form of an Innovation Sprint: a Google-inspired methodology which lets us cover serious amounts of ground in a short space of time. The Sopra Steria version of this Sprint is typically conducted over 5 days at one of our network of DigiLabs. These modular and open creative spaces are designed for free thinking, with walls you can write on, furniture on wheels and a rich and shifting roll-call of experts coming together to share their challenges, insights and aspirations. We also try to have a resident artist at hand, because once you can visualise something, solving it becomes that bit easier.

The only rule we allow? That anything legal and ethical is fair game as an idea.

Taking a crowbar and opening the box on aspiration

Innovation Sprints are the best way I know to shake up complex challenges, rid ourselves of preconceptions and reform for success. I want to take you through the structure of one of the recent Sprints we conducted, to give you a peek at how they work, using the example of a Central Government client we have been working with. Due to the sensitive nature of the topics we discussed, names and details obviously need to stay anonymous.

In this Sprint we used a bulging kitbag of tools to drive out insight, create deliberate tensions, prioritise actions and, as one contributor neatly put it, ‘push beyond the obvious’. That kitbag included Journey Maps, Personas, Value Maps, Business Model Canvases and non-stop sketching alongside taking stacks of photos and videos of our work to keep us on track and help us capture new thinking.

Before we started, we outlined a framework for the five days in conjunction with two senior service delivery and digital transformation leads from the Central Government department in question. This allowed us to distil three broad but well-defined focus areas around their most urgent crunch points and pains. The three we settled on were ‘Channel shifting services’, ‘Tackling digital exclusion’ and ‘Upskilling teams with digital knowhow and tools’.

Monday: Mapping the problem

We kicked off by defining the problems and their context. Using a ‘Lightning Talks’ approach, we let our specialists and stakeholders rapidly download their challenges, getting it all out in the open and calling out any unhelpful defaults or limited thinking. In this particular Sprint, we covered legacy IT issues, employee motivation, citizen needs and vulnerabilities and how to deliver the most compassionate service, alongside PR, brand and press challenges, strategic aims and aspirations and major roadblocks. That was just Day One! By getting the tangle of challenges out there, we were able to start really seeing the size and shape of the problem.

Tuesday, Wednesday and Thursday: Diving into the molten core

This is where things always get fluid, heated and transformational. We looked in turn at the three core topics we wanted to address, following a set calendar each day. We would ‘decode’ in the morning, looking at challenges in more detail, again using ‘Lightning Talks’ from key stakeholders to orientate us. Our experts shared their pains in a frank and open way. We then drilled into each of our key topics, ideating and value mapping, identifying opportunities to harness innovation and adopt a more user-centric approach to technology.

At the heart of this activity we created key citizen and employee personas using a mixture of data-driven analysis and educated insight. An exercise called “How might we…?” helped us to free-think around scenarios, with key stakeholders deciding which challenges they wanted to prioritise for exploration. These then directed us to map key user journeys for our selected personas, quickly identifying roadblocks, testing our own assumptions, refining parameters and sparking ideas for smarter service design.

On each day we created ‘Day +1’ breakaway groups that stayed focused on the ideas generated the day before, ensuring that every topic had a chance to rest and then enjoy renewed focus.

Friday: Solidifying and reshaping for the future

On our final day, we pulled it all together and started to make the ideas real. We invited key stakeholders back into the room and revealed the most powerful insights and synergies we had unearthed. We also explored how we could use the latest digital thinking to start solving their most pressing challenges now, and evolve the service to where it would need to be in 3-5 years’ time. Our expert consultants and leads in automation and AI had already started to design prototypes, and we honestly validated their potential as a group. Some ideas flew, new ones were generated, some were revealed to be unworkable and some were banked, to be pursued at a later date. We then discussed as a team how to achieve the transformations needed at scale (the department is predicting a rapid four-fold growth in service use) while delivering vital quick wins that would make a palpable difference, at speed. This would help us to secure the very senior buy-in our clients needed for the deeper digital transformations required. To wrap up, we explored how we could blueprint the tech needed, work together to build tight business cases, design more fully-fledged prototypes, strike up new partnerships and financial models, and do it all with incredible agility.

Some photos from the week

Fast forward into the new

My personal motto is: How difficult could that be? When you’re dealing with huge enterprises and Central Government departments devoted to looking after the needs of some of the most vulnerable and disenfranchised in our society, the answer is sometimes: Very! But in my experience, there is nothing like this Sprint process for helping organisations of all stripes and sizes to move beyond unhelpful default thinking and get contributions from the people who really know the challenges inside out. With this client, we were able to map their challenges and talk with real insight and empathy about solutions, in ways they had never experienced before. We were also able to think about how we could leverage Sopra Steria’s own knowledge and embedded relationships with other government departments to create valuable strategic synergies and economies of scale.

A Sprint is never just about brainstorming around past challenges. It’s about fast-forwarding into a better, more digital, seamless and achievable future, marrying micro-steps with macro-thinking to get there. It’s an incredibly satisfying experience for all involved and one that delivers deep strategic insight and advantage, at extreme speed. And which organisation doesn’t need that?

Let’s innovate! If you’d like to book your own hothouse sprint week extravaganza, or just want to know more about the process, please get in touch.