A sneak peek inside a hothouse sprint week extravaganza

Most public and private sector leaders are acutely aware that they are supposed to be living and breathing digital: working smarter, serving people better, collaborating more intuitively. So why do front line realities so often make achieving a state of digital nirvana feel like just that: an unattainable dream? The world is much messier and more complex for most organisations than they dare to admit, even internally. Achieving meaningful digital transformation, with my staff/customers/deadlines/management structure/budgets? It’s just not realistic.

That’s where the Innovation Practice at Sopra Steria steps in.

I count myself lucky to be one of our global network of DigiLab Managers. My job is not just to help our clients re-imagine the future; anyone can do that. It’s to define and take practical steps towards realising that new reality in meaningful ways, through the innovative use of integrated digital technologies, no matter what obstacles seem to bar the path ahead.

This is not innovation for the sake of it. Instead, our obsession is with delivering the kind of deep transformation of business performance and of employee and customer experience that really does make that living and breathing digital difference. Innovation for the sake of transformation: taking clients from the land of make-believe to the tried and tested, in the here and now.

The beautiful bit? The only essentials for this process are qualities that we all have to hand: the ability to ask awkward questions, self-scrutinise and allow ourselves to be inquisitive and hopeful, fearlessly asking “What If?”.

Welcome to five days of relentless focus, scrutiny and radical thinking

The practical approach we adopt to achieving all this takes the form of an Innovation Sprint: a Google-inspired methodology which lets us cover serious amounts of ground in a short space of time. The Sopra Steria version of this Sprint is typically conducted over five days at one of our network of DigiLabs. These modular and open creative spaces are designed for free thinking, with walls you can write on, furniture on wheels and a rich and shifting roll-call of experts coming together to share their challenges, insights and aspirations. We also try to have a resident artist on hand, because once you can visualise something, solving it becomes that bit easier.

The only rule we allow? That anything legal and ethical is fair game as an idea.

Taking a crowbar and opening the box on aspiration

Innovation Sprints are the best way I know to shake up complex challenges, rid ourselves of preconceptions and reform for success. I want to take you through the structure of one of the recent Sprints we conducted to give you a peek at how they work, using the example of a Central Government client we have been working with. Due to the sensitive nature of the topics we discussed, names and details obviously need to stay confidential.

In this Sprint we used a bulging kitbag of tools to drive out insight, create deliberate tensions, prioritise actions and, as one contributor neatly put it, ‘push beyond the obvious’. That kitbag included Journey Maps, Personas, Value Maps, Business Model Canvases and non-stop sketching alongside taking stacks of photos and videos of our work to keep us on track and help us capture new thinking.

Before we started, we outlined a framework for the five days in conjunction with two senior service delivery and digital transformation leads from the Central Government department in question. This allowed us to distil three broad but well-defined focus areas around their most urgent crunch points and pains. The three we settled on were ‘Channel shifting services’, ‘Tackling digital exclusion’ and ‘Upskilling teams with digital knowhow and tools’.

Monday: Mapping the problem

We kicked off by defining the problems and their context. Using a ‘Lightning Talks’ approach, we let our specialists and stakeholders rapidly download their challenges, getting it all out in the open and calling out any unhelpful defaults or limited thinking. In this particular Sprint, we covered legacy IT issues, employee motivation, citizen needs and vulnerabilities and how to deliver the most compassionate service, alongside PR, brand and press challenges, strategic aims and aspirations and major roadblocks. That was just Day One! By getting the tangle of challenges out there, we were able to start really seeing the size and shape of the problem.

Tuesday, Wednesday and Thursday: Diving into the molten core

This is where things always get fluid, heated and transformational. We looked in turn at the three core topics that we wanted to address, following a set calendar each day. We would ‘decode’ in the morning, looking at challenges in more detail, again using ‘Lightning Talks’ from key stakeholders to orientate us. Our experts shared their pains in a frank and open way. We then drilled into each of our key topics, ideating and value mapping, identifying opportunities to harness innovation and adopt a more user-centric approach to technology.

At the heart of this activity we created key citizen and employee personas using a mixture of data-driven analysis and educated insight. An exercise called “How might we…?” helped us to free-think around scenarios, with key stakeholders deciding which challenges they wanted to prioritise for exploration. These then directed us as we mapped key user journeys for our selected personas, quickly identifying roadblocks, testing our own assumptions, refining parameters and sparking ideas for smarter service design.

Each day we also created ‘Day +1’ breakaway groups that stayed focused on the ideas generated the day before, ensuring that every topic had a chance to rest and then enjoy renewed focus.

Friday: Solidifying and reshaping for the future

On our final day, we pulled it all together and started to make the ideas real. We invited key stakeholders back into the room and revealed the most powerful insights and synergies that we had unearthed. We also explored how we could use the latest digital thinking to start solving their most pressing challenges now and evolve the service to where it would need to be in 3-5 years’ time. Our expert consultants and leads in automation and AI had already started to design prototypes, and we honestly validated their potential as a group. Some ideas flew, new ones were generated, some were revealed to be unworkable and some were banked, to be pursued at a later date.

We then discussed as a team how to achieve the transformations needed at scale (the department is predicting a rapid fourfold growth in service use) while delivering vital quick wins that would make a palpable difference, at speed. This would help us to secure the very senior buy-in our clients needed for the deeper digital transformations required. To wrap up, we explored how we could blueprint the tech needed, work together to build tight business cases, design more fully fledged prototypes, strike up new partnerships and financial models and do it all with incredible agility.

Fast forward into the new

My personal motto is: How difficult could that be? When you’re dealing with huge enterprises and Central Government departments devoted to looking after the needs of some of the most vulnerable and disenfranchised in our society, the answer is sometimes: Very! But in my experience, there is nothing like this Sprint process for helping organisations of all stripes and sizes to move beyond unhelpful default thinking and get contributions from the people who really know the challenges inside out. With this client, we were able to map their challenges and talk with real insight and empathy about solutions, in ways they had never experienced before. We were also able to think about how we could leverage Sopra Steria’s own knowledge and embedded relationships with other government departments to create valuable strategic synergies and economies of scale.

A Sprint is never just about brainstorming around past challenges. It’s about fast-forwarding into a better, more digital, seamless and achievable future, marrying micro-steps with macro-thinking to get there. It’s an incredibly satisfying experience for all involved and one that delivers deep strategic insight and advantage, at extreme speed. And which organisation doesn’t need that?

Let’s innovate! If you’d like to book your own hothouse sprint week extravaganza, or just want to know more about the process, please get in touch.

A more caring conference: ITSMF 2018

The key themes at this year’s ITSMF conference were ensuring the ongoing relevance of IT Service Management (ITSM) and the importance of the people who work in the profession. These themes were constant throughout the various sessions, whether on digital transformation or the debate on the future ethics of AI.

The opening keynote was delivered by the mental health charity SANE, and it was received like no other I have witnessed before at an ITSMF conference. It really is OK to talk about mental health, and to loudly applaud a speaker who opens up on issues which some may see as taboo.

Of the 46 sessions that ran this year, 29 were people-focused: personal journeys, and the support and benefits of being in the profession. It really was people first at ITSMF 2018, rather than the usual ‘people, process and technology’ mantra. Whether the subject was process automation or chatbots, the focus was on the people using these technologies or enabled by them. Some of my personal highlights from the conference are below:

The Great Relevance Debate

This was the headline panel session with industry experts, including our very own Dave Green. The debate centred on the relevance of ITSM in the digital age. The conclusion was that there will always need to be an approach for managing IT services, so the principles of ITIL, COBIT, Lean, IT4IT and the like will remain relevant. VeriSM (a service management approach for the digital age) and the forthcoming ITIL4 demonstrate the evolution of ITSM best-practice thinking and its alignment to the digital age. In the future, key ITSM activities will be automated, accountability will be pushed to the coalface and metrics will be based on the customer experience. There will, though, still be a need for operational frameworks and for ITSM professionals measuring and improving service. The panel also noted that many organisations are tied long-term to bi-modal operations: legacy systems may best be managed with the disciplines of what we can call legacy ITSM. In short, ITSM is still relevant, but not in the same way as it was 10 years ago.

Experience Level Agreements (XLA) – Kicking the KPI habit

This session was all about creating measures of IT performance that are relevant to the end user of the services. The customer experience will become the critical success factor in the truly digital world. It is driving a power shift from the business to the customer, so to drive higher user demand businesses need to understand customers and their expectations. A means of effectively measuring the customer experience therefore needs to be in place: if XLAs are not in place, customers may go elsewhere even when all the IT metrics are green. IT metrics should be kept for IT, with relevant XLA metrics developed for the end customer. An XLA is created by starting with a targeted end result and re-engineering backwards. A key principle was that IT shouldn’t just be looking to align with the business; it should be aiming to ENABLE the business. More information can be found at https://xla.rocks/

The New Management of Service – Joining up the Enterprise

This session talked of the new management of service, joining up the enterprise, and the concept of Enterprise Service Management rather than ITSM in isolation. The speaker covered two key concepts. The first was the benefit of applying best-practice ITSM techniques to the wider enterprise; one example cited was the HR department using the technologies and processes of IT request management. The second was everything as a service, and the mapping of customer journeys end to end across all organisational pillars: IT, finance, sales, marketing, procurement, customer support, facilities management and HR. Break down the silos and manage enterprise services end to end from the customer’s perspective to reduce costs, eliminate waste and increase organisational efficiency. Other speakers at the conference also championed the concept of Enterprise Service Management.

Going digital isn’t transformation, it’s evolution

The speaker stated that 22% of companies think they have completed their digital transformation, which suggests they do not understand the nature of being a digital business. There were several sessions on digital transformation at the conference, but this one had some good pragmatic content. The speaker noted that business users often have better IT at home than at work, as home IT doesn’t have to queue for business priority. Going digital by just changing the front end is not transformation; it’s like a new coat of paint on a building, only the first step in a refurbishment that needs to move on to other areas like flooring and wiring. I especially liked the term GADU to describe the expectations of the digital consumer: it must search like Google, order like Amazon, be packaged/bundled like Dell and track like UPS at each step of the activity. Anything less than GADU capability is viewed less favourably by the customer. I also liked the speaker’s view that there is no such thing as the cloud, just someone else’s computer :). The speaker also talked of the importance of properly marketing digital transformations in the same way an organisation would market a new product. This applies to both internal and external digital transformations.

The Ethics of AI

There has been a lot of talk about AI and the ethics around it as we approach “the fourth industrial revolution”. The speaker had some interesting ideas on empathy engines that could take Siri and Alexa to the next level, and talked of the emergence of “robopsychologists”: people who would bridge the gap between human and machine learning and interaction, creating algorithms that enable machines to learn in the same way human babies do. This all felt a little far off for me, but the speaker also cited things happening right now around the ethics of AI: laws already enshrined in Germany, for example, ensure AI favours human life over everything else when making emergency decisions. A very thought-provoking session.

Overall I found the ITSMF 2018 conference forward-looking and compassionate, but still with a nod to the past. I met the man who first coined the terms “Incident” and “Problem”, whose lanyard displayed the words Malcolm Fry, “ITSM Legend”.

Sopra Steria to host two internal hackathons in Edinburgh and Glasgow!

Sopra Steria are hosting two internal hackathons this week across our Edinburgh and Glasgow offices, where participants will use DevOps tooling to deploy and manage applications on InnerShift. InnerShift is Sopra Steria’s internal container platform, based on Red Hat OpenShift, and will be used to facilitate the deployment and management of containers: standalone pieces of software that include everything needed to run an application, from code and runtime to system tools, libraries and settings.

Attendees will work in teams of 3-4 people and will have three hours to work through a list of pre-defined objectives, such as deployment through source-to-image (S2I) and the creation of CI/CD pipelines. The teams will be required to make changes to their application and to InnerShift to exploit some of the rich feature set available within the platform. The teams will be encouraged to work together, and experienced Sopra Steria architects will be in attendance to support and help with any issues that may arise.
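
As a flavour of what those objectives involve: InnerShift is built on Kubernetes, so once a team has deployed an application (for instance via S2I), its state can be checked programmatically with the standard Kubernetes Python client. A minimal sketch, assuming you are already logged into the cluster and that the team works in a hypothetical `hackathon-team-1` project:

```python
# pip install kubernetes
from kubernetes import client, config

# Reuse the local credentials created by `oc login`
config.load_kube_config()

apps = client.AppsV1Api()
for dep in apps.list_namespaced_deployment(namespace="hackathon-team-1").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```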

The main aim of these events is to help our employees upskill in the area of DevOps/OpenShift and facilitate knowledge transfer from more experienced employees to members of staff who may be new to the company or who may not have worked with OpenShift before. The events are open to all colleagues and our RSVPs so far range from graduates and developers to business analysts and UX consultants.

Sopra Steria are always working to roll out innovation across the organisation and we are sure that the output of these events will help to establish innovative uses of technology that we can share with both coworkers and clients alike. A blog will be published on the Sopra Steria website post-event that will discuss the content of the evenings – watch this space!

Quantum Computers: A Beginner’s Guide

What they are, what they do, and what they mean for you

What if you could make a computer powerful enough to process all the information in the universe?

This might seem like something torn straight from fiction, and up until recently, it was. However, with the arrival of quantum computing, we are about to make it a reality. Recent breakthroughs by Intel and Google have catapulted the technology into the news. We now have lab prototypes, Silicon Valley start-ups and a multi-billion dollar research industry. Hype is on the rise, and we are seemingly on the cusp of a quantum revolution so powerful that it will completely transform our world.

On the back of this sensationalism trails confusion. What exactly are these machines and how do they work? And, most importantly, how will they change the world in which we live?

At the most basic level, the difference between a standard computer and a quantum computer boils down to one thing: information storage. Information on standard computers is represented as bits – values of either 0 or 1 – and these provide operational instructions for the computer.

This differs on quantum computers, as they store information on a physical level so microscopic that the normal laws of nature no longer apply. At this minuscule level, the laws of quantum mechanics take over and particles begin to behave in bizarre and unpredictable ways. As a result, these devices have an entirely different system of storing information: qubits, or rather, quantum bits.

Unlike the standard computer’s bit, which can have the value of either 0 or 1, a qubit can have the value of 0, 1, or both 0 and 1 at the same time. It can do this because of one of the fundamental (and most baffling) principles of quantum mechanics: quantum superposition, the idea that one particle can exist in multiple states at the same time. Put another way: imagine flipping a coin. In the world as we know it (and therefore the world of standard computing), you can only have one of two results: heads or tails. In the quantum world, the result can be heads and tails.
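
For readers who like to see the mechanics, a qubit can be modelled as a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. A toy Python sketch of the “quantum coin flip” described above (an illustration of the maths, not how real quantum hardware is programmed):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the |0> state: definitely 0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

qubit = H @ zero              # equal superposition of 0 and 1
probs = np.abs(qubit) ** 2    # measurement probabilities: [0.5, 0.5]

# Measurement collapses the superposition to a single classical outcome:
# the quantum analogue of the coin finally landing heads or tails.
outcome = np.random.choice([0, 1], p=probs)
print(probs, outcome)
```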

What does all of this mean in practice? In short, the answer is speed. Because qubits can exist in multiple states at the same time, they are capable of running multiple calculations simultaneously. For example, a one-qubit computer can conduct 2 calculations at the same time, a two-qubit computer can conduct 4, and a three-qubit computer can conduct 8, increasing exponentially. Operating under these rules, quantum computers bypass the “one-at-a-time” sequence of calculation that a classical computer is bound by. In the process, they become the ultimate multi-taskers.
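
The flip side of that exponential power is an exponential cost for classical machines: describing n qubits means tracking 2^n amplitudes. A quick illustration of why classical simulation runs out of road:

```python
# Each extra qubit doubles the number of amplitudes in the state vector.
# At 16 bytes per complex amplitude, classical simulation quickly explodes:
for n in [1, 2, 3, 10, 50]:
    amplitudes = 2 ** n
    print(f"{n} qubits -> {amplitudes:,} amplitudes "
          f"({amplitudes * 16:,} bytes to store classically)")
# 50 qubits already needs roughly 18 petabytes of memory.
```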

To give you a taste of what that kind of speed might look like in real terms, we can look back to 2015, when Google and NASA partnered up to test an early prototype of a quantum computer called D-Wave 2X. Taking on a complex optimisation problem, D-Wave was able to work at a rate roughly 100 million times faster than a single-core classical computer and produced a solution in seconds. Given the same problem, a standard laptop would have taken 10,000 years.

Given their potential for speed, it is easy to imagine a staggering range of possibilities and use cases for these machines. The current reality is slightly less glamorous. It is inaccurate to think of quantum computers as simply being “better” versions of classical computers. They won’t simply speed up any task run through them (although they may do that in some instances). They are, in fact, only suited to solving highly specific problems in certain contexts, but there’s still a lot to be excited about.

One possibility that has attracted a lot of fanfare lies in the field of medicine. Last year, IBM made headlines when they used their quantum computer to successfully simulate the molecular structure of beryllium hydride, the most complex molecule ever simulated on a quantum machine. This is a field of research which classical computers usually have extreme difficulty with, and even supercomputers struggle to cope with the vast range of atomic (and sometimes quantum) complexities presented by complex molecular structures. Quantum computers, on the other hand, are able to read and predict the behaviour of such molecules with ease, even at a minuscule level. This ability is significant not just in an academic context; it is precisely this process of simulating molecules that is currently used to produce new drugs and treatments for disease. Harnessing the power of quantum computing for this kind of research could lead to a revolution in the development of new medicines.

But while quantum computers might set in motion a new wave of scientific innovation, they may also give rise to significant challenges. One potentially hazardous use case is the quantum computer’s ability to factorise extremely large numbers. While this might seem relatively harmless at first sight, it is already stirring up anxieties in banks and governments around the world. Modern cryptography, which ensures the security of the majority of data worldwide, relies on complex mathematical problems, tied to factorisation, that classical computers have insufficient power to solve. Such problems, however, are no match for quantum computers, and the arrival of these machines could render modern methods of cryptography meaningless, leaving everything from our passwords and bank details to state secrets extremely vulnerable: able to be hacked, stolen or misused in the blink of an eye.
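
To see why factorisation matters so much, it helps to look at what a classical machine is reduced to. The deliberately naive sketch below finds a factor by trial division, where the work grows with the square root of n, i.e. exponentially in the number of digits. RSA-style cryptography rests on that wall, and Shor’s algorithm on a quantum computer is what would knock it down:

```python
import math

def smallest_factor(n: int) -> int:
    """Naive trial division: work grows with sqrt(n),
    i.e. exponentially in the number of digits of n."""
    if n % 2 == 0:
        return 2
    for candidate in range(3, math.isqrt(n) + 1, 2):
        if n % candidate == 0:
            return candidate
    return n  # n is prime

# A toy 'RSA-style' modulus built from two primes:
p, q = 1_000_003, 1_000_033
n = p * q
print(smallest_factor(n))  # recovers 1000003, but only because n is tiny
# Real RSA moduli are 2048+ bits; trial division would take longer than
# the age of the universe, which is the whole point.
```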

Despite the rapid progress that has been made over the last few years, an extensive list of obstacles still remains, with hardware right at the top. Quantum computers are extremely delicate machines, and a highly specialised environment is required to produce the quantum state that gives qubits their special properties. For example, they must be cooled to near absolute zero (roughly the temperature of outer space) and are extremely sensitive to any kind of electrical or thermal interference. As a result, today’s machines are highly unstable, often maintaining their quantum states for just a few milliseconds before collapsing back into normality: hardly practical for regular use.

Alongside these hardware challenges marches an additional problem: a software deficit. Like classical computers, quantum computers need software to function. However, this software has proved extremely challenging to create. We currently have very few effective algorithms for quantum computers, and without the right algorithms they are essentially useless, like having a Mac without a power button or keyboard. There are strides being made in this area (QuSoft, for example), but we would need to see vast advances in the field before widespread adoption becomes plausible. In other words, don’t expect to start “quoogling” any time soon.

So despite all the hype that has recently surrounded quantum computers, the reality is that now (and for the foreseeable future) they are nothing more than expensive corporate toys: glossy, futuristic and fascinating, but with limited practical applications and a hefty price tag attached. Is the quantum revolution just around the corner? Probably not. Does that mean you should forget about them? Absolutely not.

The Geek Shall Inherit

AI has the potential to be the greatest ever invention for humanity. And it should be for the benefit of all humanity equally, but instead we’re heading towards a world in which a particular group, the geeks, benefits most from AI. AI is fundamentally more likely to favour the values of its designers, and whether we train our AI on a data set gathered from humans or on purely simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer: humans are already riddled with bias. Be it confirmation, selection or inclusion bias, we constantly create unfair systems and draw inaccurate conclusions which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it’s not perfect. AI can analyse dramatically more data than a human and, by doing so, generate a more rounded point of view. More rounded, however, is not completely rounded, and this problem becomes significant with any AI that can carry out a task orders of magnitude faster than a human.

To merely retain our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1,000x faster, we end up with 100x more injustice in the world. To retain today’s levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
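
The arithmetic behind that claim is worth making explicit. If total injustice scales with the error rate per transaction multiplied by the transaction rate, a minimal sketch looks like this:

```python
def injustice_multiplier(speedup: float, error_ratio: float) -> float:
    """How much more injustice an automated system produces than the
    human process it replaces.

    speedup:     how many times faster the system is (e.g. 1000)
    error_ratio: its unethical decisions per transaction, as a fraction
                 of the human rate (e.g. 0.10 = 10% as many)
    """
    return speedup * error_ratio

print(injustice_multiplier(1000, 0.10))   # 100.0 -> 100x more injustice
print(injustice_multiplier(1000, 0.001))  # 1.0   -> merely breaks even
```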

For the sake of rhyme, I’ve titled this blog ‘The Geek Shall Inherit’. I am myself using a stereotype, but I want to identify the people who are building AI today. Though I firmly support the idea that anyone can and should be involved in building these systems, that’s not a reflection of our world today. Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not. This is wrong, damaging and needs remedying, but that’s a problem to tackle in a different blog! Let’s simply accept, in this instance, that the people building AI tend to be a certain type of person: geeks. And if we are to stereotype a geek, we’re thinking of someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation, the problem is at its greatest. Though we may be using a dataset gathered from a more diverse group of people, there will still be selection bias in that data, as well as bias directly from the developers if they are tasked with annotating it. Whether intentionally or not, humans are always going to favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men like its developers more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join their board and vote on investments for the firm. VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference towards algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning, the method employed by Google’s DeepMind in the AlphaZero project. The significant leap from AlphaGo to AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it is not constrained by human knowledge of the game. The exception here is ‘move 37’ against Lee Sedol, played by AlphaGo Lee prior to the application of pure self-play deep reinforcement learning. This move was seen as a stroke of creative brilliance that no human would ever have played, even though that system was trained on human data.

Humans also use proxies to determine success in these games. An example of this is AlphaZero playing chess. Where humans use a points system on pieces as a proxy to understand their performance in a game, AlphaZero doesn’t care about its score. It will sacrifice valuable pieces for cheap ones when moves which appear more beneficial are available, because it cares only about winning. And win it does, if only by a narrow margin.

So where is the bias in this system? Though the system may be training in a simulated world, two areas for bias remain. First, the layers of the artificial neural network are decided upon by those same biased developers. Second, it is simulating a game designed by humans, with a human-designed board and rules. Go, for instance, gives the first move, and with it an advantage, to black, while chess gives it to white. Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over others in life.

The same issue, however, remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and human-maintained roads, and the human-written rules of the highway code. We might shift here from ‘the geek shall inherit’ to ‘the lawyer shall inherit’. Less catchy, but making a system learn from rules designed by a select group of people will introduce some bias, even if it simulates its own training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity.  Terminator scenarios permitting, we should pursue the technology.  I would propose tackling this issue from two fronts.

1. Improve diversity and representation in AI development

This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines.  We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development.  We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can to fix it.

2. Hold AI to a higher standard than humans

Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than us humans and may simply accelerate an unfair system. If we want more equality in the world, a system must focus more heavily on equality as a metric than on speed, and at the very least ensure that it reduces inequality by as much as the process speed is increased, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we make a system 1,000x faster, we must reduce them by at least 99.9%.

Doing this only retains our current baseline. To make progress in this area we need to go a step further, pushing the reduction in inequality beyond what the increase in speed demands.
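
Expressed as a formula, breaking even requires reducing unethical decisions per transaction by at least 1 − 1/speedup, which is where the 90% and 99.9% figures above come from; anything beyond that threshold is genuine progress:

```python
def break_even_reduction(speedup: float) -> float:
    """Minimum reduction in unethical decisions per transaction for a
    system sped up by `speedup` to produce no more injustice than before."""
    return 1 - 1 / speedup

for s in (10, 1000):
    print(f"{s}x faster -> at least {break_even_reduction(s):.1%} reduction needed")
# 10x faster   -> at least 90.0% reduction needed
# 1000x faster -> at least 99.9% reduction needed
```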

AI, VR and the societal impact of technology: our takeaways from Web Summit 2017

Together with my Digital Innovation colleague Morgan Korchia, I was lucky enough to go to Web Summit 2017 in Lisbon – getting together with 60,000 other nerds, inventors, investors, writers and more. Now that a few weeks have passed, we’ve had time to collect our thoughts and reflect on what turned out to be a truly brilliant week.

We had three goals in mind when we set out:

  1. Investigate the most influential and disruptive technologies of today, so that we can identify those we should begin using in our business
  2. Sense where our market is going, so that we can place the right bets now to benefit our business within a five-year timeframe
  3. Meet the start-ups and innovators who are driving this change, and identify scope for collaboration with them

Web Summit proved useful for this on all fronts – but it wasn’t without surprises.  It’s almost impossible to go to an event like this without some preconceptions about the types of technologies we are going to be hearing about. On the surface, it seemed like there was a fairly even spread between robotics, data, social media, automation, health, finance, society and gaming (calculated from the accurate science of ‘what topic each stage focused on’). However, after attending the speeches themselves, we detected some overarching themes which seemed to permeate through all topics. Here are my findings:

  • As many as a third of all presentations strongly focused on AI, whether on the gaming, finance, automotive or health stage
  • Around 20% of presentations primarily concerned themselves with society, or the societal impact of technology
  • Augmented and virtual reality featured in just over 10% of presentations, significantly less than we have seen in previous years

This reflects my own experience at Web Summit, although I perhaps directed myself more towards the AI topic, spending much of my time between the ‘autotech / talkrobot’ stage and the main stage. From Brian Krzanich, the CEO of Intel, to Bryan Johnson, CEO of Kernel and previously Braintree, we can see that AI is so prevalent today that a return to the AI winter is unimaginable. It’s not just hype; it’s now too closely woven into the fabric of our businesses to be that anymore. What’s more, too many people are implementing AI and machine learning in a scalable and profitable way for it to be dispensable. It’s even getting to the point of ubiquity where AI just becomes software: it works, and we don’t even consider the incredible intelligence sitting behind it.

An important sub-topic within AI is also picking up steam: AI ethics. A surprise keynote from Stephen Hawking reminded us that while successful AI could be the most valuable achievement in our species’ history, it could also be our end if we get it wrong. Elsewhere, Max Tegmark, author of Life 3.0 (recommended by Elon Musk… and me!), provided an interesting exploration of the risks and ethical dilemmas that face us as we develop increasingly intelligent machines.

Society was also a theme visited by many stages. This started with an eye-opening talk from Margrethe Vestager, who spoke about how competition law clears the path for innovation. She used Google as an example: while highly innovative themselves, they abuse their position of power, pushing competitors down their search rankings and hampering other innovations’ chances of becoming successful. The Web Summit closed with an impassioned speech from Al Gore, who gave us all a call to action to use whatever ability, creativity and funding we have to save our environment and protect society as a whole, for everyone’s benefit.

As for AR and VR, we saw far less exposure this year than at previous events (although it was still the third most-presented theme). I don’t necessarily think this means it’s going away for good, although it may mean that in the immediate term it will have a smaller impact on our world than we thought it might. As a result, rather than shouting about it today, we are looking for cases where it provides genuine value beyond a proof of concept.

I also took some interest in the topics which were missing, or at least presented less frequently. Amongst these I put voice interfaces, cyber security and smart cities. I don’t think this is because any of these topics have become less relevant. Cyber security is more important now than ever, and voice interfaces are gaining huge traction in consumer and professional markets. However, an event like Web Summit doesn’t need to add much to those conversations. We now regard cyber security as intrinsic to everything we do, and aside from a few presentations, including one from Amazon’s own Werner Vogels, we know that voice is here and that we need to be finding viable implementations. Rather than simply affirming our beliefs, I think a decision was made to put the focus elsewhere, on the things we needed to know more about to broaden our horizons over the week.

We also took the time to speak to the start-ups dotted around the event space. Some we took an interest in, like Nam.r, who are using AI in a way which drives GDPR compliance rather than causing the headache many of us assume it may result in. Others, like Mapwize.io and Skylab.global, are making use of technological developments which were formative and unscalable only a year ago. We also took note of the start-ups spun out of bigger businesses, like Waymo, part of Google’s Alphabet business, which is acting as a bellwether on which many of the big players are placing their bets.

The priority for us now is to build some of these findings into our own strategy, much more of a tall order than spending a week in Lisbon absorbing. If you’re wondering what events to attend next year, Web Summit should be high up on your list, and I hope to see you there!

What are your thoughts on these topics? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Have you heard the latest buzz from our DigiLab Hackathon winners?

The innovative LiveHive project was crowned winner of the Sopra Steria UK “Hack the Thing” competition which took place last month.

Sopra Steria DigiLab hosts quarterly Hackathons, each with a specific challenge; the most recent was named ‘Hack the Thing’. Whilst the aim of the hack was sensor- and IoT-focused, the solution had to address a known sustainability issue. The LiveHive team chose to focus their efforts on monitoring and improving honey bee health and husbandry, and on supporting new beekeepers.

A Sustainable Solution 

Bees play an important role in sustainability within agriculture. Their pollinating services are worth around £600 million a year in the UK in boosting yields and the quality of seeds and fruits[1]. The UK had approximately 100,000 beekeepers in 1943; by 2010 this number had dropped to 44,000[2]. Fortunately, in recent years there has been a resurgence of interest in beekeeping, which has highlighted the need for a product that allows beekeepers to explore and extend their knowledge and capabilities through modern, accessible technology.

LiveHive allows beekeepers to view important information about the state of their hives and receive alerts, all on their smartphone or mobile device. The social and sharing side of LiveHive is designed to engage and support new beekeepers and give them a platform for more meaningful help from their mentors. The product also allows data to be recorded and analysed, aiding national and international research and furthering education on the subject.

The LiveHive Model

The LiveHive solution integrates three services – hive monitoring, hive inspection and a beekeeping forum – offering access to integrated data and enabling its exchange.

“As a novice beekeeper I’ve observed firsthand how complicated it is to look after a colony of bees. When asking my mentor questions I find myself having to reiterate the details of the particular hive and the history of the colony being discussed. The mentoring would be much more effective and valuable if they had access to the background and context of the hive’s scenario.”

LiveHive integrates the following components:

  • Technology Sensors: to monitor conditions such as temperature and humidity in a bee hive, transmitting the data to Azure cloud for reporting.
  • Human Sensors: a Smartphone app that enables the beekeeper to record inspections and receive alerts.
  • Sharing Platform: to allow the novice beekeeper to share information with their mentors and connect to a forum where beekeepers exchange knowledge, ideas and experience. They can also share the specific colony history to help members to understand the context of any question.

How does it actually work?

A Raspberry Pi measures temperature, humidity and light levels in the hive and transmits the measurements to the Microsoft Azure cloud through its IoT Hub.
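
As a rough illustration of that pipeline (a sketch, not the team’s actual code): on the Pi, a few lines of Python with Microsoft’s azure-iot-device SDK are enough to push readings to an IoT Hub. The connection string and the sensor-reading stub below are placeholders:

```python
# pip install azure-iot-device
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: the real device connection string comes from the IoT Hub portal
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=hive01;SharedAccessKey=<key>"

def read_sensors() -> dict:
    # Placeholder: real code would read the hive's temperature, humidity
    # and light sensors via the Pi's GPIO pins
    return {"temperature_c": 34.5, "humidity_pct": 61.0, "light_lux": 3.2}

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
try:
    while True:
        client.send_message(Message(json.dumps(read_sensors())))
        time.sleep(300)  # one reading every five minutes
finally:
    client.shutdown()
```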

Sustainable Innovation

On a larger scale, the data behind the hive sensor information and the beekeepers’ inspection records creates a large, unique source of primary beekeeping data. This aids research and education into the effects of beekeeping practice on yields and bee health, presenting opportunities to collaborate with research facilities and institutions.

The LiveHive roadmap also plans to put beekeepers in touch with the local community through the website, allowing members of the public to report swarms, offer apiary sites and even find out who may be offering local honey!

What’s next? 

The team have already created a buzz with fellow bee projects and beekeepers within Sopra Steria by forming the Sopra Steria International Beekeepers Association, which will be the beta test group for LiveHive. Further opportunities will also be explored by applying the same service design principles to other species, which could aid in Government inspection work. The team are also looking at ways to collaborate with Government directorates in Scotland.

It’s just the start for this lot of busy bees but a great example of some of the innovation created in Sopra Steria’s DigiLab!

[1] Mirror, 2016. Why are bee numbers dropping so dramatically in the UK?  

[2] Sustain, 2010. UK beekeeping in decline