Journey of BB8 (Part 1)

We all have dreamt of flying, fighting with a lightsabre, and controlling objects with our minds. I was lucky enough to make one of those dreams come true when DigiLab UK set out on an exploration of brain-computer interfaces (BCIs). I recruited a fellow dreamer, a UX designer, to join me, the software engineer, and together we started to look at different aspects of BCI. The initial task we chose was to control an object with our minds and, along the way, learn more about the technology. I was staring at my desk wondering which object to control when the answer stared back at me: the BB8 on my desk. Whether by fate or the Force, we knew what we had to do. We would control BB8 using a BCI device, the Emotiv EPOC+, which was already available to us and had previously been used for a hackathon project in Norway. In this two-part blog series I will take you through my journey of making this prototype, in the hope of helping others who are starting to explore BCI technology.

Setup

The Emotiv EPOC+ headset comes with 14 electrodes. Setup of the device is easy but tedious, as you are required to soak the electrodes in saline solution each time before screwing them onto the device. This process is needed to get good connectivity between the user's scalp and the electrodes. For people with more hair, it is naturally more difficult to get good connectivity, as they must adjust their hair to make sure there is nothing between the electrodes and the scalp. For some people the connectivity was sufficient with dry electrodes, but I recommend always soaking the electrodes before using the device, as you are more likely to get a fast, reliable connection. There are many videos available online that guide you through the initial setup of the device.

Electrodes need to be screwed onto the device

Emotiv EPOC+ with fourteen electrodes and the EEG head device

Training mental commands

I aimed to control BB8 with the EPOC+ headset, so I started to investigate the mental commands and their various functions. To use the mental commands you first need to train them. The training process enables the EPOC+ to analyse an individual's brainwaves and develop a personalised signature corresponding to each mental action.

Emotiv Xavier

The Emotiv Xavier control panel is an application that configures and demonstrates the Emotiv detection suites. It provides an interface to train mental commands, view facial expressions, performance metrics and raw data, and upload data to an Emotiv account. Users have the option to sign in to their account or use the application as a guest.

Each user is required to create a training profile, and it is possible to keep multiple training profiles under one Emotiv account. Every user needs their own profile, as each of us produces unique brainwaves.

Let’s train the commands

The first mental command, or action, the user must record is their "neutral" state. The neutral state is like a baseline or passive mental command. While recording this state, it is advisable to remain relaxed, as you would be when reading or watching TV. If the neutral state has not been recorded correctly, the user will not be able to get any other mental commands working properly. For some users, recording more neutral data results in better detection of the other mental commands.

The "record neutral" button allows the user to record up to 30 seconds of neutral training data. The recording finishes automatically after 30 seconds, but the user has the option to stop it at any time once they feel enough data has been collected. At least 6 seconds of recorded data is required to update the signature.

After recording the neutral state, the user can start to train any one of the 13 different actions available. For my research, I focused on only two mental actions, "push" and "pull." The Emotiv website provides tips and instructions on how to train the mental commands; it suggests keeping your thoughts consistent while training. To perform a mental action, the user must replicate the exact thought process or mental state they had during training. For example, if a user wants to train the "push" command, it is up to them what they think about or visualise for that action. Some users might imagine a cube moving away from them, others a cube shrinking; whatever works for them, as long as they remain consistent in their thoughts and mental state. If the user is distracted even for a second, it is advisable to retrain the action. As the user learns to produce a distinct and reproducible mental state for each action, the detection of these actions becomes more precise. In most cases, users must train an action several times before getting accurate results.
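To give a flavour of where this is heading, here is a minimal Python sketch of how a detected "push" or "pull" command could eventually be mapped to BB8 movement. This is purely illustrative: the read_mental_command() function and the BB8 class are hypothetical placeholders for your headset SDK's mental-command stream and a robot driver, not part of any Emotiv or Sphero SDK. The loop simply treats weak or unrecognised detections as neutral.

```python
import random
import time

# Hypothetical stand-in for the headset: a real setup would read an
# (action, power) pair from your BCI SDK's mental-command stream.
def read_mental_command():
    return random.choice(["neutral", "push", "pull"]), random.random()

# Hypothetical stand-in for a BB8 driver; replace with a real robot SDK.
class BB8:
    def roll(self, speed, heading):
        print(f"roll speed={speed} heading={heading}")

    def stop(self):
        print("stop")

THRESHOLD = 0.5  # ignore weak detections; tune per user and training profile

def control_loop(bb8, duration_s=30):
    """Poll the headset and translate push/pull into BB8 movement."""
    end = time.time() + duration_s
    while time.time() < end:
        action, power = read_mental_command()
        if action == "push" and power >= THRESHOLD:
            bb8.roll(speed=80, heading=0)      # roll away from the user
        elif action == "pull" and power >= THRESHOLD:
            bb8.roll(speed=80, heading=180)    # roll back towards the user
        else:
            bb8.stop()                         # treat anything else as neutral
        time.sleep(0.2)

if __name__ == "__main__":
    control_loop(BB8())
```

The interesting design decision is the threshold: the detection power fluctuates, and a higher threshold means fewer false moves but also more missed commands, so it needs tuning per user and per training profile.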

While training the "push" action, I placed BB8 on a white table and imagined it moving away from me. By replicating the same thought, imagining BB8 rolling away from me across the table, I was able to perform the mental action. However, when I placed BB8 on the carpet, I failed. This may have been because the different colour of the carpet distracted me: I was unable to replicate my exact mental state and therefore failed to perform the mental action. For me, the environment needed to be the same to reproduce my specific mental state; however, this varies from user to user.

Emotiv Xavier gives the option to view an animated 3D cube on the screen while training an action. Some users find it easier to maintain the necessary focus and consistency if the cube is animated to perform the intended action as a visualisation aid during training; the user can, in effect, watch themselves performing the action through the cube. The cube remains stationary unless the user is performing one of the (already trained) mental actions, or unless the user selects the "Animate model according to training action" checkbox for training purposes. It is advisable to train one action fully before moving on to the next, as it gets harder and harder to train as you add more mental actions.

Is the training process easy?

There are lots of tips and guidance on the Emotiv website for training mental commands, and users are given an interface to help them train and perform mental actions with the aid of animated 3D or 2D models. However, during my three days of training, I was not able to find an easy, generic way to train the mental commands. People are different. Some are more focused than others. Some like to close their eyes to visualise and perform the command. Some want the help of the animation. What I observed is that success depends on the person: how focused they are and how readily they can replicate a state of mind. There is no straightforward equation; you need time and patience. I was only able to achieve a 15% skill rating after training two mental actions. Only one of my colleagues reached a 70% skill rating, and he wasn't able to reproduce it later.

Neurofeedback

While searching for simpler ways to train mental commands, I came across a process known as neurofeedback. Neurofeedback is a procedure for observing your brain activity in order to understand and train your brain. A user observes what their brain is actually doing compared with what they want it to be doing. The user monitors their brainwaves, and if they near the desired mental state, they are rewarded with a positive response, which can be music, a video or advancing in a game. Neurofeedback is used to help reduce stress and anxiety, to aid sleep, and for other forms of therapeutic assistance.

Neurofeedback is a great way to train your brain for mental commands. For example, someone trying to perform the "push" command can observe their brain activity on screen and see whether they are being consistent, then slowly and steadily train their brain to replicate that specific state. Emotiv provides the "Emotiv 3D Brain Activity Map" and "Emotiv Brain Activity Map", paid applications that can be used to monitor, visualise and adjust brainwaves in real time. For our research we didn't try these applications; if you try them out, let us know how you got on!
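Just to illustrate the idea (we didn't build this ourselves), a neurofeedback loop is conceptually very simple: read some measure of how close the current brain activity is to the target state, and give a reward cue whenever it stays above a threshold. The toy Python sketch below shows that shape; read_focus_score() is a hypothetical placeholder for whatever metric your headset software actually exposes.

```python
import random
import time

# Hypothetical stand-in for the headset: a 0-1 score for how close the
# current brain activity is to the target mental state (a real SDK would
# expose band powers or performance metrics instead).
def read_focus_score():
    return random.random()

def neurofeedback_session(seconds=30, threshold=0.7):
    """Toy feedback loop: give a reward cue while the score holds above the threshold."""
    streak = 0
    end = time.time() + seconds
    while time.time() < end:
        score = read_focus_score()
        if score >= threshold:
            streak += 1
            # In a real application the reward would be music, a video or
            # progress in a game rather than a printed message.
            print(f"score={score:.2f}  reward (streak {streak})")
        else:
            streak = 0
            print(f"score={score:.2f}  keep trying")
        time.sleep(1)

if __name__ == "__main__":
    neurofeedback_session()
```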

Training is like developing a new skill. Remember how you learned to ride a bike, or how you learned to drive? It took time and practice, and it's the same for training mental commands. Companies do help by providing tips, instructions and software applications so that users can train and visualise, but in the end it is about acquiring a new skill, and users need practice. Some might learn faster than others, but for everyone it takes time.

Google Dupe-lex

Google unveiled an interesting new feature at their I/O conference last week – Duplex. The concept is this: you want to use your Google assistant to make bookings for you, but the retailer doesn't have an online booking system? Looks like you're going to be stuck making a phone call yourself.

Google wants to save you from that little interaction. Ask the Google assistant to make a booking for you and Duplex will call the place, let them know when you're free, what you want to book and when, and talk the retailer through it… with a SUPER convincing voice.

It's incredibly convincing, and nothing like the Google assistant voice that we're used to. It uses seemingly perfect human intonations, pauses, umms and ahs at the right moments. Knowing that it's a machine, you feel like you can spot the moments where it sounds a little bit robotic, but if I'm being honest, if I didn't know in advance I'd be hard pressed to notice anything out of the ordinary, and wouldn't for a moment suspect it was anything but a human.

I think what they're using here is likely a branch of the Tacotron 2 speech generation AI that was demoed last year. It was a big leap up from the Google assistant voice we are used to, and it was difficult to tell the difference between it and a human voice. If you want to see whether you can tell the difference, follow this link:

https://ai.googleblog.com/2017/12/tacotron-2-generating-human-like-speech.html


So, what’s the problem?

The big problem is that people are going to feel tricked (or 'duped', as I and likely 100 other people will like to joke). Google addressed this a little, saying that Duplex will introduce itself and tell the person on the other end of the phone that it's a robot, but I'm still not sure it's right.

I can absolutely see the utility in making this voice seem more human. If you receive a call from a robotic-sounding voice, you put the phone down; we expect the robot is going to be polite for just long enough to ask us for our credit card details for some obscure reason. By making the voice sound like a person, our behaviour changes to give that person time to speak – to give them the respect that we expect to receive from another person, rather than the bluntness with which we tend to address our digital assistants. After all, Alexa doesn't really care if you ask her to turn the lights off 'please', or just angrily bark at her to turn the lights off.

Making the booking could be just a little bit of a painful interaction. The second example that Google shows has a person trying to make a booking for four at a restaurant. It turns out that the restaurant doesn't take bookings for groups of fewer than five, and that it's in fact fine just to turn up as there will most likely be tables available. Imagine this same interaction with a machine. Imagine that conversation with one of those annoying digital IVR systems when you call a company and try to get through to the right person – saying 'I want to book a table'… 'I want to book a table'… 'TABLE BOOKING'… 'DINNER'. Our patience runs thin much faster when we're waiting for a machine than when we're waiting for a person.

Just because there is utility doesn't mean this deception is fair. I can see three issues with this.

  1. Even if the assistant introduces itself as a machine, the person won't believe it

It might just seem like a completely left-field comment and make people think they've misheard something. They'll either laugh it off for a second and continue to believe it's a person, or think they just couldn't quite make the word out right – especially as this conversation is happening over the phone.

  2. They know it's a robot, but they still behave like it's a human

Maybe we have people who hear that it's a robot, and know that robots are now able to speak like a human, but still react as though it's a person. This is a bit like the uncanny valley. They know it's a machine, and the rational part of their mind is telling them it's a machine, but the emotional or more instinctive part of their mind hears it as a human, and they still offer it much the same kind of emotion and time that they would a human.

  3. They know it's a machine and treat it like a machine

This is interesting, because I think it's exactly what Google doesn't want people to do. If there wasn't some additional utility in making this system sound 'human-like', they wouldn't have spent the time or money on the new voice model and would have shipped the feature with the old voice model long ago. If people treat it like a machine, we may assume that the chance of making a booking, or the right kind of booking, would be reduced.

If you believe the argument I've made here, then Duplex introducing itself as a machine is irrelevant. Google's intention is still for it to be treated like a human – and is this OK?

I'm not entirely sure it is. When people have these conversations, they're putting a bit of themselves into the relationship. It reminds me of Jean-Paul Sartre's story about his trip to the café. He was expecting to meet his friend Pierre, and left his house with all the expectations of the conversation he would have with Pierre, but when he arrived Pierre was not there. Despite the café being full, it felt empty to Sartre. I imagine a lot of people will feel the same when they realise that they've been speaking to a machine. As superficial as the relationships might be when you are making a booking over the phone, they are still relationships. When the person arrives for their meal, or their haircut, and realises that the person they spoke to before doesn't really exist – that it has no conscious experience – they'll feel empty.

They’ll feel kinda… duped…

Programmed Perspective: Empathy > Emotion for Digital Assistants

Personal assistants are anything but personal.  When I ask Alexa what the weather is, I receive an answer about the weather in my location.  When someone on the other side of the world asks Alexa that same question, they too will find out what the weather is like in their location. Neither of us will find Alexa answering with a different personality or the interaction further cementing our friendship. It is an impersonal experience.

When we talk about personal assistants, we want them to know when we need pure expediency from a conversation, when we want details expanded upon, and the different ways each of us likes to be spoken to.

I would like to propose two possible solutions to this problem – emotion and empathy. I'd like to show you, from my point of view, why empathy is the path we should be taking.


Emotion

An emotional assistant would be personal. It would require either a genuine internal experience of emotion (which is simply not possible today) or an accurate emulation of emotion. It would work in the same way that we build up relationships with people over time, starting from niceties and formality and gradually developing a relationship unique to the two parties that guides all their interactions. Sounds great, but it's not all plain sailing. I'm sure everyone has experienced a time when we've inadvertently offended a friend in a way that has made it more difficult to communicate for some time afterwards, or even damaged a relationship in a way that will never repair itself.

We really don’t want this with a personal assistant.  If you were a bit short with Alexa yesterday because you were tired, you still want it to set off your alarm the next morning.  You don’t want Alexa to tell you that it can’t be in the same room as you and to refuse to answer your questions until it gets a heartfelt apology.

Empathy

Empathy does not need to be emotional. Empathy requires that we put ourselves in the place of others to imagine how they feel, and to act appropriately. This, ideally, is what doctors do. A doctor must empathise with a patient, putting themselves in the patient's shoes to understand how they will react to difficult news and how to describe the treatment so that they feel as comfortable as possible. Importantly, though, the doctor should remain emotionally removed from the situation. If they were to personally feel the emotion of each appointment, it could become unbearable. Empathy helps them add a layer of abstraction, allowing them to shed as much of the emotion as possible when they return home.

This idea is described in Jean-Paul Sartre's 'Being and Nothingness'. Sartre describes two types of things:

  • Beings in themselves – An unconscious object, like a mug or a pen.
  • Beings for themselves – People and conscious things.

In our everyday lives we are a hybrid of the two. Though we are people, and are naturally beings for themselves, we adopt roles like doctor, manager, parent and more. These roles are objects, like a pen or a mug, as they have an unspoken definition and structure. We use these roles, or objects, to guide how we interact in different situations in life. In a new role we ask ourselves 'what should a manager do in this situation?', or 'what would a good doctor say?'. It may become less obvious as we grow into a role, but it's still there.

When we go into a store, we have an accepted code of conduct and a type of conversation we expect to have with the retailer. We naturally expect them to be polite, to ask us how we are, to share their knowledge of different products and services, and we know that their aim is to sell us something. We believe that we can approach them, a stranger, and ask question upon question in our preamble.

Sartre states ‘A grocer who dreams is less a grocer’ (to paraphrase).  Though the grocer may be more honest to themselves as a person, they’re reducing their utility as a grocer.  It’s easy to imagine stopping to buy some vegetables, and getting stuck in an irrelevant conversation for half an hour.  It might be a nice break from the norm, and a funny story to tell when you get home, but in general we want our grocers to be…. Grocers…

If we apply this to personal assistants, it really comes together. We want to receive the kind of personal service that we would get from someone who is really great at customer service. We want it to communicate information to us in the way that works best for us. By making an empathetic assistant rather than what we have today, we gain both personalisation and utility.

If we go fully emotional we gain more personalisation, but the trade-off is utility. What we don't want is an emotional assistant that becomes depressed and gets angry at us, or one at the other extreme that becomes giddy with emotion and struggles to structure a coherent sentence because of the digital butterflies in its stomach. That's both deeply unsettling and unproductive.

So, let’s build empathetic assistants.

AI, VR and the societal impact of technology: our takeaways from Web Summit 2017

Together with my Digital Innovation colleague Morgan Korchia, I was lucky enough to go to Web Summit 2017 in Lisbon – getting together with 60,000 other nerds, inventors, investors, writers and more. Now that a few weeks have passed, we’ve had time to collect our thoughts and reflect on what turned out to be a truly brilliant week.

We had three goals in mind when we set out:

  1. Investigate the most influential and disruptive technologies of today, so that we can identify those which we should begin using in our business
  2. Sense where our market is going so that we can place the right bets now to benefit our business within a 5-year timeframe
  3. Meet the start-ups and innovators who are driving this change and identify scope for collaboration with them

Web Summit proved useful for this on all fronts – but it wasn’t without surprises.  It’s almost impossible to go to an event like this without some preconceptions about the types of technologies we are going to be hearing about. On the surface, it seemed like there was a fairly even spread between robotics, data, social media, automation, health, finance, society and gaming (calculated from the accurate science of ‘what topic each stage focused on’). However, after attending the speeches themselves, we detected some overarching themes which seemed to permeate through all topics. Here are my findings:

  • As many as a third of all presentations focused strongly on AI – be that on the gaming, finance, automotive or health stage
  • Around 20% of presentations primarily concerned themselves with society, or the societal impact of technology
  • Augmented and virtual reality featured in just over 10% of presentations, significantly less than we have seen in previous years

This reflects my own experience at Web Summit, although I perhaps directed myself more towards the AI topic, spending much of my time between the 'autotech / talkrobot' stage and the main stage. From Brian Krzanich, the CEO of Intel, to Bryan Johnson, CEO of Kernel and previously Braintree, we can see that AI is so prevalent today that a return to the AI winter is unimaginable. It's not just hype; it's now too closely woven into the fabric of our businesses for that. What's more, too many people are implementing AI and machine learning in a scalable and profitable way for it to be dispensable. It's even getting to the point of ubiquity where AI just becomes software: it works, and we don't even consider the incredible intelligence sitting behind it.

An important sub-topic within AI is also picking up steam: AI ethics. A surprise keynote from Stephen Hawking reminded us that while successful AI could be the most valuable achievement in our species' history, it could also be our end if we get it wrong. Elsewhere, Max Tegmark, author of Life 3.0 (recommended by Elon Musk… and me!), provided an interesting exploration of the risks and ethical dilemmas that face us as we develop increasingly intelligent machines.

Society was also a theme visited by many stages. This started with an eye-opening performance from Margrethe Vestager, who spoke about how competition law clears the path for innovation. She used Google as an example: while highly innovative themselves, they abuse their position of power, pushing competitors down their search rankings and hampering other innovations' chances of becoming successful. The Web Summit closed with an impassioned speech from Al Gore, who gave us all a call to action to use whatever ability, creativity and funding we have to save our environment and protect society as a whole, for everyone's benefit.

As for AR and VR, we saw far less exposure this year than at previous events (although it was still the third most presented-on theme). I don't necessarily think this means it's going away for good, although it may mean that in the immediate term it will have a smaller impact on our world than we thought it might. As a result, rather than shouting about it today, we are looking for cases where it provides genuine value beyond a proof of concept.

I also take some interest in the topics which were missing, or at least presented less frequently. Among these I would put voice interfaces, cyber security and smart cities. I don't think this is because any of these topics have become less relevant: cyber security is more important now than ever, and voice interfaces are gaining huge traction in consumer and professional markets. However, an event like Web Summit doesn't need to add much to those conversations. Without a doubt, we now regard cyber security as intrinsic to everything we do, and, aside from a few presentations including one from Amazon's own Werner Vogels, we know that voice is here and that we need to be finding viable implementations. Rather than simply affirming our beliefs, I think a decision was made to put the focus elsewhere, on the things we needed to know more about to broaden our horizons over the week.

We also took the time to speak to the start-ups dotted around the event space. Some we took an interest in, like Nam.r, who are using AI in a way that drives GDPR compliance rather than causing the headache many of us assume it will. Others, like Mapwize.io and Skylab.global, are making use of primary technological developments which were formative and unscalable only a year ago. We also took note of the start-ups spun out of bigger businesses, like Waymo, part of Google's Alphabet business, which is acting as a bellwether on which many of the big players are placing their bets.

The priority for us now is to build some of these findings into our own strategy – much more of a tall order than spending a week in Lisbon absorbing them. If you're wondering what events to attend next year, Web Summit should be high up on your list, and I hope to see you there!

What are your thoughts on these topics? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Reflecting on 2016: what was all that about?

If you were in the business of predicting the future, you could probably choose a better year than 2016 with which to try to take a guess at just what might come to pass. There’s no doubt that it has been a tumultuous year internationally, with the repercussions of huge social, economic and political changes still being felt across the globe.

Undaunted, Sopra Steria's intrepid Horizon Scanning team set out back in January with the aim of identifying the technological trends likely to have an impact on our clients, their businesses and their customers, not only in 2016 but in the three to five years beyond.

Working within the frame of reference below, created to structure its observations and tell their stories, the team has spent 2016 in Sopra Steria's DigiLab with clients across both public and private sectors, testing and exploring its observations and insights as the key disruptive technologies it has identified have begun to evolve.

six topics + intersections between them giving us 15 lines of enquiry for 2017: see a text version of this diagram below

In this, the team’s final podcast of 2016, along with my colleagues Richard Potter and Ben Gilburt, I reflect on what we have seen and consider just what 2017 might have in store for us.

See more about Aurora and our London DigiLab.

What are your thoughts about 2016 and the technological trends for 2017? Leave a reply below or contact me by email.


Text version of Aurora’s horizon scanning topics:

Vertical view

  1. The digital human: interacting with services and each other through ubiquitous devices and data-driven experiences
  2. The organic enterprise: flexible, distributed, collaborative and networked organisations
  3. A smarter world: a crowded, ageing, more connected and fluid world

Horizontal view

  1. Intelligent insight and automation: the increase in the application of prescriptive analytics and automation to augment or displace human activity
  2. Ubiquitous interaction: the growth of sensing and interface technologies that make interactions between humans and computers more fluid, intuitive and pervasive
  3. Distributed disruption: the growth of decentralised processes enabled by the adoption of technologies which assure and automate security and trust

Journey interrupted

Contrary to what you might expect, this blog isn’t a reflection of my experience with Southern Rail. Instead it’s a look to the future, inspired by the Sopra Steria Horizon Scanning Team’s trip to Wired 2016.

In our horizon scanning programme, Aurora, we try to look beyond the technologies that are shaping our future and include the behavioural and social changes that are also making an impact, and this is where Wired’s annual event fits in so perfectly with our interests. Though Wired 2016 takes no shame in celebrating the advancements we’ve seen in technology and imagining what may come next, it also takes account of wider sociological and environmental changes, such as mass migration, climate change and global conflict.

The running theme throughout was 'journey interrupted', which seemed to reflect both the individual journeys of many speakers, who had set out with what seemed like a clear direction but ended up somewhere entirely different from what they had planned, and the inevitable interruption to our unsustainable way of living, which needs to change more urgently than ever.

In the technology content, an overwhelmingly strong theme was data. Now, data is nothing new at these kinds of events, and has not been for years. Data in this instance was framed most clearly in machine learning and AI, which again isn't anything new to us; what we're beginning to see is how achievable it is becoming. Historically the privilege of huge projects backed by a great deal of money, machine learning is now in the hands of start-ups and individual people, who are able to apply the same technology to problems which receive little or no funding but are important nonetheless. Applications ranged from health (limiting the spread of Ebola and the Zika virus, and supporting cancer discovery and treatment) to migrant demographics, through to the future of AI and the singularity.

The most poignant moments at Wired 2016, however, did not focus wholly on technology; they were about the big shifts happening to people and our environment. Predictions on climate change are looking more devastating than ever, with even fairly conservative scientists predicting that we may go well beyond the 2C maximum limit for warming above pre-industrial levels, potentially as far as 7C, which could wreak untold havoc on our world. The speakers at Wired 2016 were looking both at how we can change the way we live in the developed world to reduce our environmental impact, and at how we can curtail the impact of the developing world, ideally skipping straight to renewable power, in much the same way as it has largely skipped the internet on PCs and experienced the internet for the first time on mobile devices. The refugee crisis was also a recurring theme: the journeys people had set for their own lives have been completely torn apart, and speakers explored how communities around the world have found ways to get those journeys back on track by enabling refugees to work and encouraging entrepreneurship.

The story from Wired 2016 is that if we continue on this express train, we're heading to a bad place. We need instead to take a look at the route we're taking, or better still find an entirely new mode of transport. The technology is there, but the community and widespread adoption are not, and if we want success this is going to need to be a journey we take together.

What do you think? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Innovation: it’s at the heart of Sopra Steria’s DNA

by Eric Maman, Innovation Management, Sopra Steria

Thanks to a robust innovation policy, Sopra Steria devises innovative solutions for its customers using cutting-edge technologies. This policy fulfils two main missions: to anticipate new uses of technology and expand our range of offers. To achieve this, Sopra Steria deploys numerous resources both internally and in collaboration with its partners.

An innovative ecosystem

We provide daily support to our colleagues in the field of innovative research and development. Through investment, internal events and competitions, the group develops increasingly novel products and services. All of our colleagues are at the centre of the design thinking process.

Thanks to a policy of heavy investment in new technologies, Sopra Steria has the essential funds required to promote innovation. Furthermore, all of our colleagues work on researching and developing innovative technologies and services on a daily basis.

The creation of DigiLabs supports the development of innovative projects within the group. Following the work undertaken by the teams, innovative projects are launched every year in various fields related to digital transformation. We encourage colleagues to develop new projects, which are recognised with awards at internal events.

Each year, colleagues within the group are able to take part in competitions in order to present their ideas and projects. Thanks to the Innovation Awards, for example, teams have the possibility of presenting novel projects in terms of processes, services or products. A panel of judges determines the three most innovative projects and offers teams the option of developing and incorporating their project into the group’s solutions.

We also organise the "Innovation Crossroads", where colleagues from all countries and all professions are invited to discover and take part in discussions about the group's latest innovations. At each gathering, a Sopra Steria business unit or country is honoured in front of colleagues from across the globe; one of our customers is also asked to present an innovative project conducted in collaboration with the Sopra Steria teams. In addition, one of our strategic partners is invited to present their latest innovations.

Numerous collaborations with our strategic partners and/or with start-ups are also created to share expertise on innovation. In the interest of offering an innovation with high added value, we forge partnerships with innovative companies in a variety of sectors. Thanks to these available resources, Sopra Steria is committed to offering state-of-the-art services and products.

Adding value through innovation

Because innovation adds value, our innovation policy focuses not only on using technology but also on reinterpreting processes and ways of working. Our sense of innovation today enables us to advise our customers both on strategic priorities in their professional development and on their processes.

Co-innovation and co-creation are concepts implemented on a daily basis in partnership with our customers to create value and invent the solutions of the future. Thanks to the sharing of knowledge, experience and resources, this method of co-creation enables both parties to develop innovative products.

Personalised customer support

Our colleagues are at the centre of the group's innovation. The various initiatives, investment policies and internally organised events enable Sopra Steria to employ a large population of engineers and to deploy innovative projects.

Innovation is not only a question of technological breakthroughs; the company's structure, its business model and its various operating processes are all affected. Professional knowledge and the co-creation process between Sopra Steria and its customers are the source of innovation and, consequently, the source of value.