Quantum Computers: A Beginner’s Guide

What they are, what they do, and what they mean for you

What if you could make a computer powerful enough to process all the information in the universe?

This might seem like something torn straight from fiction, and up until recently, it was. However, with the arrival of quantum computing, we are about to make it a reality. Recent breakthroughs by Intel and Google have catapulted the technology into the news. We now have lab prototypes, Silicon Valley start-ups and a multi-billion-dollar research industry. Hype is on the rise, and we are seemingly on the cusp of a quantum revolution so powerful that it will completely transform our world.

On the back of this sensationalism trails confusion. What exactly are these machines and how do they work? And, most importantly, how will they change the world in which we live?


At the most basic level, the difference between a standard computer and a quantum computer boils down to one thing: information storage. Information on standard computers is represented as bits, values of either 0 or 1, which provide the computer's operational instructions.

This differs on quantum computers, which store information at a physical scale so microscopic that the familiar laws of nature no longer apply. At this minuscule level, the laws of quantum mechanics take over and particles begin to behave in bizarre and unpredictable ways. As a result, these devices have an entirely different unit of information storage: qubits, or quantum bits.

Unlike the standard computer's bit, which can have the value of either 0 or 1, a qubit can have the value of 0, 1, or both 0 and 1 at the same time. It can do this because of one of the fundamental (and most baffling) principles of quantum mechanics: quantum superposition, the idea that one particle can exist in multiple states at the same time. Put another way: imagine flipping a coin. In the world as we know it (and therefore the world of standard computing), you can only have one of two results: heads or tails. In the quantum world, the result can be heads and tails.

What does all of this mean in practice? In short, the answer is speed. Because qubits can exist in multiple states at the same time, they are capable of running multiple calculations simultaneously. For example, a 1-qubit computer can conduct 2 calculations at the same time, a 2-qubit computer can conduct 4, and a 3-qubit computer can conduct 8, with the number doubling for every qubit added. Operating under these rules, quantum computers bypass the "one-at-a-time" sequence of calculation that a classical computer is bound by. In the process, they become the ultimate multi-taskers.
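To make that scaling concrete, here is a minimal sketch in Python (using NumPy; purely illustrative, not how a real quantum computer is programmed) showing that describing an n-qubit register takes 2**n amplitudes, and that a Hadamard gate on every qubit places the register in a superposition of all of them at once:

```python
import numpy as np

# An n-qubit register is described by 2**n complex amplitudes.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in the all-zeros basis state |000>

# A Hadamard gate on every qubit spreads the state over all 2**n
# basis states in equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
op = H
for _ in range(n - 1):
    op = np.kron(op, H)  # tensor product builds the full 3-qubit operator

state = op @ state
print(len(state))          # 8 amplitudes tracked at once for 3 qubits
print(np.round(state, 3))  # each amplitude is 1/sqrt(8), about 0.354
```

Each added qubit doubles the number of amplitudes a classical simulator has to track, which is exactly the exponential growth described above.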

To give you a taste of what that kind of speed might look like in real terms, we can look back to 2015, when Google and NASA partnered to test an early prototype of a quantum computer called D-Wave 2X. Taking on a complex optimisation problem, D-Wave was able to work at a rate roughly 100 million times faster than a single-core classical computer and produced a solution in seconds. Given the same problem, a standard laptop would have taken 10,000 years.


Given their potential for speed, it is easy to imagine a staggering range of possibilities and use cases for these machines. The current reality is slightly less glamorous. It is inaccurate to think of quantum computers as simply being "better" versions of classical computers. They won't necessarily speed up any task run through them (although they may do that in some instances). They are, in fact, only suited to solving highly specific problems in certain contexts, but there is still a lot to be excited about.

One possibility that has attracted a lot of fanfare lies in the field of medicine. Last year, IBM made headlines when it used its quantum computer to successfully simulate the molecular structure of beryllium hydride, the most complex molecule ever simulated on a quantum machine. Classical computers usually have extreme difficulty with this field of research; even supercomputers struggle to cope with the vast range of atomic (and sometimes quantum) complexities presented by large molecular structures. Quantum computers, on the other hand, can model and predict the behaviour of such molecules far more naturally, even at the smallest scales. This ability is significant not just in an academic context: it is precisely this process of simulating molecules that is used today to produce new drugs and treatments for disease. Harnessing the power of quantum computing for this kind of research could lead to a revolution in the development of new medicines.

But while quantum computers might set in motion a new wave of scientific innovation, they may also give rise to significant challenges. One potentially hazardous capability is the quantum computer's ability to factorise extremely large numbers. While this might seem relatively harmless at first sight, it is already stirring up anxieties in banks and governments around the world. Modern cryptography, which secures the majority of data worldwide, relies on complex mathematical problems, tied to factorisation, that classical computers lack the power to solve. Such problems, however, are no match for quantum computers, and the arrival of these machines could render modern methods of cryptography meaningless, leaving everything from our passwords and bank details to state secrets vulnerable to being hacked, stolen or misused in the blink of an eye.
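A toy example makes the asymmetry visible (plain Python, illustrative only; real RSA moduli run to 617 decimal digits, not 13). Multiplying two primes is instant, but recovering them from the product by brute force takes time that grows with the square root of the modulus; Shor's algorithm on a quantum computer would remove that barrier:

```python
# Toy illustration of the factorisation problem behind RSA-style keys.
p, q = 1_000_003, 1_000_033  # small primes; multiplying them is instant
n = p * q

def trial_factor(n):
    """Brute-force factorisation of an odd composite; cost grows with sqrt(n)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return n, 1  # n itself is prime

print(trial_factor(n))  # quick at 13 digits, hopeless at 617 digits
```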


Despite the rapid progress made over the last few years, an extensive list of obstacles still remains, with hardware right at the top. Quantum computers are extremely delicate machines, and a highly specialised environment is required to produce the quantum state that gives qubits their special properties. For example, they must be cooled to near absolute zero (roughly the temperature of outer space) and are extremely sensitive to electrical and thermal interference. As a result, today's machines are highly unstable, often maintaining their quantum states for just a few milliseconds before collapsing back into normality: hardly practical for regular use.

Alongside these hardware challenges marches an additional problem: a software deficit. Like classical computers, quantum computers need software to function. However, this software has proved extremely challenging to create. We currently have very few effective algorithms for quantum computers, and without the right algorithms they are essentially useless, like having a Mac without a power button or keyboard. Strides are being made in this area (QuSoft, for example), but we would need to see vast advances before widespread adoption becomes plausible. In other words, don't expect to start "quoogling" any time soon.

So despite all the hype that has recently surrounded quantum computers, the reality is that now (and for the foreseeable future) they are nothing more than expensive corporate toys: glossy, futuristic and fascinating, but with limited practical applications and a hefty price tag attached. Is the quantum revolution just around the corner? Probably not. Does that mean you should forget about them? Absolutely not.

Our Apprentices Answer

by Nadia Shafqat, Junior Software Engineer

In an ongoing series, we hear from our current and past apprentices about their experiences. Apprenticeships are offered to anyone at any level, from those at the start of their career to those who wish to change direction and learn new skills. This time, it's Nadia's turn.

Tell me about your journey in Sopra Steria.

I joined Sopra Steria's Apprenticeship Programme in October 2016 to get more hands-on, practical experience in the IT sector. My journey started off small, contributing to internal data projects where I built up my coding skills in Java and XQuery, before progressing to a 'leading edge' client project developing an application that will help the client continuously monitor, validate and communicate safety, clinical and product performance data. Initially, I focused on acquiring the necessary skills across various elements of the programme, including becoming proficient in Agile Scrum methodology and practice. However, with the support of the team and the resources available at Sopra Steria, I feel I am now at a level where I am a contributing member of the team.

What skills have you learnt?

I have been exposed to a variety of technical skills and languages, such as Java, Angular and MarkLogic. I have also picked up broader workplace skills, such as Agile ways of working, and used these to build on existing strengths like teamwork.

What has been your greatest achievement?

My greatest achievement was being nominated for the FDM Women in IT award.

Would you recommend an apprenticeship?

Definitely! An apprenticeship is a great way to experience the working industry and find what career you would like to pursue in the future, without any debt.

 

To learn more about our opportunities visit our apprenticeship page or if you have any further questions, email the team at early.careers@soprasteria.com

How to keep AI alive when death is inevitable

Uber was in the headlines again last week, this time because one of their driverless cars was involved in an accident which killed a woman crossing the road with her bicycle. Loss of life is always a tragedy and I don't want to diminish the significance of this death. However, accidents like this are likely to happen as we develop AI, and we should be able to agree on situations where projects must be shut down and situations where they can continue.

Footage was released showing the moments leading up to the collision. If we take the footage to be an honest, accurate and unaltered representation of events, it appears that the car had very little opportunity to avoid the collision: the victim was crossing the road away from a designated crossing, unlit, shortly after a corner.

It's hard to watch the footage without imagining yourself in that situation, and it's hard to see how a human driver could have avoided the incident. There would only have been a split second to react. It's quite possible that both human and machine would have produced the same result. Yet humans continue to be allowed to drive, while Uber is shutting down its self-driving vehicle programme.

So, following a human death, how can we decide when our projects must be axed and when they can continue?

Intentional vs incidental vs accidental

I would like to propose three categories of machine-caused death. Under two of the three circumstances (intentional and incidental), I suggest that the programmes must be shut down. Under the third (accidental), the project may continue, depending on a benchmark I will set out shortly.

Intentional

Intentional death caused by AI will result from the likes of 'lethal autonomous weapons'. I would propose that these should be banned from ever being created, under all circumstances. As Max Tegmark described in Life 3.0, AI has the potential to be either the greatest tool ever created for humanity or the most destructive, the latter being killbots. We want AI to take the first path, as chemistry and biology did: both became useful to humanity rather than being defined by chemical and biological weapons, which international treaties ban. Nuclear technology had the potential to be simply a power source, but has ended up with a dual purpose: generating energy to power our homes, and incredibly destructive weapons.

Here are a few of the most pressing risks:

  • With AI, the risk is potentially higher than with nuclear weapons. A machine with the coded right to take human life could do so with an efficiency orders of magnitude higher than any human could: infecting our healthcare systems and power grids, or even launching our own nuclear weapons against us.
  • As a race we are yet to create our first piece of bug-free software, and until we do, we run the risk of an extremely fast automated system killing people it was never intended to. Even if we regain control of the device within days, hours or minutes, the damage done could be thousands of times greater than any human could have achieved in that time.
  • Using a machine adds a layer of ethical abstraction that makes it easier for us to commit atrocities (Automating Inequality, Virginia Eubanks).

Incidental

An incidental death can be categorised as a death which happens as the result of another action, but not as its primary motivation. This would include any action where an automated system was sure that a person would be seriously injured or killed as a result of its primary goal, or the steps taken to achieve it, or attributed a high probability to that outcome. We may imagine machines allowing this to happen 'for the greater good', as an acceptable step towards a primary goal. This too should be avoided, and is cause to shut down any AI project which allows it to happen.

Why?

  • It's a short distance between this and lethal autonomous weapons. An AI is highly unlikely to be human in the way it thinks and acts. Unlike humans, who are carbon-based lifeforms that evolved over millions of years, an AI will be silicon-based and evolve quickly, over years, months or days. The chance of it feeling emotions the way a human does, if it feels them at all (guilt, empathy, love), is slim. If it is given the flexibility to allow human death, its idea of an atrocity may be very different from ours, and given its speed and accuracy, even the fastest reaction to stop this type of AI may come far too late to prevent a disaster.

Accidental

This is the only area where I believe a death in which an AI is involved may be forgiven, and even then not in all circumstances. I would describe an accidental death caused by an AI as one where, in spite of reasonable steps being taken to collect and analyse available data, an accident happened which resulted in death or injury, where that outcome had been assigned only a low probability and had become unavoidable. Here we may see this through the lens of the Uber driverless vehicle:

  • 'An accidental death': the car should never be allowed to sacrifice human life where it is aware of a significant risk (we will discuss 'significant risk' shortly), opting instead to stop entirely in the safest possible manner.
  • 'Reasonable steps': these should be defined by establishing a reasonable level of risk, above 0%, which is tolerable to us. More on this below.
  • 'Collect and analyse data': I think this is where the Uber project went wrong. Better sensors, processing hardware or software may have made this accident preventable.

An AI designed to tolerate only accidental death should not set the preservation of human life as its primary objective. Clearly defined final objectives for AI seemingly have unintended results: a Matrix-like human farm could maximise human life while sacrificing pleasure, while maximising pleasure could result in the AI dedicating its resources to a new drug that makes humans permanently happy, or putting us all in an ideal simulated world. Indirect normativity (Nick Bostrom, Superintelligence) seems a more appealing proposition: instead, teach an AI to:

  1. Drive a car to its destination
  2. Take any reasonable steps to avoid human harm or death while fulfilling step 1
  3. Do the intended meaning of this statement

But what if a driverless car finds itself in a situation where death is unavoidable, where it’s just choosing between one person or another dying?

If an AI designed only to tolerate accidental death finds itself in a situation where its only decision is between one life and another, then even if inaction would result in a death, it may still be compliant with this rule. We should instead judge this type of AI from an earlier moment: the actions leading up to the situation should have been taken to minimise the risk of death. Once new data shows that no options remain which avoid death or injury, the accident has already happened, and a separate decision-making system may come into force to decide what action to take.

A reasonable level of risk?

To enable AI to happen at all, we need to establish a reasonable level of risk for these systems. A Bayesian AI will always attribute a greater than 0% chance to anything happening, including harm or death of a human, in any action or inaction that it takes. For example, if a robot were to make contact with a human while holding no weapons, travelling slowly and covered in bubble wrap, the chance of it transferring bacteria or viruses with some small chance of causing harm is still higher than 0%. If we set the risk appetite for our AI at 0%, its only option will be to shut itself down as quickly and safely as possible. We must have a minimum accepted level of AI-caused harm in order to progress, and I think we can reach some consensus on it.
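As a minimal sketch of that idea (hypothetical names and numbers, not any real system's policy), a decision rule with an explicit non-zero tolerance might look like this:

```python
# Hypothetical sketch: a Bayesian agent never estimates exactly 0% risk,
# so a usable policy needs an explicit tolerance above zero.
RISK_TOLERANCE = 1e-7  # illustrative accepted probability of serious harm

def choose_action(actions):
    """Prefer the highest-utility action whose estimated harm risk is
    within tolerance; if none qualifies, take the lowest-risk option."""
    tolerable = [a for a in actions if a["p_harm"] <= RISK_TOLERANCE]
    if tolerable:
        return max(tolerable, key=lambda a: a["utility"])
    return min(actions, key=lambda a: a["p_harm"])  # safest available stop

actions = [
    {"name": "proceed",     "p_harm": 3e-6, "utility": 1.0},
    {"name": "slow down",   "p_harm": 5e-8, "utility": 0.6},
    {"name": "stop safely", "p_harm": 1e-9, "utility": 0.1},
]
print(choose_action(actions)["name"])  # -> slow down
```

Set RISK_TOLERANCE to zero and no action ever qualifies: the agent's only remaining move is the safest available stop, which is exactly the shutdown problem described above.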

With the example of the Uber self-driving car, we might assume parity between the number of deaths caused by human and machine. The machine was unable to avoid the death of a human, and if the evidence presented is an accurate reflection of the circumstances, it seems likely a human too would have been unable to avoid it. The reaction has nonetheless been strongly anti-automation, so we can tell that a 1-to-1 exchange between human and machine deaths is not the right level: we would prefer a human to be responsible for a death if the number of casualties is not reduced by using a machine.

If we change this number to 2-to-1, things begin to look different. If we could halve the number of deaths caused by driving, or any other human activity, automation begins to look a lot more appealing to a far greater number of people. If we extend this to a 99% reduction in deaths and injuries, the vast majority of people will lean towards AI over human actors.

Where this number stands exactly, I am not certain. It's also unlikely that the ratio would remain static, as growing trust in AI may move us in either direction. Indirect normativity may be our best option again in this instance, accounting for the moving standard to which we would hold it.

Setting a tolerance rate for error at 0% for anything is asking for failure. No matter how safe or foolproof a plan may seem, there will always be at least a tiny possibility of error. AI can't solve this, but it might be able to do a better job than us. If our goal is to protect and improve human life… maybe AI can help us along the way.

 

“I am your Father”… my experience with Shared Parental Leave

By Dave Parslew, Senior Internal Recruiter.

As well as looking after internal recruitment, Dave is a first-time dad. In this post he talks about anticipating the birth of his first born, the decision to use Shared Parental Leave, and why more men should be taking it up.

Shared Parental Leave (SPL) seemed to me like a fantastic opportunity to spend some quality time with my first born, Sam. During the pregnancy, my wife Anna and I decided that SPL was definitely for us, and when it came to 'D' day, we agreed that she would take the first nine months and I would take the last three. I always joked that May to August would be a perfect few months in the sun for me, though now, with the birth of my child and all the work of looking after a new baby, my views have of course changed!


The resemblance is already uncanny for Dave and baby Sam.

Quite a few years ago, I assumed that when I did have kids, I would have to go back to work after my two weeks of paternity leave and leave the all-important first year of quality time to my wife. I thought that was the only option, and at the time it was! However, things have changed, and the question I asked myself, and ask all the other eligible fathers out there, is: why wouldn't you?!

Around 285,000 couples in the UK are eligible every year for shared parental leave, but take-up "could be as low as 2%", according to the Department for Business. Nearly three years after it was introduced, around half of the general public are still unaware the option exists. Experts say that as well as a lack of understanding of what is on offer, cultural barriers and financial penalties are deterring some parents from sharing parental leave.

There seems to be a lot of "research undertaken by trusted organisations" about SPL out there, but I say don't just rely on the headlines and newspaper write-ups; delve a little deeper into the detail and look at the research for yourself!

Research shows the poor take-up of the policy is due to concerns about the lack of financial support for fathers. I say: if you manage your finances correctly and are prepared for the eventuality that you might be slightly out of pocket for a few months of your life, you will get to spend some amazing time with your children (time you will NEVER get back), so just go for it!

However, the main barrier to take-up remains: many men just don't want it because they're scared it would impact their careers. It's not that men's attitudes are anti-childcare these days. It's more that this fear outweighs fathers' enthusiasm to have a stint as a stay-at-home dad, or the desire to exercise their legal right. It's the dated belief that a man better serves his family by sticking to a traditional career path.

In my opinion, if you care that much about money, then perhaps you shouldn't have kids in the first place, as they WILL most definitely suck you dry of most of your finances. However, if you see it as I see it, then everyone's a winner!

Remember, this is a government-funded scheme, so in my case the company I work for (Sopra Steria) will have to cover my work for three months. They have been very accommodating about it, and in some ways educated by it, given the low uptake.

Fair enough, I won't get paid for three months, but there is an option to plan some 'Staying in Touch' days with HR (paid in full for the day), and I still accrue holiday, along with bank holidays, while I am off.

Hopefully my example will encourage others to do the same. To top it all off, of course, I will have an awesome few months with my new son. I am looking forward to this immensely, and the bottom line is: "you only live once"!

Below are the key points about SPL; learn more about the initiative here.

What is Shared Parental Leave?

  • Shared parental leave (SPL) was introduced in April 2015
  • It allows parents to share 50 weeks of leave and 37 weeks of pay after they have a baby
  • Parents can take time off separately or can be at home together for up to six months
  • SPL is paid at £140.98 per week or 90% of your average earnings, whichever is lower
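The pay rule in the last bullet is a simple 'lower of two numbers' calculation. A quick sketch using the 2018 figures quoted above (illustrative only; real entitlement rules carry further conditions):

```python
# Statutory SPL weekly pay: the lower of the flat rate (140.98 GBP in 2018)
# and 90% of average weekly earnings.
def spl_weekly_pay(avg_weekly_earnings):
    return min(140.98, 0.9 * avg_weekly_earnings)

print(spl_weekly_pay(500))  # 140.98 -> the flat-rate cap applies
print(spl_weekly_pay(120))  # 108.0  -> 90% of earnings is lower
```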

 

The Geek Shall Inherit

AI has the potential to be the greatest invention ever for humanity. It should benefit all of humanity equally, but instead we're heading towards a world in which one particular group, the geeks, will benefit most. AI is fundamentally more likely to favour the values of its designers, and whether we train our AI on a dataset gathered from humans or on pure simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer: humans are already riddled with bias. Be it confirmation, selection or inclusion bias, we constantly create unfair systems and draw inaccurate conclusions which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it's not perfect. AI can analyse dramatically more data than a human and, by doing so, generate a more rounded point of view. More rounded, however, is not completely rounded, and the problem is significant for any AI which can carry out a task orders of magnitude faster than a human.

To retain even our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1,000x faster, we end up with 100x more injustice in the world. To retain today's levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
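The arithmetic behind that claim is easy to check, treating injustice throughput as the error rate per transaction multiplied by the transaction rate:

```python
# Injustice throughput = (unethical decisions per transaction) x (transactions per unit time)
human    = 1.00  * 1     # normalised baseline: error rate 1, speed 1
ai_fast  = 0.10  * 1000  # 10% of the error rate, but 1,000x the speed
ai_match = 0.001 * 1000  # 0.1% of the error rate at the same speed-up

print(ai_fast / human)   # 100.0 -> a hundred times more injustice
print(ai_match / human)  # 1.0   -> parity with today's baseline
```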

For the sake of rhyme, I've titled this blog 'The Geek Shall Inherit'. I am myself using a stereotype, but I want to identify the people who are building AI today. Though I firmly support the idea that anyone can and should be involved in building these systems, that's not a reflection of our world today. Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not. This is wrong and damaging, and it needs remedying, but that's a problem to tackle in a different blog! For now, let's simply accept that the people building AI tend to be a certain type of person: geeks. And if we are to stereotype a geek, we're thinking of someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation, the problem is at its greatest. Though we may be using a dataset gathered from a more diverse group of people, there will still be selection bias in that data, as well as bias directly from the developers if they are tasked with annotating it. Whether intentionally or not, humans will always favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men like its creators more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join its board and vote on investments for the firm. VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference for algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning. This is the method employed by Google's DeepMind in the AlphaZero project. The significant leap between AlphaGo and AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it's not constrained by human knowledge of the game. A notable counter-example is 'move 37' against Lee Sedol, played by the earlier AlphaGo Lee, which was still trained on human game data. That move was seen as a stroke of creative brilliance no human would ever have played, even though the system had learned from human games.

Humans also use proxies to determine success in these games. An example of this is AlphaZero playing chess. Where humans use a points system on pieces as a proxy for their performance in a game, AlphaZero doesn't care about its score. It will sacrifice valuable pieces for cheap ones when other moves appear more beneficial, because it cares only about winning. And win it does, if only by a narrow margin.

So where is the bias in this system? Though the system may be training in a simulated world, two areas for bias remain. First, the layers of the artificial neural network are decided upon by those same biased developers. Second, it is simulating a game designed by humans, with a board and rules that humans laid down. Go, for instance, gives a first-move advantage to black, while chess gives it to white. Though I prefer to believe that the colours of the pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over another in life.

The same issue remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and human-maintained roads, and the human-written rules of the highway code. We might shift here from 'the geek shall inherit' to 'the lawyer shall inherit'. Less catchy, but simply making the system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity. Terminator scenarios notwithstanding, we should pursue the technology. I would propose tackling this issue on two fronts.

1. Fix the diversity problem in tech

This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines.  We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development.  We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can to fix it.

2. Hold AI to a higher standard than humans

Why? Because damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than humans and may simply accelerate an unfair system. If we want more equality in the world, a system must weight equality as a metric more heavily than speed, and at the very least ensure that it reduces inequality by as much as the process speed is increased, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we create a system 1,000x faster, we must reduce them by at least 99.9%.

Doing this only retains our current baseline. To make progress, we need to go a step further, reducing inequality by more than we increase the speed (the rule is sketched below).
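Put as a formula (a sketch of the rule implied by the two examples above): to merely hold the baseline, the fraction of unethical actions removed must be at least 1 - 1/speed-up.

```python
def required_error_reduction(speedup):
    """Minimum fraction by which unethical actions must fall just to hold
    today's level of inequality at a given process speed-up."""
    return 1 - 1 / speedup

print(required_error_reduction(10))    # 0.9   -> at least a 90% reduction
print(required_error_reduction(1000))  # 0.999 -> at least a 99.9% reduction
```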

Start with the basics

Tyler is one of Sopra Steria UK's Volunteers of the Year. As Volunteer of the Year 2017, he travelled to India to visit our international award-winning community programmes, run by our India CSR team. Read his previous write-up on his volunteer work here.

Yesterday took me to the new Government Girls Inter College, Hoshiyarpur, in Noida, India. The school opened this academic year and has 1,270 girls on its register, all from underprivileged backgrounds. Next year the school will grow to at least 2,000 pupils, and numbers are expected to climb a lot higher. Yesterday held great significance for the girls' school, and I had the honour of being able to commemorate the day with them.

For the last eight months, computer science has been taught as theory. More than one thousand girls have been learning IT skills from paper. Paper! Thankfully, yesterday we were able to celebrate the opening of a new computer lab with thirty new computers donated by Sopra Steria. The occasion was as joyous as you would expect. There were celebrations, speeches and an all-too-necessary ribbon-cutting ceremony: a fantastic moment that meant something to every member of the school, teacher or student.

Twenty 13-year-old children filed into the room and unwrapped the last of the plastic from the screens and keyboards of the newly installed computers. The excitement of the girls, ready to use these new machines, was palpable. Great, right? The next few moments were like a sucker-punch, though they delivered something I really ought to have expected. It started with a moment's hesitation from a young girl finding the power button. Then a look of confusion from another trying to left-click a mouse: perhaps the most basic of tasks for a child that age. The thing was, this was the first time any girl in that room had ever touched a computer. And for some reason, it was as if someone had told me the sky had fallen down. Obvious when you think about it, but near unthinkable for any child in the UK today.

After a quick breath, I went and sat with two girls, Yashika and Pooja. They had opened Microsoft Word, and it was great to see their teamwork as they hunted for the letters on the keyboard while our very own Gayathri Mohan took them through their ABCs. Within a few minutes, Pooja had moved her second hand onto the keyboard as she began to type sentences. Computers are an absolute necessity in the modern working world, yet in some government schools here there may be only one or two computers for several thousand children. Some have no computer access of any kind. For such a reasonable investment, the lives of thousands of children, their families and future families can be changed completely.

Many things we take for granted are new to girls like Yashika and Pooja. Their passion for tech is a familiar feeling, and I hope to keep contributing to bringing these new opportunities to them. This trip has shown me the individual lives being changed by the Sopra Steria India CSR programmes. It's hard to fathom that every year 70,000 children are introduced to tech through these schools, as well as being provided with free lunches, access to drinking water and toilet facilities, among many other initiatives. A big thank you to the team for guiding us around and allowing us to share in these moments.

Gender, AI and automation: How will the next part of the digital revolution affect women?

Automation and AI are already changing the way we work, and there is no shortage of concern expressed by the media, businesses, governments, labour organisations and many others about the resulting displacement of millions of jobs over the next decade.

However, much of the focus has been at the macro level, and on the medium and long-term effects of automation and AI.  Meanwhile, the revolution is already well underway, and its impact on jobs is being felt now by a growing number of people.

The wave of automation and AI happening now is most readily seen in call centres, customer services, and administrative and back-office functions. Much of what we used to do was by phone, talking directly to a person. We can now not only self-serve on companies' websites, but interact with bots in chat windows and text messages. Cashiers and administrative assistants are being replaced by self-service checkouts and robot PAs. The processing of payroll and benefits, and much of finance and accounting, has also been automated, eliminating the need for many people to do the work…

…eliminating the need for many women to do the work, in many cases.

A World Economic Forum report, Towards a Reskilling Revolution, estimated that 57% of the 1.4 million jobs that will be lost to automation belong to women. This displacement is not only a problem for these women and their families, but could also have wider negative ramifications for the economy. We know that greater economic participation by women, not less, is what the economy needs: it could contribute $250b to the UK's GDP.

The solution, both economic and ethical, lies in reskilling our workers. Businesses and economies benefit from a more highly skilled workforce. Society is enriched by diversity and inclusion. Individuals moving to new jobs (those that exist now and those that we haven't yet imagined) may even be more fulfilled in work that could be more interesting and challenging. Moreover, the WEF report suggests that many of the new jobs will come with higher pay.

But there are two things we need to bear in mind as we do the work of moving to the jobs of tomorrow:

  1. Our uniquely human skills: Humans are still better at creative problem solving and complex interactions where sensitivity, compassion and good judgment play a role, and these skills are used all the time in the kinds of roles being displaced. In business processes, humans are still needed to identify problems before they spread too far (an automated process based on bad programming will spread a problem faster than a human-led process; speed is not always an advantage).  AI will get better at some of this, but the most successful operators in the digital world of the future will be the ones who put people at the centre of their digital strategies.  Valuing the (too-long undervalued) so-called soft skills that these workers are adept at, and making sure these are built in to the jobs of the future, will pay dividends down the road.
  2. Employment reimagined: To keep these women in the workforce, contributing to society and the economy, we must expand the number of roles that offer part-time and flexible working options. One reason there are so many women doing these jobs is because they are offered these options. And with women still taking on most of the domestic and caring responsibilities, the need for a range of working arrangements is not going away anytime soon.  The digital revolution is already opening discussion of different models of working, with everything from providing people with a Universal Basic Income, to the in-built flexibility of the Gig Economy, but simpler solutions on smaller scales can be embraced immediately.  For example, Sopra Steria offers a range of flexible working arrangements and is making full use of digital technology to support remote and home working options.

Women are not the only people affected by the current wave of automation and AI technology. Many of the jobs discussed here are also undertaken by people in developing countries and in those where wages are lower, such as India and Poland. The jobs that those economies have relied on, at least in part, may not be around much longer in their current form.

Furthermore, automation and AI will impact a much wider range of people in the longer term.  For example, men will be disproportionately impacted by the introduction of driverless cars and lorries, because most taxi and lorry drivers are men.

Today, on International Women's Day 2018, I encourage all of us in technology to tune in to the immediate and short-term impacts and respond with innovative actions, perhaps drawing inspiration from previous technological disruptions. Let's use the encouraging increase in urgency, seen through movements such as #TimesUp and #MeToo, to address gender inequality while also working on technology-driven changes to employment. Let us speed up our efforts to offer more jobs with unconventional working arrangements, and to prepare our workers for the jobs of tomorrow. Tomorrow is not that far off, after all.

Jen Rodvold is Head of Sustainability & Social Value Solutions.  She founded the Sopra Steria UK Women’s Network in 2017 and is its Chair.  She has been a member of the techUK Women in Tech Council and the APPG for Women & Enterprise.  She recently led the development of the techUK paper on the importance of Returners Programmes to business, which can be found here.  Jen is interested in how business and technology can be used as forces for good.