How to keep AI alive when death is inevitable

Uber was in the headlines again last week, this time because one of their driverless cars was involved in an accident which killed a cyclist.  Loss of life is always a tragedy and I don’t want to diminish the significance of this death. However, accidents like this are likely to happen as we develop AI, and we should be able to agree on situations where projects must be shut down and times when they can continue.

We saw footage released showing the moments leading up to the collision.  If we take the footage to be an honest, accurate and unaltered representation of events, it appears that the car had very little opportunity to avoid this happening, with the cyclist crossing the road away from a designated crossing, unlit, and shortly after a corner.

It’s hard to watch the footage without imagining yourself in that situation, and it’s hard to see how a human driver could have avoided the incident.  There would only have been a split second to react.  It’s quite possible that both human and machine would have produced the same result – yet humans continue to be allowed to drive, and Uber is shutting down its self-driving vehicle programme.

So – Following a human death, how can we decide when our projects must be axed and when they can continue?

Intentional vs incidental vs accidental

I would like to propose three categories of machine-caused death.  Under two of the three circumstances (intentional and incidental) I suggest that the programmes must be shut down.  Under the third (accidental), the project may continue, depending on a benchmark I will set out shortly.


Intentional death caused by AI will result from the likes of ‘lethal autonomous weapons’.  I would propose that these should be banned from ever being created, under all circumstances.  As Max Tegmark described in Life 3.0, AI has the potential to be either the greatest tool ever created for humanity or the most destructive – the latter being killbots.  We want AI to go in the first direction, like chemistry and biology, which became useful to humanity rather than being defined by chemical and biological weapons – we have international treaties to ban those.  Nuclear technology had the potential to be simply a power source to help humanity, but has ended up with a dual purpose – generating energy to power our homes, and incredibly destructive weapons.

Here are a few of the most pressing issues:

  • With AI the risk is potentially higher than with nuclear weapons. A machine with the coded right to take human life could do so with an efficiency orders of magnitude higher than any human could – infecting our healthcare systems or power grids, or even launching our own nuclear weapons against us.
  • As a race we are yet to create our first piece of bug-free software, and until we do, we run the risk of these extremely fast automated systems killing people they were never intended to. And even if we regain control of the device within days, hours or minutes, the damage done could be thousands of times greater than any human could have achieved in that time.
  • Using a machine adds a layer of ethical abstraction that allows us to commit atrocities (Automating Inequality, Virginia Eubanks).


An incidental death can be categorised as a death which happens as the result of another action, but not as its primary motivation.  This would include any action where an automated system was certain, or attributed a high probability, that a person would be seriously injured or killed as a result of its primary goal or the steps taken to achieve it.  We may imagine machines allowing this to happen ‘for the greater good’, as an acceptable step towards that goal.  This should also be avoided, and grounds to shut down any AI project which allows it to happen.


  • It’s a short distance between this and lethal autonomous weapons. An AI is highly unlikely to be human in the way it thinks and acts.  Unlike humans, carbon-based lifeforms evolved over millions of years, an AI will be silicon-based and will evolve quickly, over years, months or days.  It is improbable that it will feel emotions such as guilt, empathy or love the way a human does, if it feels them at all.  If it is given the flexibility to allow human death, its idea of an atrocity may be very different to ours, and given its speed and accuracy, even the fastest reaction in stopping this type of AI may be far too late to prevent a disaster.


This is the only area where I believe a death in which an AI is involved may be forgiven – and even then, not in all circumstances.  I would describe an accidental death caused by an AI as one where, in spite of reasonable steps being taken to collect and analyse available data, an accident happened which resulted in death or injury, was believed to have only a low probability, and became unavoidable.  Here we may see this through the eyes of the Uber driverless vehicle:

  • ‘An accidental death’ – The car should never be allowed to sacrifice human life where it is aware of a significant risk (we will discuss the ‘significant risk’ shortly), opting instead to stop entirely in the safest possible manner.
  • ‘Reasonable steps’ – These should be defined through establishing a reasonable level of risk, above 0% which is tolerable to us. More on this below.
  • ‘Collect and analyse data’ – I think this is where the Uber project went wrong. Better sensors or processing hardware and software may have made this accident preventable.

An AI designed to tolerate only accidental death should not set the preservation of human life as its primary objective.  Clearly defined final objectives for AI seem to have unintended results – with a Matrix-like human farm being one way to maximise human life while sacrificing pleasure.  Maximising pleasure could similarly result in the AI dedicating its resources to generating a new drug to make humans permanently happy, or putting us all in an ideal simulated world.  Indirect normativity (Nick Bostrom, Superintelligence) seems a more appealing proposition, instead teaching an AI to:

  1. Drive a car to its destination
  2. Take any reasonable steps to avoid human harm or death while fulfilling step 1
  3. Do the intended meaning of this statement

But what if a driverless car finds itself in a situation where death is unavoidable, where it’s just choosing between one person or another dying?

If an AI designed only to tolerate accidental death finds itself in a situation where its only decision is between one life and another, even if inaction would result in a death, it may still be compliant with this rule.  We should instead measure this type of AI from an earlier moment: the actions leading up to this situation should have been taken to minimise the risk of death.  Once new data becomes available which shows that no options remain which avoid death or injury, the accident has already happened, and a separate decision-making system may come into force to decide what action to take.

A reasonable level of risk?

To enable AI to happen at all, we need to establish a reasonable level of risk for these systems.  A Bayesian AI would always attribute a greater than 0% chance to anything happening, including harm or death of a human, in any action or inaction that it takes.  For example, if a robot were to make contact with a human while holding no weapons, travelling slowly and covered in bubble wrap, the chance of it transferring bacteria or viruses which have a small chance of causing harm is still higher than 0%.  If we set the risk appetite for our AI at 0%, its only option will be to shut itself down as quickly and safely as possible.  We must have a minimum accepted level of AI-caused harm to make progress, and I think we can reach some consensus on this.
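The decision rule implied here can be sketched in a few lines of code. This is a purely hypothetical illustration – the threshold value, action names and probability estimates are all assumptions for the sake of the example, not anything a real autonomous vehicle uses:

```python
# Hypothetical sketch: an agent that only acts when the estimated
# probability of causing harm is below an agreed, non-zero threshold.
RISK_THRESHOLD = 1e-6  # illustrative value; the argument above says it must be > 0

def choose_action(candidate_actions):
    """candidate_actions: list of (name, estimated_harm_probability) pairs."""
    # Keep only actions whose estimated harm probability is tolerable.
    safe = [(name, p) for name, p in candidate_actions if p < RISK_THRESHOLD]
    if not safe:
        # With a 0% risk appetite every action fails this test, because a
        # Bayesian estimator never assigns exactly zero probability of harm.
        return "safe_stop"
    # Among tolerable actions, pick the least risky.
    return min(safe, key=lambda item: item[1])[0]

print(choose_action([("proceed", 2e-7), ("overtake", 5e-5)]))  # proceed
print(choose_action([("proceed", 1e-3), ("overtake", 5e-2)]))  # safe_stop
```

Note that with the threshold set to exactly zero, the `safe` list is always empty and the agent can only ever stop – which is the essay’s point.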

With the example of the Uber self-driving car, we may assume human and machine would have caused the same number of deaths.  The machine was unable to avoid the death of a human, and if the evidence presented is an accurate reflection of the circumstances, it seems likely a human too would have been unable to avoid it.  The reaction to this has been strongly anti-automation, so we can tell that a 1-to-1 exchange between human and machine deaths is not the right level – that we would prefer a human to be responsible for a death if the number of casualties is not reduced by using a machine.

If we change this ratio to 2-to-1, things begin to look different.  If we could halve the number of deaths caused by driving, or any other human activity, automation begins to look a lot more appealing to a far greater number of people.  If we extend this to a 99% reduction in deaths and injuries, the vast majority of people will lean towards AI over human actors.

Where this number stands exactly, I am not certain.  It’s also unlikely that the ratio would remain static, as growing trust in AI may move it in either direction.  Indirect normativity may be our best option again in this instance, accounting for the moving standard to which we would hold it.

Setting a tolerance rate for error at 0% for anything is asking for failure.  No matter how safe or foolproof a plan may seem, there will always be at least a tiny possibility of error.  AI can’t solve this, but it might be able to do a better job than us.  If our goal is to protect and improve human life… maybe AI can help us along the way.


Why it’s time to ditch the duvet with a great employee experience

by Claudia Quinton, Head of Workplace Transformation

I’m not one to ‘throw a sickie’. I enjoy getting out of bed and heading in to work each day. But can you say the same about your employees? Or are your people frustrated and demoralised by the high level of process and bureaucratic hoops they have to jump through to complete even the simplest of tasks, such as booking leave or submitting expenses?

I’ve been turning the spotlight on employee experience in a series of papers and blogs recently. And this idea of a frustrated workforce unwilling to get out of bed in the morning is something I discuss in my latest paper*. That’s because, whether it shows up as high employee attrition or too much absenteeism, a poor workplace experience can have a hugely detrimental impact on the business.

Counting the cost of employee attrition

For example, the costs of searching for new employees, reviews, screening, interviews, offers, negotiation, on-boarding, co-worker networking and the inevitable learning curve can quickly mount up. One estimate suggests that UK organisations alone are losing £340bn from employee attrition. So, there is clearly a need to retain talent for as long as possible.

Pivotal to this is providing employees with a positive experience in the workplace. That means enabling them to engage seamlessly with HR and business processes, through the channel of their choice, from anywhere, at any time. It’s about empowering employees to self-serve and manage basic requirements themselves; and enabling managers to spend less time chasing up resourcing approvals and more time managing their teams and getting new joiners embedded in the business. How? With robotic process automation speeding up talent onboarding and handling labour-intensive tasks.

This latter capability doesn’t have to come at a huge cost to the business. Simply by adding a digital tool on top of an existing process, it is possible to transform a laborious admin task, quickly and at relatively low risk.

Happy employees equal happy customers

In my paper I quote Sopra Steria’s Engaging Generation Me brochure, which states: “Crucially, the workplace that empowers its people with real-time data services, intuitive easy-to-access employee services and automated self-help will be better placed to achieve broader strategic customer experience goals.” I use this quote to illustrate how a positive employee experience has wide-ranging strategic ramifications. In this instance, I suggest that a happy, empowered employee is a more productive employee, one more committed to delivering customer-service excellence.

This is nothing new in the world outside the workplace. Tech giant Apple has been giving customers an intuitive, personalised experience for many years. It is constantly bringing out new products and apps that work around people’s lifestyles. Now it’s time for HR to follow suit. Working with other leaders across the business, including IT and finance, HR needs to re-define how people engage with the organisation, using intuitive, tailored employee services that make people want to ‘ditch the duvet’ and come into work.

For more on this, read my opinion paper ‘A transformation business case that writes itself’.

“I am your Father”… my experience with Shared Parental Leave

By Dave Parslew, Senior Internal Recruiter.

As well as looking after internal recruitment, Dave is a first-time dad. In this post he talks about anticipating the birth of his first born, the decision to use Shared Parental Leave and why more men should be taking it up.

Shared Parental Leave (SPL) for me seems like a fantastic opportunity to spend some quality time with my first born, Sam. During the pregnancy, my wife Anna and I decided that SPL was definitely for us, and when it came to ‘D’ day, we agreed that she would take the first 9 months and I would take the last 3. I always joked that May to August would be a perfect 3 months in the sun for me, though now, with the birth of my child and all the work of looking after a new baby, my views have of course changed!

Baby and Dave

The resemblance is already uncanny for Dave and baby Sam.

Quite a few years ago, I assumed that when I did actually have kids, I would have to go back to work after my 2 weeks of paternity leave and hand the all-important first year of quality time to my wife. I thought that was the only option, and at the time it was! However, things have changed, and the question I asked myself and all the other eligible fathers out there is: why wouldn’t you?!

Around 285,000 couples in the UK are eligible every year for shared parental leave, but take-up “could be as low as 2%”, according to the Department for Business. Nearly three years after it was introduced, around half of the general public are still unaware the option exists. Experts say that, as well as a lack of understanding of what is on offer, cultural barriers and financial penalties are deterring some parents from sharing parental leave.

There seems to be a lot of “research undertaken by trusted organisations” about SPL out there but I say don’t just rely on the headlines and newspaper write-ups; delve a little deeper into the detail and look at the research for yourself!

Research shows the poor take-up of the policy is due to concerns about lack of financial support for fathers. I say, if you manage your finances correctly and are prepared for the eventuality that you might be slightly out of pocket for a few months of your life, you will get to spend some amazing time with your children (time you will NEVER get back) so just go for it!

However, the main problem with take-up remains – many men just don’t want it because they’re scared it would impact their careers. It’s not that men’s attitudes are anti-childcare these days. It’s more that this fear outweighs fathers’ enthusiasm to have a stint at being a stay-at-home dad, or the desire to exercise their legal right. It’s the dated belief that a man better serves his family by sticking to a traditional career path.

In my opinion, if you care that much about money, then perhaps you shouldn’t have kids in the first place as they WILL most definitely suck you dry of most of your finances. However if you see it as I see it then everyone’s a winner!

This is a government-funded scheme, remember, so in my case the company I work for (Sopra Steria) will have to cover my work for 3 months. They have been very accommodating about it and, given the lack of uptake so far, in some ways educated by it.

Fair enough, I won’t get paid for 3 months, but there is an option to plan some ‘Staying in Touch’ days with HR (paid in full for the day), and I still accrue holiday while I am off, along with Bank Holidays.

Hopefully my example will encourage others to do the same. To top it all off, of course, I will have an awesome few months with my new son. I am looking forward to this immensely and the bottom line is, “You only live once”!

Below are the key points about SPL; learn more about the initiative here.

What is Shared Parental Leave?

  • Shared parental leave (SPL) was introduced in April 2015
  • It allows parents to share 50 weeks of leave and 37 weeks of pay after they have a baby
  • Parents can take time off separately or can be at home together for up to six months
  • SPL is paid at £140.98 per week or 90% of your average earnings, whichever is lower


The Geek Shall Inherit

AI has the potential to be the greatest invention ever made for humanity.  And it should benefit all humanity equally, but instead we’re heading towards a world where a particular group, the geeks, will benefit most from AI. AI is fundamentally more likely to favour the values of its designers, and whether we train our AI on a data set gathered from humans, or on purely simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer – humans are already riddled with bias.  Be it confirmation, selection or in-group bias, we constantly create unfair systems and draw inaccurate conclusions which can have a devastating effect on society.  I think AI can be a great step in the right direction, even if it’s not perfect.  AI can analyse dramatically more data than a human and, by doing so, generate a more rounded point of view.  More rounded, however, is not completely rounded, and this problem is significant for any AI which can carry out a task orders of magnitude faster than a human.

To retain our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces.  For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1000x faster, we end up with 100x more injustice in the world.  To retain today’s levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
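The arithmetic in that example is worth making explicit. A quick sketch, using the illustrative numbers from the paragraph above:

```python
# Worked example of the scaling argument above.
human_error_rate = 1.0      # normalise the human unethical-decision rate to 1
speedup = 1000              # the automated system is 1000x faster

# A system making only 10% as many unethical decisions per transaction,
# but 1000x more transactions, produces more injustice overall:
injustice = 0.10 * speedup  # total unethical decisions relative to a human
print(injustice)            # 100.0 -> 100x more injustice

# To merely match today's level, the per-transaction rate must fall
# in proportion to the speed-up:
required_rate = human_error_rate / speedup
print(required_rate)        # 0.001 -> 0.1% of the human rate
```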

For the sake of rhyme, I’ve titled this blog ‘the geek shall inherit’.  I am myself using a stereotype, but I want to identify the people that are building AI today.  Though I firmly support the idea that anyone can and should be involved in building these systems, that’s not a reflection of our world today.  Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not.  This is wrong, damaging and needs remedying – but that’s a problem to tackle in a different blog!  Let us simply accept in this instance that the people building AI tend to be a certain type of person – geeks.  And if we are to stereotype a geek, we’re thinking of someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation, the problem is at its greatest.  Though we may be using a dataset gathered from a more diverse group of people, there’s still going to be selection bias in that data, as well as bias directly from the developers if they are tasked with annotating it.  Whether intentionally or not, humans are always going to favour things more like themselves and code nepotism into a system, meaning the system is going to favour geeky men like its developers more than any other group.

In 2014 the venture capital fund ‘Deep Knowledge Ventures’ developed an algorithm called ‘VITAL’ to join its board and vote on investments for the firm.  VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015).  Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference towards algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning.  This is the method employed by Google’s DeepMind in the AlphaZero project.  The significant leap between AlphaGo and AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world.  By doing this, the system can make plays which seem alien to human players, as it’s not constrained by human knowledge of the game.  The exception here is ‘move 37’ against Lee Sedol, played by AlphaGo Lee prior to the application of deep reinforcement learning.  This move was seen as a stroke of creative brilliance that no human would ever have played, even though the system was trained on human data.

Humans also use proxies to determine success in these games.  An example of this is AlphaZero playing chess.  Where humans use a points system on pieces as a proxy for understanding their performance in a game, AlphaZero doesn’t care about its score.  It’ll sacrifice valuable pieces for cheap ones, even when moves which appear more beneficial are available, because it cares only about winning.  And win it does, if only by a narrow margin.

So where is the bias in this system?  Though the system may be training in a simulated world, two areas for bias remain.  For one, the layers of the artificial neural network are decided upon by those same biased developers.  Second, it is simulating a game designed by humans – the board and rules of Go were designed by people.  Both Go and chess, for instance, give a first-move advantage to one colour.  Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over others in life.

The same issue remains in more complex systems, however.  The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes.  It is, however, still fed the look and feel of human-designed and maintained roads, and the human-written rules of the highway code.  We might shift here from ‘the geek shall inherit’ to ‘the lawyer shall inherit’.  Less catchy, but simply making a system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity.  Terminator scenarios permitting, we should pursue the technology.  I would propose tackling this issue on two fronts.


Fix the diversity problem

This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines.  We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development.  We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can to fix it.


Reduce inequality faster than we increase speed

Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness.  AI, on the other hand, can implement biased actions much faster than us and may simply accelerate an unfair system.  If we want more equality in the world, a system must focus more heavily on equality as a metric than on speed, and ensure at the very least that it reduces inequality by as much as the process speed is increased, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we create a system 1,000x faster, we must reduce the prevalence and impact of unequal actions by at least 99.9%.

Doing this only retains our current baseline.  To make progress in this area, we need to go a step further with the reduction in inequality before increasing the speed.
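The rule behind those two examples can be stated as a formula: a system sped up by a factor S must cut its rate of unequal actions by at least 1 - 1/S just to hold total inequality at today's baseline. A small sketch:

```python
# General form of the two numbered examples above:
# required reduction = 1 - 1/S, where S is the speed-up factor.
def required_reduction(speedup):
    """Minimum fractional reduction in unequal actions needed to keep
    total inequality at today's baseline for a given speed-up factor."""
    return 1 - 1 / speedup

print(f"{required_reduction(10):.1%}")    # 90.0%
print(f"{required_reduction(1000):.1%}")  # 99.9%
```

Anything below that line accelerates injustice; anything above it is genuine progress.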

Start with the basics

Tyler is one of Sopra Steria UK’s Volunteers of the Year. As Volunteer of the Year 2017, he travelled to India to visit our international award-winning Community programmes run by our India CSR team. Read his previous write up on his volunteer work here

Yesterday took me to the new Government Girls Inter College, Hoshiyarpur in Noida, India. The school opened this academic year and has 1,270 girls on its register, all from underprivileged backgrounds. Next year the school will grow to at least 2,000, and the number is expected to be a lot higher than that. Yesterday held great significance for the girls’ school and I had the honour of being able to commemorate the day with them.

For the last eight months, Computer Science has been taught as theory. More than one thousand girls have been learning IT skills from paper. Paper! Thankfully, yesterday we were able to celebrate the opening of a new computer lab with thirty new computers donated by Sopra Steria. The occasion was expectedly joyous. There were celebrations, speeches and the all-too-necessary ribbon-cutting ceremony. A fantastic moment that meant something to every member of the school, teacher or student.

Twenty 13-year-old children filed into the room and unwrapped the last remaining plastic from the screens and keyboards of the newly installed computers. The excitement of the girls ready to use these new machines was palpable. Great, right? The next few moments were like a sucker-punch, though one I really ought to have expected. It started with a moment’s hesitation from a young girl finding the power button. Then a look of confusion from another trying to left-click a mouse. Perhaps the most basic of tasks for a child that age. The only thing was – this was the first time that any girl in that room had touched a computer, ever. And for some reason, it was as if someone had told me the sky had fallen down. Obvious when you think about it, but near unthinkable for any child in the UK today.

After a quick breath, I went and sat with two girls, Yashika and Pooja. They had opened Microsoft Word and it was great to see their teamwork as they hunted for the letters on the keyboard, while our very own Gayathri Mohan took them through their ABCs. Within a few minutes, Pooja had moved her second hand onto the keyboard as she began to type sentences. Computers are an absolute necessity in the modern working world, and in some government schools here there may be only one or two computers for several thousand children. Some do not have computer access of any kind. For such a reasonable investment, the lives of thousands of children, their families and future families can be changed completely.

Many things we take for granted are new to girls like Yashika and Pooja. I know the feeling of being passionate about tech, and I hope to continue to be able to contribute to bringing these new opportunities to them. This trip has shown me the individual lives being changed by the Sopra Steria India CSR programmes. It’s hard to fathom that 70,000 children a year are introduced to tech through these schools, and provided with free lunches, access to drinking water and toilet facilities, among many other initiatives. A big thank you to the team for guiding us round and allowing us to share in these moments.

Still making difficult decisions – the Spring Statement

In 2010 the coalition government started with the objective of eliminating the structural current deficit by 2014-15. It introduced a package of savings, a public sector pay freeze, welfare reforms and significant reductions to every department’s administration budget. There was still a desire to protect the most growth-enhancing capital spending.

The target originally set by George Osborne when he imposed austerity on public services was only achieved this year. Paul Johnson, director of the Institute for Fiscal Studies, said the deficit reduction was still ‘quite an achievement given how poor economic growth has been’.

What are the lessons of the last eight years?

As the Chancellor gives his ‘no frills’ Spring Statement this week, and prepares more far reaching plans for tax and spending through his Budget in the Autumn, it is worth drawing some conclusions on how the government eliminated the deficit and what aspects of the austerity agenda should remain:

  • The government maintained a clear and measurable fiscal target (the Chancellor has made a ‘pledge of fiscal responsibility’ to borrow no more than two per cent of national income by 2020-21) and the Office for Budget Responsibility (OBR) should continue to assess publicly whether this is likely to be achieved.
  • The departmental spending review prioritised areas with benefits to a broad sweep of society – next year’s review should promote growth (like transport and education) and fairness and social mobility (providing routes out of poverty for the poorest, improving incentives for work and tackling ‘wicked problems’ such as the increasing public health hazards of air pollution).
  • Eliminating a sizeable deficit was not a normal budget exercise and a more open and inclusive approach is required – government should consult widely, beyond departments, asking public sector workers and the public to suggest ideas, convening expert advisory groups and holding regional events to listen to people’s views.

Of course, external conditions are now favourable and the reforms introduced in 2010 (including spending controls, back office shared services and commercial reforms) have been sustainable. But the United Kingdom cannot rely on external conditions to remain as favourable as they are now, particularly as uncertainty lingers about the UK’s future relationship with the European Union and the economic costs of divergence with the EU become clear.

What needs to change? Meeting the UK’s future challenges

The squeeze on public services is showing up in higher waiting times in hospitals for emergency treatment, low satisfaction for GP services and a staggering decline in prison safety. The National Audit Office (NAO) warned that local councils are at financial breaking point. If they keep draining their reserves at the current rate, one in ten will have exhausted them in just three years’ time.

The improvement in the public finances gives the Chancellor some leeway to spend in his Spring Statement. But the expected £5bn to £10bn windfall is not going to transform the delivery of public services. It is not enough to solve the UK’s long-term fiscal challenges. For example, demographic change will demand either a significant increase in taxation or a radical change to the funding of health and pensions. There is an immediate need to put the funding of social care on a sustainable footing.

Achieving better internal efficiency is a necessary but not sufficient part of public service reform. At the same time public services must come up with innovative, less resource-intensive and more effective ways of achieving the government’s aims. In the Spring Statement, the Chancellor should provide funding and direction:

  • To move away from the traditional tools of legislation, regulation and taxation – which can be expensive to design and implement – and develop and apply lessons from behavioural science (designing policy that reflects how people really behave).
  • To renew the transparency agenda, as a way of achieving ‘better for less’ – by consistently releasing data into the public domain, individuals are able to draw their own conclusions on the way public services operate, incentivising efficiency through accountability, and stimulating innovation through ‘information marketplaces’.
  • And, where appropriate, to open public services to a range of providers competing to offer a better service, with a greater emphasis on outcome-based contracts, and joint work with the private sector to access private capital and expertise to make fuller use of core public assets in an enterprising way.

A final thought – accountability and public services

I appreciate that the third suggestion is not shared by everybody. Over the past five or six years, problems have emerged in the UK public service market, particularly in the commissioning of complex services. This came to a head with the liquidation of Carillion.

The reality is that the public are more pragmatic than the politicians. For example, sixty-four per cent of people do not think it matters who runs hospitals or GP surgeries ‘as long as everyone has access to care’ (Populus poll, January 2018).

But we still need to recognise that one of the most important differences between a private and a public service is the different, and often enhanced, level of accountability for the delivery of that service to a broader range of stakeholders. Private sector organisations that want to deliver public services have to be aware of, and work within, those boundaries.

There is an urgent need for a more transparent and robust way of measuring the quality of services provided by the public and private sector. The Chancellor should ensure the rapid implementation of Sir Michael Barber’s report into improving value in public spending.

Gender, AI and automation: How will the next part of the digital revolution affect women?

Automation and AI are already changing the way we work, and there is no shortage of concern expressed in the media, businesses, governments, labour organisations and many others about the resulting displacement of millions of jobs over the next decade.

However, much of the focus has been at the macro level, and on the medium and long-term effects of automation and AI.  Meanwhile, the revolution is already well underway, and its impact on jobs is being felt now by a growing number of people.

The wave of automation and AI that is happening now is most readily seen in call centres, among customer services, and in administrative and back-office functions.  Much of what we used to do was done by phone, talking directly to a person. We can now not only use companies’ self-service websites, but also interact with bots in chat windows and text messages. Cashiers and administrative assistants are being replaced by self-service check-outs and robot PAs. The processing of payroll and benefits, and much of finance and accounting, has also been automated, eliminating the need for many people to do the work…

…eliminating the need for many women to do the work, in many cases.

A World Economic Forum report, Towards a Reskilling Revolution, estimated that 57% of the 1.4 million jobs that will be lost to automation belong to women. This displacement is not only a problem for these women and their families, but could also have wider negative ramifications for the economy.  We know that greater economic participation by women, not less, is what the economy needs: it could contribute $250b to the UK’s GDP.

The solution, both economic and ethical, lies in reskilling our workers. Businesses and economies benefit from a more highly skilled workforce. Society is enriched by diversity and inclusion.  Individuals moving to new jobs (those that exist now and those that we haven’t yet imagined) may even be more fulfilled in work that could be more interesting and challenging.  Moreover, the WEF report suggests that many of the new jobs will come with higher pay.

But there are two things we need to bear in mind as we do the work of moving to the jobs of tomorrow:

  1. Our uniquely human skills: Humans are still better at creative problem solving and at complex interactions where sensitivity, compassion and good judgment play a role, and these skills are used all the time in the kinds of roles being displaced. In business processes, humans are still needed to identify problems before they spread too far (an automated process based on bad programming will spread a problem faster than a human-led process; speed is not always an advantage).  AI will get better at some of this, but the most successful operators in the digital world of the future will be the ones who put people at the centre of their digital strategies.  Valuing the (too-long undervalued) so-called soft skills that these workers are adept at, and making sure these are built into the jobs of the future, will pay dividends down the road.
  2. Employment reimagined: To keep these women in the workforce, contributing to society and the economy, we must expand the number of roles that offer part-time and flexible working options. One reason there are so many women doing these jobs is because they are offered these options. And with women still taking on most of the domestic and caring responsibilities, the need for a range of working arrangements is not going away anytime soon.  The digital revolution is already opening discussion of different models of working, with everything from providing people with a Universal Basic Income, to the in-built flexibility of the Gig Economy, but simpler solutions on smaller scales can be embraced immediately.  For example, Sopra Steria offers a range of flexible working arrangements and is making full use of digital technology to support remote and home working options.

Women are not the only people affected by the current wave of automation and AI technology.  Many of the jobs discussed here are also undertaken by people in developing countries, and in those where wages are lower, such as India and Poland.  The jobs that economies in those countries have relied on, at least in part, may not be around much longer in their current form.

Furthermore, automation and AI will impact a much wider range of people in the longer term.  For example, men will be disproportionately impacted by the introduction of driverless cars and lorries, because most taxi and lorry drivers are men.

Today, on International Women’s Day 2018, though, I encourage all of us in technology to tune in to the immediate and short-term impacts and respond with innovative actions, perhaps drawing inspiration from previous technological disruptions.  Let’s use the encouraging increased urgency – as seen through movements such as #TimesUp and #MeToo – to address gender inequality while also working on technology-driven changes to employment.  Let us speed up our efforts to offer more jobs with unconventional working arrangements, and to prepare our workers for the jobs of tomorrow.  Tomorrow is not that far off, after all.

Jen Rodvold is Head of Sustainability & Social Value Solutions.  She founded the Sopra Steria UK Women’s Network in 2017 and is its Chair.  She has been a member of the techUK Women in Tech Council and the APPG for Women & Enterprise.  She recently led the development of the techUK paper on the importance of Returners Programmes to business, which can be found here.  Jen is interested in how business and technology can be used as forces for good.