Vern Davis wins Business Leader of the Year

Hilton’s Park Lane grand ballroom was the venue for last night’s Ex-Forces in Business Awards, and we are delighted to announce that Vern Davis, Managing Director of the UK Aerospace, Defence and Security (ADS) sector, was named Business Leader of the Year. The awards are the largest celebration and recognition of ex-military personnel in the UK workforce, and of the employers that support current and former members of the British Armed Forces. In line with our pledge to the Armed Forces Covenant, Sopra Steria actively seeks to provide career opportunities for this community and has a number of veterans working in all areas of our business. Vern was recognised for his transformation of the ADS business since joining in 2017.

Vern commented: ‘I am delighted to have been recognised in these awards. Sopra Steria has a fantastic culture that really values different backgrounds and experiences, including those of the armed forces community. It is an honour to have been named Business Leader of the Year and I thank my team for all their hard work and support throughout our transformation of the ADS business.’

Bringing together the UK Space Sector

In April, we brought together space leaders and experts across the private, public and academic worlds for ‘Event Horizon – a UK Space Symposium’.

Partnering with The National Security & Resilience Consortium (NS&RC) and Fieldfisher, we held a fantastic day of debate, knowledge sharing and networking – the first steps in building a richer and more dynamic collaborative ecosystem across the space sector.

Throughout the day, 60 attendees from SMEs, large enterprises, academia and not-for-profits listened to keynote talks that stimulated much discussion and idea sharing. Doug Millard, Deputy Curator of the Science Museum, spoke about the future of space, while Alan Brunstrom, from the European Space Agency, spoke on the influence of space on the economy.

Stuart Martin, CEO of the Satellite Applications Catapult, stressed the importance of the innovative technologies SMEs within the space sector are developing. These technologies may lead to the creation of new markets as they address everyday problems in the future.

Vern Davis, Managing Director of Aerospace, Defence and Security for Sopra Steria commented, “We are committed to enabling SMEs in the Space sector. We want to form the kind of relationships where SMEs can focus on growing their business knowing they have a trusted partner that will be with them long term.”

Event Horizon was an extremely positive day for the sector. As a collective, we are committed to shaping the future of the consortium and the UK space sector, together.

If you work in or contribute to the UK space sector and would be interested in joining our consortium and future symposiums, please complete this form.

‘Ethics Guidelines for Trustworthy AI’ Summarised

On the 8th of April 2019, the EU’s High-Level Expert Group (HLEG) on AI released their Ethics Guidelines for Trustworthy AI, building on over 500 recommendations received on the ‘Draft Ethics Guidelines’ released in December 2018.

In this blog, I want to help you understand what this document is, why it matters to us and how we may make use of it.

What is it?

The ‘Ethics Guidelines for Trustworthy AI’ is an advisory document, describing the components of ‘Trustworthy AI,’ a brand for AI which is lawful, ethical and robust.  As the title suggests, this document focuses on the ethical aspect of Trustworthy AI.  It does make some reference to the requirements for robust AI and, to a lesser extent, the law that surrounds AI, but clearly states that it is not a policy document and does not attempt to offer advice on legal compliance for AI.  The HLEG is tasked separately with creating a second document advising the European Commission on AI policy, due later in 2019.

The document is split into three chapters:

  1. Ethical principles, the related values and their application to AI
  2. Seven requirements that Trustworthy AI should meet
  3. A non-exhaustive assessment list to operationalise Trustworthy AI

This structure begins with the most abstract and ends with concrete information.  There is also an opportunity to pilot and feedback on the assessment list to help shape a future version of this document due in 2020.  Register your interest here.

Why does this matter?

I am writing this article as a UK national, working for a business in London.  Considering Brexit and the UK’s (potential) withdrawal from the European Union it’s fair to ask whether this document is still relevant to us.  TL;DR, yes. But why?

Trustworthy AI must display three characteristics, being lawful, ethical and robust.

Ethical AI extends beyond the law and as such is no more legally enforceable in EU member states than in independent ones.  The ethical component of Trustworthy AI means that the system is aligned with our values, and our values in the UK are in turn closely aligned with those of the rest of Europe as a result of our physical proximity and decades of cultural sharing. The same may be true to an extent of the USA, which shares much of its film, music and literature with Europe. The ethical values listed in this document still resonate with the British public, and this document stands as the best and most useful guide to operationalising those values.

Lawful AI isn’t the focus of this document but is an essential component of Trustworthy AI. The document refers to several EU laws like the EU Charter and the European Convention on Human Rights, but it doesn’t explicitly say that Lawful AI needs to be compliant with EU law.  Trustworthy AI could instead implement the locally relevant laws within this framework.  Arguably, compliance with EU law is the most sensible route to take, given that 45% of the UK’s trade in Q4 2018 was with the EU[1], according to statistics from the ONS.  If people and businesses in EU member states only want to buy Trustworthy AI, compliant with EU law, then compliance becomes an economic incentive rather than a legal requirement.  We can see the same pattern in the USA, with businesses building services compliant with GDPR, a law they do not have to follow, to capture a market that matters to them.

The final component, Robust AI, describes platforms which continue to operate in the desired way across the broad spectrum of situations they could face throughout their operational lives, including adversarial attacks.  If we agree in principle with the lawful and ethical components of Trustworthy AI, and accept that unpredictable events or adversarial attacks may challenge either, then the third component, Robust AI, becomes logically necessary.

 

What is Trustworthy AI?

Trustworthy AI is built from three components: it’s lawful, ethical and robust.

[Diagram: the three components of Trustworthy AI – lawful, ethical and robust]

Lawful AI may not be ethical where our values extend beyond policy.  Ethical AI may not be robust where, even with the best intentions, undesirable actions result unexpectedly or as the result of an adversarial attack. Robust AI may be neither ethical nor legal, for instance, if it were designed to discriminate, robustness would only ensure that it discriminates reliably, and resists attempts to take it down.

This document focuses on the ethical aspect of Trustworthy AI, and so shall I in this summary.

What is Ethical AI?

The document outlines four ethical principles in Chapter I (p.12-13), which are:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

These four principles are expanded in Chapter II, Realising Trustworthy AI, translating them into seven requirements that also make some reference to the robust and lawful aspects. They are:

  1. Human agency and oversight

AI systems have the potential to support or erode fundamental rights.  Where there is a risk of erosion, a ‘fundamental rights impact assessment’ should be carried out before development, identifying whether risks can be mitigated and determining whether the risk is justifiable given any benefits. Human agency must be preserved, allowing people to make ‘informed autonomous decisions regarding AI system [free from] various forms of unfair manipulation, deception, herding and conditioning’ (p.16).   For greater safety and protection of autonomy, human oversight is required, and may be present at every step of the process (HITL), at the design cycle (HOTL) or in a holistic overall position (HIC), allowing the human to override the system, establish levels of discretion, and enable public oversight (p.16).

  2. Technical robustness and safety

Fulfilling the requirements for robust AI, a system must have resilience to attack and security, taking account of additional requirements unique to AI systems that extend beyond traditional software, considering hardware and software vulnerabilities, dual-use, misuse and abuse of systems. It must satisfy a level of accuracy appropriate to its implementation and criticality, assessing the risks from incorrect judgements, the system’s ability to make correct judgements and its ability to indicate how likely errors are. Reliability and reproducibility are required to ensure the system performs as expected across a broad range of situations and inputs, with repeatable behaviour to enable greater scientific and policy oversight and interrogation.

  3. Privacy and data governance

This links to the ‘prevention of harm’ ethical principle and the fundamental right of privacy.  Privacy and data protection require that both aspects are protected throughout the whole system lifecycle, including data provided by the user and additional data generated through their continued interactions with the system. None of this data may be used unlawfully or to unfairly discriminate.  Both in-house developed and procured AI systems must consider the quality and integrity of data prior to training, as ‘it may contain socially constructed biases, inaccuracies, errors and mistakes’ (p.17) or malicious data that may influence its behaviour. Processes must be implemented to provide individuals with access to data concerning them, administered only by people with the correct qualifications and competence.

  4. Transparency

The system must be documented to enable traceability, for instance identifying the reasons for a decision the system made, with a level of explainability, using the right timing and tone to communicate effectively with the relevant human stakeholder.  The system should employ clear communication to inform humans when they are interacting with an AI rather than a human, and allow them to opt for a human interaction when required by fundamental rights.

  5. Diversity, non-discrimination and fairness

Avoidance of unfair bias is essential, as AI has the potential to introduce new unfair biases and amplify existing historical ones, leading to prejudice and discrimination.  Trustworthy AI instead advocates accessible and universal design, building and implementing systems which are inclusive of all regardless of ‘age, gender, abilities or characteristics’ (p.18), mindful that one size does not fit all, and that particular attention may need to be given to vulnerable persons.  This is best achieved through regular stakeholder participation, including all those who may directly or indirectly interact with the system.

  6. Societal and environmental wellbeing

When considered in wider society, sustainable and environmentally friendly AI may offer a solution to urgent global concerns such as reaching the UN’s Sustainable Development Goals.  It may also have a social impact, and should ‘enhance social skills’ while taking care to ensure it does not cause them to deteriorate (p.19).  Its impact on society and democracy should also be considered where it has the potential to influence ‘institutions, democracy and society at large’ (p.19).

  7. Accountability

‘Algorithms, data and design processes’ (p.19) must be designed for internal and external auditability, without needing to give away intellectual property or business models, but rather to enhance trustworthiness.  Minimisation and reporting of negative impacts works proportionally to the risks associated with the AI system, documenting and reporting the potential negative impacts of AI systems (p.20) and protecting those who report legitimate concerns.  Where the two above points conflict, trade-offs may be made, based on evidence and logical reasoning, and where there is no acceptable trade-off the AI system should not be used. When a negative impact occurs, adequate redress should be provided to the individual.

Assessing Trustworthy AI

Moving to the most concrete guidance, Chapter III offers an assessment list for realising Trustworthy AI. This is a non-exhaustive list of questions, some of which will not be appropriate to the context of certain AI applications, while other questions need to be extended for the same reason. None of the questions in the list should be answered by gut instinct, but rather through substantive evidence-based research and logical reasoning.

The guidelines expect there will be moments of tension between ethical principles, where trade-offs need to be made, for instance where predictive policing may, on the one hand, keep people from harm, but on the other infringe on privacy and liberty. The same evidence-based reasoning is required at these points to understand where the benefits outweigh the costs and where it is not appropriate to employ the AI system.

In summary

This is not the end of the HLEG’s project.  We can expect policy recommendations later in 2019 to emerge from the same group which will likely give us a strong indication for the future requirements for lawful AI, and we will also see a new iteration on the assessment framework for Trustworthy AI in 2020.

This document represents the most comprehensive and concrete guideline yet for building Ethical AI, expanding on what this means by complementing it with the overlapping lawful and robustness aspects.  Its usefulness extends beyond nations bound by EU law: it summarises ethical values shared by nations outside the European Union, and offers a framework into which location-specific laws can be switched where necessary.

[1] Source: ONS – total UK exports £165,752m, of which £74,568m to the EU – 44.98% (rounded to 45%) of UK trade is with the EU.
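The footnote’s percentage can be reproduced in a few lines of Python, using the ONS figures quoted above (a quick sketch for checking the arithmetic, not part of the ONS release itself):

```python
# Reproduce the footnote's calculation from the quoted ONS figures.
total_uk_exports_m = 165_752  # total UK exports, Q4 2018, £ millions
exports_to_eu_m = 74_568      # UK exports to the EU, Q4 2018, £ millions

eu_share_pct = exports_to_eu_m / total_uk_exports_m * 100
print(f"EU share of UK exports: {eu_share_pct:.2f}%")  # close to the ~45% quoted
```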

Sopra Steria’s Vern Davis and Mohammed Ahmed finalists in the British Ex-Forces in Business Awards.

Sopra Steria is delighted to announce that two colleagues have been named finalists in the British Ex-Forces in Business Awards. The awards celebrate the outstanding business achievements of service leavers, demonstrating transferable skills gained in the military. This year, the awards attracted over 400 nominations across 18 categories.

Vern Davis, Managing Director of the Aerospace, Defence and Security sector, is a finalist in the Business Leader of the Year category. Vern started his career in 1990 as an officer in the British Army. This role took him on operational tours across Northern Ireland, Bosnia and Iraq while building his skillset in operational communications, SATCOM, systems training, operational planning, real estate management and budget control. Today, Vern helps organisations in their digital transformation journeys, driven by delivering exceptional customer service. His wealth of knowledge, experience and expertise ensures Sopra Steria’s customers receive bespoke services that fit their needs as well as the best return on investment.

Mohammed Ahmed recently retired from the Royal Air Force as a Wing Commander. During his military career he specialised as an Aero Systems and Communications Electronics engineer and, for operational tours during the second Gulf War, was awarded the MBE by Her Majesty the Queen. In August 2018 he joined Sopra Steria as Head of the Acquisition Support Partner for MOD Corsham. In his new role, Mohammed has run a profitable multi-million-pound programme and a team of over 60 staff for Sopra Steria. Within weeks he achieved the highest level of customer satisfaction and a perfect 100% NPS score. Mohammed is a finalist for the Service Leaver of the Year award.

Sopra Steria is committed to supporting the Armed Forces community and demonstrates that through our covenant pledge. We are delighted to also be sponsoring these awards and the category of Innovator of the Year.

Durham Constabulary chooses STORM Command and Control to deliver effective and efficient public safety services

Sopra Steria is proud to announce the recent signing of a three-year contract to provide its STORM Command and Control system to Durham Constabulary. STORM enables Durham Constabulary to enhance public service delivery and schedule resources more efficiently.

Sopra Steria and Durham Constabulary have a strong partnership, and this contract extends that to a total of 16 years. During this time, Durham Constabulary have been instrumental in providing user feedback to inform product developments as part of the STORM User Group. The User Group, comprising members from 26 forces, meets biannually to discuss product developments which feed into the Sopra Steria roadmap.

Chief Inspector Steve Long, Head of Force Control Room commented: ‘Durham Constabulary has worked with Sopra Steria for a significant period of time and we have established a good working relationship. It is essential that Durham Constabulary continue to deliver an efficient and effective service to the public and we will continue to build upon our partnership with Sopra Steria to achieve that aim.’

Muz Janoowalla, Head of Emergency Services at Sopra Steria, said: ‘We are proud of our long relationship with Durham Constabulary and delighted to continue working with the force to keep the people of County Durham and Darlington safe.’

My Journey as a Woman in STEM

International Women’s Day has been and gone, but it’s important to think about what the day means. It’s a celebration of women who more often than not don’t get enough recognition for doing what they do – but it isn’t solely about celebrating. It’s also about making efforts to break down gender stereotypes and norms – and this is especially true for women in STEM industries.

While more women are working in STEM than ever before, they still make up only around one quarter of the STEM workforce in the UK. Even as the UK faces a severe STEM skills shortage – with a recent study forecasting more than 600,000 vacancies in STEM by 2023 – many women still struggle to enter and stay in STEM-related industries. As women in STEM, it’s important that we share our stories so that those looking to follow the same path know that it is entirely possible and there is always a way in.

So, what about me? I’m currently on a graduate training programme as a digital consultant at Sopra Steria – one of the biggest tech consultancies in the world. I get to learn my job in a very practical, hands-on way. More specifically, my job includes working directly with clients while simultaneously getting involved with our technology teams to figure out how best we can help those clients.

As for my route into STEM, I initially studied Politics and Sociology at Cardiff University – not your traditional STEM subjects. During my studies, however, I took a couple of modules centred on technology and internet governance, and from that I knew exactly what I wanted to pursue. A couple of years and a nerve-racking assessment centre later, I chose Sopra Steria, beginning my career in digital technologies at a standout organisation heralded for its innovative solutions and expertise, with both female and male role models.

If you’re looking to get into STEM but perhaps haven’t previously studied a STEM subject, don’t be disheartened. It’s important to remember that there are a number of routes you can follow. Looking to the future, I want to further develop my knowledge of technology and identify the area I want to specialise in, and I believe I am in the right place to do so.

Of course, I am at the beginning of my journey, but at Sopra Steria I feel wholly comfortable and proud to be a woman in STEM, with a plethora of colleagues who are passionate about what they do to look up to. It is truly an exciting time to be a woman entering the sector. There has never been a better time to put myself out there and try to make as much of a difference as I can. But it is important to remember that while we have come so far as an industry, there is a long road ahead to true equality, and we cannot take our foot off the pedal.

By Lauren Boys – Junior Consultant

Sopra Steria is proud to announce our founding partnership of the inaugural World Class Policing Awards taking place this Autumn.

The awards will celebrate and acknowledge the best in all aspects of 21st-century policing, reflecting that effective modern-day policing requires partnership and collaboration: within teams of officers and staff; between forces; in multi-agency operations; across the wider public sector; and with the supplier community and beyond.

Bernard Rix, Founder of the World Class Policing Awards, said: “We are delighted to have Sopra Steria on board as a Founder Sponsor and are very much looking forward to working with them to develop and deliver an outstanding World Class Policing Awards event in November 2019.”

Muz Janoowalla, Head of Emergency Services at Sopra Steria, said: “As a long term partner of policing, both nationally and internationally, Sopra Steria are proud to be a founding sponsor of the awards, and look forward to recognising the very best in UK and overseas policing.”

More information on the awards and how to nominate can be found on the World Class Policing website.