On the 8th of April 2019, the EU’s High-Level Expert Group (HLEG) on AI released its Ethics Guidelines for Trustworthy AI, building on over 500 recommendations received on the ‘Draft Ethics Guidelines’ released in December 2018.
In this blog, I want to help you understand what this document is, why it matters to us and how we may make use of it.
What is it?
The ‘Ethics Guidelines for Trustworthy AI’ is an advisory document, describing the components of ‘Trustworthy AI,’ a brand for AI which is lawful, ethical and robust. As the title suggests, this document focuses on the ethical aspect of Trustworthy AI. It does make some reference to the requirements for robust AI and, to a lesser extent, the law that surrounds AI, but it clearly states that it is not a policy document and does not attempt to offer advice on legal compliance for AI. The HLEG is tasked separately with creating a second document advising the European Commission on AI policy, due later in 2019.
The document is split into three chapters:
- Ethical principles, the related values and their application to AI
- Seven requirements that Trustworthy AI should meet
- A non-exhaustive assessment list to operationalise Trustworthy AI
This structure begins with the most abstract and ends with the most concrete guidance. There is also an opportunity to pilot and give feedback on the assessment list, to help shape a future version of this document due in 2020. Register your interest here.
Why does this matter?
I am writing this article as a UK national, working for a business in London. Considering Brexit and the UK’s (potential) withdrawal from the European Union it’s fair to ask whether this document is still relevant to us. TL;DR, yes. But why?
Trustworthy AI must display three characteristics, being lawful, ethical and robust.
Ethical AI extends beyond the law and as such is no more legally enforceable for EU member states than for independent nations. The ethical component of Trustworthy AI means that the system is aligned with our values, and the UK’s values are in turn closely aligned with those of the rest of Europe as a result of physical proximity and decades of cultural exchange. The same may be true, to an extent, for the USA, which shares much of its film, music and literature with Europe. The ethical values listed in this document still resonate with the British public, and the document stands as the best and most useful guide to operationalising those values.
Lawful AI isn’t the focus of this document but is an essential component of Trustworthy AI. The document refers to several EU instruments, such as the EU Charter and the European Convention on Human Rights, but it doesn’t explicitly say that Lawful AI needs to be compliant with EU law; Trustworthy AI could instead plug locally relevant laws into this framework. Arguably, compliance with EU law is the most sensible route to take: 45% of the UK’s trade in Q4 2018 was with the EU, according to these two statistics from the ONS. If people and businesses in EU member states only want to buy Trustworthy AI that is compliant with EU law, that compliance becomes an economic incentive rather than a legal requirement. We can see the same pattern in the USA, with businesses building services compliant with GDPR, a law they are not bound by, to capture a market that matters to them.
The final component, Robust AI, describes systems which continue to operate in the desired way across the broad spectrum of situations they could face throughout their operational life, including adversarial attacks. If we agree in principle with the lawful and ethical components of Trustworthy AI, and accept that unpredictable situations or adversarial attacks may challenge either, then the third component, Robust AI, becomes logically necessary.
What is Trustworthy AI?
Trustworthy AI is built from three components: it’s lawful, ethical and robust.
Lawful AI may not be ethical where our values extend beyond policy. Ethical AI may not be robust where, even with the best intentions, undesirable actions result unexpectedly or from an adversarial attack. Robust AI may be neither ethical nor lawful: if it were designed to discriminate, for instance, robustness would only ensure that it discriminates reliably and resists attempts to take it down.
This document focuses on the ethical aspect of Trustworthy AI, and so shall I in this summary.
What is Ethical AI?
The document outlines four ethical principles in Chapter I (p.12-13), which are:
- Respect for human autonomy
- Prevention of harm
- Fairness
- Explicability
These four principles are expanded in Chapter II, Realising Trustworthy AI, translating them into seven requirements that also make some reference to the robustness and lawful aspects. They are:
- Human agency and oversight
AI systems have the potential to support or erode fundamental rights. Where there is a risk of erosion, a ‘fundamental rights impact assessment’ should be carried out before development, identifying whether risks can be mitigated and determining whether the risk is justifiable given any benefits. Human agency must be preserved, allowing people to make ‘informed autonomous decisions regarding AI system [free from] various forms of unfair manipulation, deception, herding and conditioning’ (p.16). For greater safety and protection of autonomy, human oversight is required, and may be present at every step of the process (human-in-the-loop, HITL), at the design cycle (human-on-the-loop, HOTL) or in a holistic overall position (human-in-command, HIC), allowing humans to override the system, establish levels of discretion and enable public oversight (p.16).
- Technical robustness and safety
Fulfilling the requirements for robust AI, a system must have resilience to attack and security, accounting for the additional requirements unique to AI systems that extend beyond traditional software: hardware and software vulnerabilities, dual-use, misuse and abuse of systems. It must satisfy a level of accuracy appropriate to its implementation and criticality, assessing the risks from incorrect judgements, the system’s ability to make correct judgements and its ability to indicate how likely errors are. Reliability and reproducibility are required to ensure the system performs as expected across a broad range of situations and inputs, with repeatable behaviour enabling greater scientific and policy oversight and interrogation.
- Privacy and data governance
This links to the ‘prevention of harm’ ethical principle and the fundamental right to privacy. Privacy and data protection require that both are protected throughout the whole system lifecycle, including data provided by the user and additional data generated through their continued interactions with the system. None of this data may be used unlawfully or to unfairly discriminate. Both in-house developed and procured AI systems must consider the quality and integrity of data prior to training, as ‘it may contain socially constructed biases, inaccuracies, errors and mistakes’ (p.17) or malicious data that may influence the system’s behaviour. Processes must be implemented to provide individuals access to data concerning them, administered only by people with the correct qualifications and competence.
- Transparency
The system must be documented to enable traceability, for instance identifying the reasons for a decision the system made, and offer a level of explainability, using the right timing and tone to communicate effectively with the relevant human stakeholder. The system should employ clear communication to inform humans when they are interacting with an AI rather than a human, and allow them to opt for a human interaction when required by fundamental rights.
- Diversity, non-discrimination and fairness
Avoidance of unfair bias is essential, as AI has the potential to introduce new unfair biases and amplify existing historical ones, leading to prejudice and discrimination. Trustworthy AI instead advocates accessible and universal design, building and implementing systems which are inclusive of all regardless of ‘age, gender, abilities or characteristics’ (p.18), mindful that one size does not fit all and that particular attention may need to be given to vulnerable persons. This is best achieved through regular stakeholder participation, including all those who may directly or indirectly interact with the system.
- Societal and environmental wellbeing
When considered in wider society, sustainable and environmentally friendly AI may offer a solution to urgent global concerns, such as reaching the UN’s Sustainable Development Goals. It may also have a social impact and should ‘enhance social skills’, while taking care to ensure it does not cause them to deteriorate (p.19). Its impact on society and democracy should also be considered where it has the potential to influence ‘institutions, democracy and society at large’ (p.19).
- Accountability
‘Algorithms, data and design processes’ (p.19) must be designed for internal and external auditability, which need not give away IP or the business model, but rather enhances trustworthiness. Minimisation and reporting of negative impacts works proportionally to the risks associated with the AI system, documenting and reporting the potential negative impacts of AI systems (p.20) and protecting those who report legitimate concerns. Where the two above points conflict, trade-offs may be made, based on evidence and logical reasoning; where there is no acceptable trade-off, the AI system should not be used. When a negative impact occurs, adequate redress should be provided to the individual.
Assessing Trustworthy AI
Moving to the most concrete guidance, Chapter III offers an assessment list for realising Trustworthy AI. This is a non-exhaustive list of questions, some of which will not be appropriate to the context of certain AI applications, while others will need to be extended for the same reason. None of the questions should be answered by gut instinct, but rather through substantive, evidence-based research and logical reasoning.
The guidelines expect there will be moments of tension between ethical principles, where trade-offs need to be made, for instance where predictive policing may, on the one hand, keep people from harm, but on the other infringe on privacy and liberty. The same evidence-based reasoning is required at these points to understand where the benefits outweigh the costs and where it is not appropriate to employ the AI system.
This is not the end of the HLEG’s project. We can expect policy recommendations later in 2019 to emerge from the same group which will likely give us a strong indication for the future requirements for lawful AI, and we will also see a new iteration on the assessment framework for Trustworthy AI in 2020.
This document represents the most comprehensive and concrete guidance yet towards building Ethical AI, expanding on what that means by complementing it with the overlapping lawful and robustness aspects. Its usefulness extends beyond nations bound by EU law: it summarises ethical values that are shared by nations outside the European Union, and offers a framework where location-specific laws can be switched in and out where necessary.