By Josh Entsminger, Dr. Mark Esposito, Dr. Terence Tse, and Danny Goh
The increasing application of artificial intelligence across the value chain reflects the competitive advantage such technology offers enterprises. However, its meteoric rise also paves the way for greater ethical risks – which means more effective governance needs to be put in place. Here are a few propositions governments can consider when looking at the scope of the problems associated with AI.
Primum Non Nocere. First do no harm. So goes the more modern version of the Hippocratic Oath, taken by doctors despite knowing that they will, more than likely, be involved in a patient’s death. The involvement may stem from a mistaken diagnosis, exhaustion, or a variety of other influences, leading to a natural concern about how many of these mistakes could be avoided.1 AI is taking up the challenge, and shows promise, but just as with doctors, if you give AI the power of decision making along with the power of analysis, it will more than likely be involved in a patient’s death. If so, whose responsibility is it? The doctor’s? The hospital’s? The engineer’s? The firm’s?
Answers to such questions depend on how governance is arranged – whether a doctor is at the end of each AI-provided analysis, checking whether it is correct; whether the decision-making path of each AI-driven diagnosis can be followed. It is paramount to remember that current attempts to automate and reproduce intelligence are not deterministic but probabilistic, and therefore subject to the issues and experiential biases that plague all other kinds of intelligence. The same issues arise for self-driving cars, autonomous drones, and the host of intentional and incidental ways AI will be involved in life-or-death scenarios, as well as the more day-to-day risks people face. As machine-to-machine data grows in the internet of things, companies with preferential access will gain ever more insight into ever more minute aspects of behavioural patterns we ourselves might not understand – and with that comes a powerful ability to nudge behaviour or, more worryingly, to limit choices.
We can begin to see the larger picture, but governance is in the details. The risks of 99% accuracy in a hospital, in image recognition for food safety, or in text analysis for legal documents will not be the same – as such, policymakers will need more nuanced accounts of what is involved. The details of AI’s use cases in each practice are also going to change. The oversight, standards, and frameworks needed to make AI accountable in healthcare may require different conditions than AI in education, in finance, in telecom, in energy, and so on.
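To make the point concrete, consider a minimal sketch of why the same headline accuracy carries different risks in different practices. All volumes and harms below are hypothetical figures invented for illustration, not data drawn from the cases above.

```python
# A minimal sketch, with purely illustrative numbers, of why the same
# headline accuracy implies very different risks in different practices.
# All figures below are hypothetical assumptions, not real statistics.

def expected_errors(decisions_per_year: int, accuracy: float) -> float:
    """Expected number of wrong decisions at a given accuracy."""
    return decisions_per_year * (1.0 - accuracy)

# Hypothetical deployment contexts, each using a "99% accurate" system.
contexts = {
    "hospital diagnoses":       {"volume": 50_000,    "harm": "missed or false diagnosis"},
    "food-safety image checks": {"volume": 2_000_000, "harm": "bad batch passed or good batch rejected"},
    "legal document review":    {"volume": 300_000,   "harm": "missed clause or wrongly flagged contract"},
}

for name, ctx in contexts.items():
    errors = expected_errors(ctx["volume"], accuracy=0.99)
    print(f"{name}: ~{errors:,.0f} wrong decisions per year ({ctx['harm']})")
```

Even before any question of bias or appeal, the same 1% error rate translates into very different numbers of people and products affected – which is precisely why governance has to be worked out practice by practice.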
From Tech Literacy to Tech Fluency
Effective governance of AI means the burden of adjustment falls, if unequally, on all partners – on governments, on firms, on users, and on non-users. Ethical governance takes it further. New technology means new risks, meaning firms, governments, and users have to be literate enough about the technology to understand the new set of risks and responsibilities that come with it. Understanding those risks is not straightforward – not for users, governments, or even the firms employing the technology. Consider an AI employed to assess the risk of a heart attack, detecting variations in eating habits and other trends identified as important to making an effective prediction;2 or, more simply, one assessing risk by scanning your eye. Consider a service leveraging voice analysis to identify PTSD.3
The more individual the profile, the better the prediction. Consider a school that replaces teachers as test monitors with an AI system to detect cheating – a system students may in turn leverage to cheat better4 – or one that uses trends in homework grades and class attendance to estimate the probability that a student will drop out.5 Such cases place a clear burden on the designer and the firm – but the usefulness of any of these predictions depends on understanding how the decision was reached and what triggered it. Someone being informed of an increased risk of heart attack without, seemingly, having overtly changed any behaviour generates confusion; a cheating-detection system that cannot reliably tell stretching apart from looking at another paper creates further problems. But these are technical issues and have technical solutions – the problem is when the user has to change their behaviour to match the technical issues.
Ethical AI means that everyone will have to improve their tech literacy – to be able to go from the prediction or analysis we get from an AI to the response. Yet it goes further: with the insights we derive from AI about our behaviour, understanding AI implies, and demands, a better understanding of our own habits, our own behaviour, our own often unconscious trends. This understanding therefore begins not only with how we treat our behaviours, but with what others know about them – in short, we have to begin with the data.
Governance for all or for nobody at all
The first step to good AI governance is being honest about whether or not the data set represents what we want the AI to understand and make decisions about. However, we cannot conflate AI with data – and governing data can only go so far. Data sets cover a limited range of situations, and inevitably, most AI systems will be confronted with situations they have not encountered before – the ethical issue is the framework by which decisions occur, and good data cannot secure that kind of ethical behaviour by itself.6
Generically, we can train an AI to make better decisions, as with GoodAI – but the issue lies not simply in the algorithm, but in the choices about which kinds of data sets to use, the design of the algorithm, and the intended function of that AI in shaping decision making – in short, its ecology of use. Even at 99% accuracy, we will need a system of governance to structure appeals – in fact, under such conditions, we will need it even more. Ethical governance is not deterministic. We may be able to find reliable ethical responses for day-to-day situations, but the test will be in the inevitable uncertainty to which decision makers have to respond, and respond appropriately. Our lives are already ruled by probabilistic assumptions intended to drive behaviour. Now we need to ask, and answer honestly, how much of our lives we are willing to have shaped by algorithms we do not understand. More importantly, who should be watching and maintaining the algorithms to flag when they have made a bad decision, or an intentionally manipulative one?
Good approaches to governance do not begin from one-size-fits-all. Governance begins with the concepts by which we determine what is relevant and irrelevant, appropriate or inappropriate, good, bad, or inconsequential. When the concepts are understood, rules are derived, and the system of rules is precisely what makes up governance – what is permissible, what is impermissible, and what to do about each. There is no technology – not AI, not blockchain – that is untouched by such rules, and as such, who writes the rules will continue to matter. Blockchain in particular is often considered to be neutral – but decisions on block size and incentive structure remain strategic decisions. As AI takes hold, we need to know what counts as good governance for governments, firms, people, and societies driven and shaped by AI. In short, we need to be ethically literate.
Getting beyond transparency
When a decision is made using AI, we may not know whether or not the data was faulty – and as such, we may have a right to appeal an AI-driven decision. The first step is to be informed that a decision concerning your life was made with the help of an AI. Conventionally, these issues are to be resolved in the courtroom – but what if the courtrooms are themselves run by AI? Judges, like everyone, are biased. If an AI is trained on data sets of previously biased judicial decisions, then the parameters of successful judgements for that AI will likely include that bias. Governments will need a better record of which companies and institutions use AI, and when. Furthermore, companies may have to better understand the architecture of decisions within their organisation, and where precisely AI is placed within that architecture. However, being informed does not by itself provide enough transparency. Even after the data is understood, mistakes can still occur and biases can still arise. As such, those whose lives have been shaped by AI have a right to understand why a decision was made. Currently, while the right to be informed is feasible, the ability to explain why an algorithm made one decision instead of another is far less so.
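As a minimal sketch of how such bias propagates, consider a purely synthetic example: historical rulings penalise one group at equal merit, and a model trained on that record reproduces the disparity. The data, model, and figures below are assumptions made for illustration only, not a description of any real court or dataset.

```python
# A minimal, hypothetical sketch of how bias in historical decisions is
# reproduced by a model trained on them. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "Merit" is the factor that should drive the outcome; "group" should not.
merit = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # two demographic groups, 0 and 1

# Historical rulings: at equal merit, group 1 was treated more harshly.
bias = 1.0 * group
historical_ruling = (merit - bias + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train on the historical record, with group available as a feature.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, historical_ruling)

# The trained model reproduces the disparity at identical merit.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"favourable-outcome probability at average merit, group {g}: {p:.2f}")
```

Nothing in the training step is “wrong” in a narrow technical sense – the model has faithfully learned the historical record, which is exactly the problem.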
In the famous second match between AlphaGo and Lee Sedol, AlphaGo made an unexpected move – the issue was not simply that it surprised the AlphaGo design team, but that it surprised Lee Sedol.7 Deciding whether to train algorithms on past data or to design the rules in directly, as with AlphaGo, makes explicit the choice to follow one set of rules instead of another. Teams will have to choose what kind of ethical rules to apply, and the specifics of those rules. While public-service decisions may be open to appeals that break open the black box to understand why one decision was made instead of another, corporate governance decisions may be less so – the most common defence to expect is the appeal to intellectual property, to specialised AI as a trade secret. When AI-driven systems are under such protections, we may need to wonder whether incentives are aligned for firms to maintain an adequate understanding of the resulting decision-making system. Different kinds of practices will require different means of making AI decisions understandable – one method already proposed is counterfactual assessment, a running account of different scenarios and their flow, which can be followed – but while this enables oversight, it does not equate to obtaining the reasoning by which a decision was made.8
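One way to read such a counterfactual assessment is as a “what would have had to change” statement attached to each decision. The following is a minimal sketch under that reading; the toy scoring rule, features, and thresholds are hypothetical stand-ins for an opaque model, not the method described in the cited proposal.

```python
# A minimal, hypothetical sketch of a counterfactual assessment: for a toy
# loan-style scoring rule, find the smallest change to one input that
# would have flipped the decision. Model and figures are illustrative only.

def approve(applicant: dict) -> bool:
    """Toy decision rule standing in for an opaque model."""
    score = 0.4 * applicant["income"] / 1000 + 0.6 * applicant["credit_score"] / 100
    return score >= 7.0

def counterfactual(applicant: dict, feature: str, step: float, max_steps: int = 1000):
    """Smallest increase in `feature` that turns a rejection into an approval."""
    trial = dict(applicant)
    for i in range(1, max_steps + 1):
        trial[feature] = applicant[feature] + i * step
        if approve(trial):
            return feature, trial[feature]
    return None

applicant = {"income": 9_000, "credit_score": 550}
print("approved:", approve(applicant))
for feature, step in [("income", 100), ("credit_score", 5)]:
    result = counterfactual(applicant, feature, step)
    if result:
        print(f"would have been approved if {result[0]} had been {result[1]:,.0f}")
```

Note that this yields a statement of the form “the decision would have gone the other way if X had been Y”, which supports oversight and appeal without ever revealing how the underlying model actually reasons – exactly the gap noted above.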
Opting in and out: Why the choice is ours to make
These rules give us a right to an explanation, a means to be informed – but effective governance has to go a step further. To proceed, there needs to be a common ability to appeal an unintelligible decision, which may itself demand knowing whether or not the company itself understands the decision-making process. Citizens, consumers, and users need the ability not only to opt out, but to do so feasibly. Yet, as with the rest, this alone will not resolve the issues. Opting out may require citizens to hold a larger share of ownership over their data, leasing companies the right to use that data for targeted advertising – and then there is the question of which data is actually private enough to justify the right to opt out. When there is a change in your heartbeat, is that private?
The ultimate source of legitimacy, and a key provider of effective governance, will be giving the choice to citizens – offering the chance to say no to uses of their data, and perhaps even to opt out of AI-driven decision making. However, just as ethical governance places new demands on firms, there is a new demand on the public to be aware. Ethics cannot be a one-way street – consent may be the point of departure, but cooperation and consensus are how the practice has to continue. The more frightening demand of AI governance comes from the possibility that such choices to opt in or out may themselves be subject to influence by firms or governments using AI.
Any light at the end of the tunnel?
The decisions which corporations and governments will need to face follow from the previous issues – governing data is essential but may be insufficient; as is being informed; as is providing a right to an explanation; as is providing alternative means of delivering that explanation. Governments will need to decide where the largest burden of adjustment will fall – who will need to educate themselves most, and continuously. As AI innovation progresses, the specifics of decision making will change, reshaping the specifics of our arguments about what kinds of risks the practices built on a given algorithm actually pose.
The pursuit of such transparency on the use of AI may actually provide new avenues for oversight in governments and firms. Many government decisions, policy decisions, police decisions, and judicial decisions are currently intractable – that is to say, we may not get an explanation, only a rationalisation. If the black box problem is resolved – and that is a big if – then we may eventually have new means to make the governmental process fundamentally more transparent. The EU’s General Data Protection Regulation (GDPR) is indeed a step in the right direction; however, the future of effective appeals and governance will need to be worked out case by case.9 A one-size-fits-all approach will only serve to hide the discrepancies which complex algorithms are so effective at generating.
As such, we believe governments should weigh the following basic propositions when considering the scope of the problem:
1. Governing the data is a necessary but insufficient step to guarantee ethical AI. This will involve decisions on whether to use historical data or synthetic data sets.
2. Building transparency through the right to be informed on the use of AI will be insufficient if companies themselves do not know how an AI arrives at a decision, and therefore cannot provide a sufficient explanation in the event of an appeal.
3. Redefining accountability through the right to explanation will be insufficient without a more rigorous set of standards for companies to know how an AI made one decision instead of another.
4. Continually integrating the evolution of algorithmic complexity and novel uses of AI will be required.
It is unlikely that AI will fully replace decision making anytime soon – as such, the issue is not a purely technical problem; it is, first, an issue of awareness and intelligence in how we respond to what an AI tells us. The question of ethical AI is not simply the openness of the algorithm, but the effective design of the institutions by which AI is used and by which decision making derived from AI analyses is made clear.
Suffice it to say, politicians, coders, and philosophers have their work cut out for them. Technology is a tool, an extension of our problems – AI is no different, for now.
About the Authors
Josh Entsminger is an applied researcher at Nexus Frontier Tech. He additionally serves as a senior fellow at Ecole des Ponts Business School’s Center for Policy and Competitiveness, a research associate at IE Business School’s social innovation initiative, and a research contributor to the World Economic Forum’s Future of Production initiative.
Dr. Mark Esposito is a socio-economic strategist and bestselling author, researching megatrends, business model innovations, and competitiveness. He works at the interface between business, technology, and government, and co-founded Nexus FrontierTech, an artificial intelligence studio. He is Professor of Business and Economics at Hult International Business School and has been a faculty member at Harvard University since 2011. Mark is an affiliated faculty member of the Microeconomics of Competitiveness (MoC) network at Harvard Business School’s Institute for Strategy and Competitiveness and is currently co-leader of the network’s Institutes Council.
Dr. Terence Tse is an Associate Professor at ESCP Europe’s London campus and a Research Fellow at the Judge Business School in the UK. He is also head of Competitiveness Studies at the i7 Institute for Innovation and Competitiveness. Terence has worked as a consultant for Ernst & Young and served as an independent consultant to a number of companies. He has published extensively on various topics of interest in academic publications and newspapers around the world, and has been interviewed by television channels including CCTV, Channel 2 of Greece, France 24, and NHK.
Danny Goh is a serial entrepreneur and an early-stage investor. He is a partner and the Commercial Director of Nexus Frontier Tech, an AI advisory business with a presence in London, Geneva, Boston, and Tokyo that assists CEOs and board members of different organisations in building innovative businesses that take full advantage of artificial intelligence technology.