AI is Not Something to Be Trusted or Not Trusted
May 22nd 2020
Management/Strategy Consultant | Hackernoon’s “AI writer of the Year” | Editor of ThePourquoiPas.com
Yet, we’re so often busy discussing the ins and outs of whether A.I CAN do something, that we seldom ask if we SHOULD design it at all.
As such, below is a “quick” guide to the discussions surrounding A.I and ethics. It aims to help democratise conversations: we do not necessarily need smarter people at the table (and anything I write will not be news to an expert), but we DO need a bigger table. Or more tables. Or more seats. Or some sort of a video-conference solution.
I hate metaphors.
Ethics Can Mean Many Different Things
Before we dive into the contemporary discussion about ethics, we first need to understand what ethics is. Ethics has a pretty straightforward dictionary definition: “moral principles that govern a person’s behaviour or the conducting of an activity”.
Here are the schools of thought to know about in order to best understand why current propositions on A.I ethics have little to do with moral principles:
- Consequentialism; TL;DR = The greatest happiness of the greatest number is the foundation of morals and legislation, aka “the ends justify the means”. Close cousin: utilitarianism.
- Deontology; TL;DR = It is our duty to always do what is right, even if it produces negative consequences. “What thou avoidest suffering thyself seek not to impose on others” (Epictetus, aka the guy with the most epic name in philosophy — also a Stoic). Close cousin: Kantianism.
- Hedonism; TL;DR = Maximising self-gratification is the best thing we can do as people.
- Moral intuitionism; TL;DR = It is possible to know what is ethical without prior knowledge of other concepts such as good or evil.
- Pragmatism; TL;DR = Morals evolve, and rules should take this into account.
- State consequentialism; TL;DR = Whatever is good for the state is ethical.
- Virtue ethics; TL;DR = A virtue is a character trait that stems from prioritising good over evil through knowledge. It is separate from any single action or feeling. Close cousin: Stoicism.
Lesson 1: if a company or government tells you about its ethical principles, it is your duty to dig and ask which ethical branch those principles are based on. Much information can be found in such definitions.
“Ethics Theater” Plagues Companies
Below are such principles, as defined by a few large A.I companies. This is in no way exhaustive (yet is exhausting), but provides an insight into corporation-sponsored ethics-washing. These rules generally fall into 4 categories.
Accountability / Responsibility
Fairness / Bias
Why it’s B-S: A system created to find patterns in data might find the wrong patterns. That is the simplest definition of A.I bias. Such a buzzword helps companies shy away from hard topics, such as sexism, racism or ageism. God forbid they have to ask themselves hard questions, or be held accountable for the datasets they use. We have every right (and duty) to demand to know which biases exactly are being addressed, and how.
Data and Privacy
Why it’s B-S: If they really cared, they would have implemented the European standard (all hail the GDPR). They have not. Case closed.
Ethics is only truly mentioned twice in the many reports I’ve read:
“We will not design or deploy AI in the following application areas: technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” (Google)
You may have noted that only three companies are named above (Google, IBM, Microsoft). That is because the other major A.I companies have yet to produce anything worthy of being picked apart, choosing instead to invest in think-tanks that will ultimately influence governments. This highlights a major flaw common to all these principles: none subjects the company to enforceable rules. Why, then, do companies bother with ethics theater? The first reason, as explained above, is indeed to influence governments and steer the conversation in the “right” direction (see below the similarities between company and government priorities). Secondly, being seen as ethical by customers and employees helps avoid boycotts. Thirdly, and maybe most importantly, there is big money to be made in setting a standard: patents × universal use = $$$.
Lesson 2: Companies know very little about ethics, and have no incentives to take a stand on what is good or right. Corporate ethics is an oxymoron.
Governments are Doing their Best
There are many government-published white papers out there, but they are either vague as all hell, or shamefully incomplete. Furthermore, many see A.I through the lens of economic and geopolitical competition. One notable exception is the clear emphasis on ethics and responsibility in the EU’s A.I strategy and vision, especially relative to the US and China (both morally discredited to the bone). In order to get an overall look at what countries believe A.I ethics should be, I’ve put their principles into 7 categories, most of which closely resemble those highlighted by the above analysis of corporations.
Note that this is merely a (relevant) oversimplification of thousands of pages written by people much smarter and more informed than myself. I highly recommend reading the linked documents as they provide in-depth information about the listed principles.
Accountability / Responsibility
Accountable TO WHAT?! TO WHOM?! How is this question so very systematically avoided?
Fairness / Bias
As a reminder, bias can be avoided by ensuring that the data input is representative of reality, and that it does not reflect reality’s existing prejudices.
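To make the point concrete, here is a toy sketch (pure Python, with invented numbers — the groups and figures are hypothetical, not from any real dataset) of the kind of check one could demand: measure the selection rate per group in the training data, and flag a skew using the common “four-fifths” rule of thumb. A model trained to reproduce such data would inherit the skew.

```python
# Toy fairness check: selection rate per group in a (hypothetical) dataset.
# Each record: (group, hired). All numbers are invented for illustration.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # group A: 3/4 hired
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 1/4 hired
]

def selection_rates(records):
    """Fraction of positive outcomes per group."""
    totals, positives = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(hired)
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(records)
# "Four-fifths" rule of thumb: flag disparate impact when the lowest
# group's rate falls below 80% of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
print(rates)        # {'A': 0.75, 'B': 0.25}
print(ratio < 0.8)  # True -> the data itself encodes a skew
```

The check is trivial; the point is that “we address bias” should come with numbers like these, per protected attribute, or it means nothing.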
Data and Privacy
Oh, China and the US aren’t on that list? Cool, cool, cool… just a coincidence, I’m sure. I’m sure it’s also a coincidence that 3 completely different organisations came up with principles that are VERY similarly phrased.
Safety / Security / Reliability
Stakeholder inclusion / Societal good
- Only the EU, Norway and Australia deal with all 7 principles; much can be inferred from what certain countries have omitted. This lack of consensus is also worrying because an entity torn between several international guidelines, its home country’s national policy, and recommendations from companies and nonprofits might end up doing nothing.
- No list of principles ventures outside of these 7 points, and they rarely stray far from one another. This highlights a very real risk of groupthink (something that suits the private sector just fine). For example, nowhere is the right to self-determination mentioned, even though A.I could easily be used to nudge people one way or another (say, during an election).
- Red lines are shamefully absent: no country has forbidden itself from certain A.I uses, and none of the principles are legally binding. FYI, strong regulation looks like this:
- Technical definitions are entirely absent from the discussion. As are any relevant KPIs which could measure these principles. Who cares if some things are currently technically out of reach? Claiming so means misunderstanding the very definition of strategy (also, threaten to fine companies and they’ll find technical solutions pretty darn quickly).
- The incompleteness of these ethical guidelines is not obvious at first. It becomes clear the moment we ask “what happens if one principle goes against another?”. Are they ranked? Are there orders of importance? What happens if foregoing privacy rights is beneficial to society? When we start dealing with multiple, often competing, objectives, or try to account for intangibles like “freedom” and “well-being”, a satisfactory mathematical solution doesn’t exist. This is where a clear ethical philosophy would be useful: if state consequentialism is prioritised (as is generally the case in China), we at least have a clue as to what will be prioritised (Asimov’s three laws of robotics were pretty great at this).
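The “competing principles” problem above can be sketched in a few lines of Python (the policy options and their scores are entirely hypothetical): when no option is best on every principle at once, “maximise everything” is undefined, and only an explicit ranking — i.e. a declared ethical stance — picks a winner.

```python
# Hypothetical policy options scored (0-1) on two competing principles.
options = {
    "strict_privacy": {"privacy": 0.9, "societal_good": 0.4},
    "open_data":      {"privacy": 0.3, "societal_good": 0.9},
    "middle_ground":  {"privacy": 0.6, "societal_good": 0.6},
}

def dominates(a, b):
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

# No option dominates the others: all three survive, so "be ethical"
# alone cannot choose between them.
undominated = [
    name for name, score in options.items()
    if not any(dominates(other, score)
               for n, other in options.items() if n != name)
]
print(undominated)

# A declared ethical stance is just an explicit weighting of principles,
# e.g. a (hypothetical) GDPR-like stance that puts privacy first:
weights = {"privacy": 0.7, "societal_good": 0.3}
best = max(options, key=lambda n: sum(weights[k] * options[n][k] for k in weights))
print(best)  # strict_privacy
```

Change the weights and a different option wins; that is exactly why an unranked list of seven principles decides nothing.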
Lesson 3: Governments go a step further than companies in setting relevant principles. However, they still lack the courage of their convictions, as well as the technical know-how to make these principles enforceable.
Ethics is Easy, But Courage Isn’t
Now that we’ve established the basics of what ethics has to offer (not a whole lot at face value), and analysed various attempts by companies and governments alike, below are a few recommendations based not only on ethics, but also on courage with regard to the BIG issues (war, politics, autonomous cars, justice…).
I mention courage because this is what is missing from the current A.I discourse. The principles below have probably been thought of before, but were likely dismissed because of what they entail (loss of competitiveness, strategic advantage, cool-guy points…). I risk nothing by bringing them up, because I wield no real power in this conversation; I might not hold the same position were I representing a country or a company.
Principle of Rationality
Principle of Ranking
Principle of Ambivalence
Put simply, if I get into a Chinese autonomous car, I’d like to be able to choose a Western standard in case of an accident.
Principle of Accountability
This principle may appear blasphemous to many free-market proponents, raised as they are in countries where tobacco groups do not cause cancer, distilleries do not cause alcoholism, guns do not cause school shootings and drug companies do not cause overdoses. Silicon Valley has understood this, and its go-to excuse when its products cause harm (unemployment, bias, deaths…) is to say that its technologies are value-neutral, and that it is powerless to influence the nature of their implementation. That’s just an easy way out. Algorithms behaving unexpectedly are now a fact of life, and just as car makers must now be aware of emissions and European companies must protect their customers’ data, tech executives (as opposed to scientists, whose very raison d’être is pushing boundaries — and so it should be) must closely track an algorithm’s behavior as it changes over time and contexts, and, when needed, mitigate harmful behavior, lest they face a hefty fine or prison time.
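“Closely track an algorithm’s behavior as it changes over time” can be as simple as this sketch (a minimal drift check in pure Python; the baseline, threshold and window data are all hypothetical): compare the rate of positive decisions in each monitoring window against the rate measured when the system was approved, and escalate when it drifts.

```python
# Minimal sketch of tracking an algorithm's behaviour over time.
# All numbers are hypothetical placeholders for illustration.
BASELINE_POSITIVE_RATE = 0.30  # rate measured when the model was approved
DRIFT_THRESHOLD = 0.10         # maximum tolerated absolute deviation

def check_window(decisions):
    """decisions: list of booleans for one monitoring window.
    Returns (observed rate, whether the window drifted past the threshold)."""
    rate = sum(decisions) / len(decisions)
    drifted = abs(rate - BASELINE_POSITIVE_RATE) > DRIFT_THRESHOLD
    return rate, drifted

ok_week = [True] * 3 + [False] * 7   # 30% positive: in line with baseline
bad_week = [True] * 6 + [False] * 4  # 60% positive: behaviour has shifted

print(check_window(ok_week))   # (0.3, False)
print(check_window(bad_week))  # (0.6, True) -> escalate to a human
```

Real monitoring would slice this by subgroup and context, but the executive-level obligation is the same: define the baseline, define the threshold, and put a name next to the escalation path.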
Can’t handle it? Don’t green-light it.
If your signature is at the bottom of the page, you are accountable to the law.
Principle of Net Positive
And by sanity I mean being able to see the whole damn supply chain or your algorithm isn’t entering production.
Environmental issues cannot take the back seat any longer, even when discussing something as seemingly innocent as the digital world.
In the face of a limited technology and a plethora of potential uses, the benefits of A.I clearly outweigh the risks. This is, however, no reason not to have a conversation about its implementation before the robots start doing the talking for us (yes, this is hyperbole, sue me).
Let me say it loudly for the people at the back: A.I is not something to be trusted or not trusted. It is merely a man-made tool which is “fed” data in order to automate certain tasks, at scale. Do you trust your washing machine? Your calculator (yeah, me neither. Math is black magic)? It is all too easy to assume the agency of something that has none. A.I cannot be good or evil. Humans are good or evil (and so often simultaneously both). At the end of the day, A.I merely holds a dark mirror to society, its triumphs and its inequalities. This, above all, is uncomfortable. It’s uncomfortable because we keep finding out that we’re the a-holes.
A.I Ethics does not exist.
Let me say it loudly for the people at the back: Algorithms serve very specific purposes. They cannot stray from those purposes. What matters is whether or not a company decides that this purpose is worthy of being automated within a black box. As such, the question of A.I ethics should be rephrased as “do we trust (insert company’s name here)’s managers have our best interest at heart?” and, if yes, “do we trust the company’s programmers to implement that vision flawlessly while taking into account potential data flaws?” That’s trickier, isn’t it? But more realistic.
A.I Ethics does not exist.
Let me say it loudly for the people at the back: The vague checklists and principles, the powerless ethics officers and toothless advisory boards are there to save face, avoid change, and evade liability. If you come away with one lesson from this article, it is this: