If Your Company Uses AI, It Needs an Institutional Review Board (Harvard Business Review)

Posted: April 6, 2021 at 8:31 pm

Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now, prompted by the increasing frequency of headlines about biased algorithms, black-box models, and privacy violations, boards, C-suites, and data and AI leaders have realized it's an issue for which they need a strategic approach.

A solution is hiding in plain sight. Other industries have already found ways to deal with complex ethical quandaries quickly, effectively, and in a way that can be easily replicated. Instead of trying to reinvent this process, companies need to adopt and customize one of health care's greatest inventions: the Institutional Review Board, or IRB.

Most discussions of AI ethics follow the same flawed formula, consisting of three moves, each of which is problematic from the perspective of an organization that wants to mitigate the ethical risks associated with AI.

Here's how these conversations tend to go.

First, companies identify AI ethics with fairness in AI or, sometimes more generally, with fairness, equity, and inclusion. This certainly resonates with the zeitgeist: the rise of BLM, the anti-racist movement, and corporate support for diversity and inclusion measures.

Second, they move from the language of fairness to the language of bias: "biased algorithms," as popular media puts it, or "biased models," as engineers (more accurately) call them. The examples of (allegedly) biased models are well-known, including those from Amazon, Optum Health, and Goldman Sachs.

Finally, they look for ways to address the problem as they've defined it. They discuss technical tools (whether open source, or sold by Big Tech or a startup) for bias identification, which standardly compare a model's outputs against dozens of quantitative metrics or definitions of fairness found in the burgeoning academic research area of machine learning (ML) ethics. They may also consider engaging stakeholders, especially those that comprise historically marginalized populations.

While some recent AI ethics discussions go beyond this, many of the most prominent don't. And the most common set of actions that practitioners actually undertake flows from these three moves: Most companies adopt a risk-mitigation strategy that utilizes one of the aforementioned technical tools, if they're doing anything at all.

All of this should keep the stewards of brand reputation up at night, because this process has barely scratched the surface of the ethical risks that AI introduces. To understand why this is, let's take each of these moves in turn.

The first move sends you off in the wrong direction, because it immediately narrows the scope. Defining AI ethics as fairness in AI is problematic for the simple reason that fairness issues are just a subset of ethical issues; by equating the two, you've just decided to ignore giant swaths of ethical risk. Most obviously, there are issues relating to privacy violations (given that most current AI is ML, which is often powered by people's data) and to unexplainable outputs and black-box algorithms. But there are more. For example, the primary ethical risk related to AI-powered self-driving cars isn't bias or privacy, but killing and maiming. The ethical risk of facial recognition technology doesn't end when bias has been weeded out of the model (and there are a number of examples of biased models); non-biased facial recognition software still enables surveillance by corporations and (fascist) governments. AI also requires vast amounts of energy to power the computers that train the algorithms, which entails a shocking degree of damage to the environment. The list of ways a company can meet with ethical disaster is endless, and so reducing AI ethics to issues of fairness is a recipe for disaster.

The second move further reduces your remit: Issues of bias are a subset of issues of fairness. More specifically, issues of bias in the context of AI are issues of how different subpopulations are treated relative to others: whether goods, services, and opportunities are distributed fairly and justly. Are job ads placed in such a way that they are just as likely to be seen by the African-American population as by the white population? Are women who apply for a job just as likely as men to have their resumes lead to an interview? The problem with this approach is that issues of fairness extend beyond the fair distribution of goods to various subpopulations.
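
To make the comparative framing concrete, here is a minimal illustrative sketch in plain Python, with made-up numbers, of the kind of check these questions boil down to: whether interview callbacks are distributed across groups at comparable rates.

    # Toy example (numbers are invented): compare callback rates across groups.
    callbacks = {
        # group: (number of applicants, applicants who got an interview)
        "group_a": (200, 50),
        "group_b": (200, 22),
    }

    for group, (applied, interviewed) in callbacks.items():
        rate = interviewed / applied
        print(f"{group}: callback rate = {rate:.2f}")

    # A large gap between the rates (here 0.25 vs. 0.11) is exactly the kind of
    # disparity these comparative fairness questions are asking about.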

Most obviously, there are issues about what any individual deserves independently of how others are treated. If I'm torturing you, and you protest, it would hardly justify my actions to say, "Don't worry, I'm torturing other subpopulations at equal rates to the population of which you're a member." The entire category of human rights is about what every person deserves, independently of how others are being treated. Fairness crucially involves issues of individual desert, and a discussion, let alone a risk-mitigation strategy, that leaves this out is the more perilous for it.

The final trouble for organizations arrives in the third move: identifying and adopting bias-mitigation strategies and technical tools. Organizations often lean on technical tools, in particular, as their go-to (or only) meaningful instrument to ferret out bias, as measured by the quantitative definitions of fairness found in recent computer science literature. Here, we run into a raft of failures at ethical risk mitigation.

First, those two-dozen-plus quantitative metrics for fairness are not compatible with each other. You simply cannot be fair according to all of them at the same time. That means an ethical judgment needs to be made: Which, if any, of these quantitative metrics of fairness are the ethical/appropriate ones to use? Instead of bringing in lawyers, political theorists, or ethicists (all of whom have training in these kinds of complex ethical questions), these decisions are left to data scientists and engineers. But if the experts aren't in the room, you cannot expect your due diligence to have been responsibly discharged.
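
To see why the metrics cannot all be satisfied at once, consider a minimal sketch with assumed numbers: a classifier whose error rates are identical for two groups (so a parity-of-error-rates metric such as equalized odds is satisfied by construction) still produces unequal selection rates, and therefore violates demographic parity, whenever the groups' base rates differ.

    # Illustrative sketch (hypothetical numbers, not from any real system).
    def selection_rate(base_rate, tpr, fpr):
        # Fraction of the group receiving a positive decision overall.
        return base_rate * tpr + (1 - base_rate) * fpr

    TPR, FPR = 0.8, 0.1                   # same error profile for both groups
    base_rate_a, base_rate_b = 0.6, 0.3   # assumed differing base rates

    rate_a = selection_rate(base_rate_a, TPR, FPR)  # 0.52
    rate_b = selection_rate(base_rate_b, TPR, FPR)  # 0.31

    print(f"Group A selection rate: {rate_a:.2f}")
    print(f"Group B selection rate: {rate_b:.2f}")
    # Demographic parity would require these to be roughly equal; they are not.
    # Choosing which metric to enforce is an ethical judgment, not a technical one.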

Second, these tools standardly only kick in well into the development lifecycle. Because they measure the output of AI models, they're used after data sets have been chosen, models have been trained, and a good deal of resources has been devoted to the product. It is then inefficient, not to mention unpopular, to go back to the drawing board if a bias problem is detected that cannot be solved in a fairly straightforward way.

Third, while the search for a technical, quantitative solution to AI ethics is understandable, the truth is that many ethical issues are not reducible to quantitative metrics or KPIs. Surveillance is a problem because it destroys trust, causes anxiety, alters people's behavior, and ultimately erodes autonomy. Questions about whether people are being treated respectfully, whether a product design is manipulative or merely offers reasonable incentives, and whether a decision places a burden on people that is too great to reasonably expect of them all require qualitative assessment.

Fourth, these technical tools do not cover all types of bias. They do not, for instance, ferret out whether your search engine has labeled Black people "gorillas." These are cases of bias for which no technical tool exists.

Fifth, the way these tools measure for bias is often not obviously compatible with existing anti-discrimination law. For example, anti-discrimination law forbids companies from using variables like race and gender in their decision-making processes. But what if those variables need to be used in order to test models for bias, thereby influencing the changes made to the model in an effort to mitigate that bias? Using them in that way looks to be not only ethically permissible but plausibly ethically required, and yet it sits uneasily with how the law is written.
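
One pattern teams use in practice, sketched below with hypothetical field names, is to keep the protected attribute out of the features the model sees while retaining it in a separate audit step that measures outcomes by group; whether that use is legally safe is precisely the unsettled question raised above.

    # Minimal sketch (hypothetical data): the protected attribute is excluded
    # from model inputs but kept alongside each decision for auditing.
    from collections import defaultdict

    applicants = [
        {"years_experience": 5, "test_score": 88, "gender": "M", "decision": 1},
        {"years_experience": 1, "test_score": 70, "gender": "F", "decision": 0},
        {"years_experience": 6, "test_score": 85, "gender": "M", "decision": 1},
        {"years_experience": 2, "test_score": 68, "gender": "F", "decision": 0},
    ]

    # Features passed to the model: protected attribute deliberately dropped.
    features = [{k: v for k, v in a.items() if k not in ("gender", "decision")}
                for a in applicants]

    # Audit step: selection rate by group, using the attribute the model never saw.
    totals, positives = defaultdict(int), defaultdict(int)
    for a in applicants:
        totals[a["gender"]] += 1
        positives[a["gender"]] += a["decision"]
    for g in totals:
        print(f"{g}: selection rate = {positives[g] / totals[g]:.2f}")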

Finally, as regards engaging stakeholders, that is generally a good thing to do. However, aside from the logistical issues to which it gives rise, it does not by itself mitigate any ethical risks; it leaves them right in place, unless one knows how to think through stakeholder feedback. For instance, suppose your stakeholders are racist. Suppose the norms that are local to where you will deploy your AI encourage gender discrimination. Suppose your stakeholders disagree with each other because, in part, they have conflicting interests; stakeholders are not a monolithic group with a single perspective, after all. Stakeholder input is valuable, but you cannot programmatically derive an ethical decision from stakeholder input.

My point here isn't that technical tools and stakeholder outreach should be avoided; they are indeed quite useful. But we need more comprehensive ways to deal with ethical risk. Ideally, this will involve building a comprehensive AI ethical risk-mitigation program that is implemented throughout an organization, which is admittedly a heavy lift. If a company is looking for something to do in relatively short order that can have a big impact (and will later dovetail well with the bigger risk-mitigation program), it should take its cues on ethical risk mitigation from health care and create an IRB.

In the United States, IRBs in medicine were introduced to mitigate the ethical risks that arose and were commonly realized in research on human subjects. Some of that unethical conduct was particularly horrific, including the Tuskegee experiments, in which doctors refrained from treating Black men with syphilis, despite penicillin being available, so they could study the disease's unmitigated progression. More generally, the goals of an IRB include upholding the core ethical principles of respect for persons, beneficence, and justice. IRBs carry out their function by approving, denying, and suggesting changes to proposed research projects.

Comparing the kinds of ethical risks present in medicine to the kinds present in AI is useful for a number of reasons. First, in both instances there is the potential for harming individuals and groups of people (e.g., members of a particular race or gender). Second, there exists a vast array of ethical risks that can be realized in both fields, ranging from physical harm and mental distress to discriminating against protected classes, invading people's privacy, and undermining people's autonomy. Third, many of the ethical risks in both instances arise from the particular applications of the technology at hand.

Applied to AI, an IRB has the capacity to systematically and exhaustively identify ethical risks across the board. Just as in medical research, an AI IRB can not only play the role of approving and rejecting proposals but should also make ethical risk-mitigation recommendations to researchers and product developers. Moreover, a well-constituted IRB (more on this in a moment) can perform the functions that the current approach cannot.

When it comes to building and maintaining an IRB, three issues loom large: the board's membership, its jurisdiction, and the values it will strive to achieve (or at least the nightmares it strives to avoid).

To systematically and exhaustively identify and mitigate AI ethical risks, an AI IRB requires a diverse team of experts. You will want to include an engineer who understands the technical underpinnings of the research and/or product, so the committee can understand what is being done and what can be done from a technical perspective. Similarly, someone deeply familiar with product design is important. They speak the language of the product developers, understand customer journeys, and can help shape ethical risk-mitigation strategies in a way that doesn't undermine the essential functions of the products under consideration.

You'll also want to include ethics-adjacent members, like attorneys and privacy officers. Their knowledge of current and potential regulations, anti-discrimination law, and privacy practices makes them important resources when vetting for ethical risks.

Insofar as the AI IRB has as its function the identification and mitigation of ethical risks, it would be wise of you to include an ethicist: for example, someone with a Ph.D. in philosophy who specializes in ethics, or someone with a master's degree in medical ethics. The ethicist isn't there to act as a kind of priest with superior ethical views. They're there because they have training, knowledge, and experience related to understanding and spotting a vast array of ethical risks, familiarity with important concepts and distinctions that aid in clear-eyed ethical deliberation, and the skill of helping groups of people objectively assess ethical issues. Importantly, this kind of risk assessment is distinct from the model risk assessments created by data scientists and engineers, which tend to focus on issues relating to accuracy and data quality.

You may also find it useful to include various subject-matter experts depending on the research or product at hand. If the product is to be deployed in universities, someone deeply familiar with their operations, goals, and constituencies should be included. If it is a product to be deployed in Japan, including an expert in Japanese culture may be important.

Lastly, as part of an effort to maintain independence and avoid conflicts of interest (e.g., members seeking approval from their bosses), having at least one member unaffiliated with your organization is important (and is, incidentally, required for medical IRBs). At the same time, all members should have a sense of the business goals and necessities.

When should an AI IRB be consulted, how much power should it have, and where should it be situated in product development? In the medical community, IRBs are consulted prior to the start of research. The reason there is obvious: The IRB is consulted when testing on human subjects will be performed, and one needs approval before that testing begins. When it comes to authority, medical IRBs are the ultimate authority. They can approve and reject proposals, as well as suggest changes to the proposal, and their decisions are final. Once an IRB has denied a proposal, another IRB cannot approve it, and the decision can't be appealed.

The same rule should apply for an AI IRB.

Even though the harm typically occurs during deployment of the AI, not during research and product development, there's a strong case for involving an AI IRB before research and/or product development begins. The primary reason is that it's much easier, and therefore cheaper and more efficient, to change projects and products that do not yet exist. If, for instance, you only realize late that a potential or probable unintended consequence of the product's design poses a significant ethical risk, you will either have to go to market with a product you know to be ethically risky or go through the costly process of reengineering it.

While medical IRBs are granted their authority by law, there is at least one strong reason you should consider voluntarily granting that degree of power to an AI IRB: It is a tool by which great trust can be built with employees, clients, and consumers. That is particularly true if your organization is transparent about the operations, even if not the exact decisions, of the IRB. If being an ethically sound company is at the top of the pyramid of your company's values, then granting an AI IRB the independence and power to veto proposals without the possibility of an appeal (to a member of your executive team, for instance) is a good idea.

Of course, that is often (sadly) not the case. Most companies will see the AI IRB as a tool of risk mitigation, not elimination, and one should admit at least the possibility, if not the probability, of cases in which a company will pursue a project that is ethically risky but also highly profitable. For companies with that kind of ethical risk appetite, either an appeals process will have to be created or, if they are only mildly concerned with ethical risk mitigation, they may make the board's pronouncements advisory rather than binding. At that point, though, they should not expect the board to be particularly effective at systematically mitigating ethical risks.

You've assembled your AI IRB and defined its jurisdiction. Now you'll need to articulate the values by which it should be guided. The standard way of doing this is to articulate a set of principles and then seek to apply those principles to the case at hand. This is notoriously difficult, given the vast array of ways in which principles can be interpreted and applied; just think about the various and incompatible ways sincere politicians interpret and apply the principle of fairness.

In medical ethics, and in the law for that matter, decision-making is usually not guided by principles alone. Instead, it relies on case studies and precedent, comparing the case under investigation to previous cases that are similar. This allows your IRB to leverage the insights brought to bear on the previous case in the present one. It also increases the probability of consistency in the application of principles across cases.

Progress can be made here by articulating previous decisions senior leadership made on ethical grounds prior to the existence of the IRB. Suppose, for instance, the IRB knows senior leaders rejected a contract with a certain government due to particular ethical concerns about how the government operates generally or how they anticipated that government would use their product. The reasoning that led to that decision can reveal how future cases ought to be decided. In the event that no such cases exist, or none have been disclosed, it can be useful to entertain fictional examples, preferably ones that could plausibly become real cases in the future, and for the IRB to deliberate and decide on them. Doing that will ensure readiness for the real case when it arrives on the board's doorstep. It also encourages the cool objectivity with which fictional cases can be considered (when no money is on the line, for instance) to transfer to the real cases to which they'll be compared.

***

We all know that adopting an AI strategy is becoming a necessity to stay competitive. In a remarkable bit of good news, board members and data leaders see AI ethical risk mitigation as an essential component of that strategy. But current approaches are grossly inadequate and many leaders are uncertain how to develop this part of their strategy.

In the absence of the ideal (a widespread commitment to creating a robust AI ethical-risk program from day one), building, maintaining, and empowering an AI IRB can serve as a strong foundation for achieving that ideal. It can be created in relatively short order, it can be piloted fairly easily, it can be built on and expanded to cover all product teams and even all departments, and it creates and communicates a culture of ethics. That's a powerful punch, not only for AI ethics, but for the ethics of the organization more generally.
