Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now, prompted by the increasing frequency of headlines about biased algorithms, black-box models, and privacy violations, boards, C-suites, and data and AI leaders have realized it's an issue for which they need a strategic approach.
A solution is hiding in plain sight. Other industries have already found ways to deal with complex ethical quandaries quickly, effectively, and in a way that can be easily replicated. Instead of trying to reinvent this process, companies need to adopt and customize one of health care's greatest inventions: the Institutional Review Board, or IRB.
Most discussions of AI ethics follow the same flawed formula, consisting of three moves, each of which is problematic from the perspective of an organization that wants to mitigate the ethical risks associated with AI.
Here's how these conversations tend to go.
First, companies move to identify AI ethics with fairness in AI, or sometimes more generally, fairness, equity, and inclusion. This certainly resonates with the zeitgeist: the rise of BLM, the anti-racist movement, and corporate support for diversity and inclusion measures.
Second, they move from the language of fairness to the language of bias: "biased algorithms," as popular media puts it, or "biased models," as engineers (more accurately) call them. The examples of (allegedly) biased models are well-known, including those from Amazon, Optum Health, and Goldman Sachs.
Finally, they look for ways to address the problem as they've defined it. They discuss technical tools (whether open source, or sold by Big Tech or a startup) for bias identification, which typically compare a model's outputs against dozens of quantitative metrics or definitions of fairness found in the burgeoning academic research area of machine learning (ML) ethics. They may also consider engaging stakeholders, especially those that comprise historically marginalized populations.
While some recent AI ethics discussions go beyond this, many of the most prominent don't. And the most common set of actions that practitioners actually undertake flows from these three moves: Most companies adopt a risk-mitigation strategy that utilizes one of the aforementioned technical tools, if they're doing anything at all.
All of this should keep the stewards of brand reputation up at night, because this process has barely scratched the surface of the ethical risks that AI introduces. To understand why, let's take each of these moves in turn.
The first move sends you off in the wrong direction, because it immediately narrows the scope. Defining AI ethics as fairness in AI is problematic for the simple reason that fairness issues are just a subset of ethical issues; you've just decided to ignore giant swaths of ethical risk. Most obviously, there are issues relating to privacy violations (given that most current AI is ML, which is often powered by people's data) and unexplainable outputs from black-box algorithms. But there are more. For example, the primary ethical risk related to AI-powered self-driving cars isn't bias or privacy, but killing and maiming. The ethical risk of facial recognition technology doesn't end when bias has been weeded out of the model (of which there are a number of examples); non-biased facial recognition software still enables surveillance by corporations and (fascist) governments. AI also requires vast amounts of energy to power the computers that train the algorithms, which entails a shocking degree of damage to the environment. The list of ways a company can meet with ethical disaster is endless, so reducing AI ethics to issues involving fairness is a recipe for disaster.
The second move further reduces your remit: Issues of bias are a subset of issues of fairness. More specifically, issues of bias in the context of AI are issues of how different subpopulations are treated relative to others: whether goods, services, and opportunities are distributed fairly and justly. Are job ads placed in such a way that they are just as likely to be seen by the African-American population as by the white population? Are women applying for a job just as likely to have their resumes lead to an interview as men? The problem with this approach is that issues of fairness extend beyond issues of fairly distributing goods to various subpopulations.
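This distributive notion of bias is what most technical audits actually compute: a selection rate for each subpopulation, compared against the others. A minimal sketch of that kind of check, in Python; the data, group names, and the 0.8 threshold are invented for illustration, and the "four-fifths rule" invoked at the end is one common but contested heuristic, not a legal verdict:

```python
# Toy audit: compare how often a model advances candidates from each
# subpopulation. 1 = resume advanced to interview, 0 = rejected.
# All numbers are fabricated for illustration.
selections = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

# Selection rate per group: fraction of candidates advanced.
rates = {g: sum(v) / len(v) for g, v in selections.items()}

# Disparate impact ratio: worst-off group's rate over best-off group's.
ratio = min(rates.values()) / max(rates.values())

print(rates)                                  # {'group_a': 0.625, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40
# The "four-fifths rule" heuristic flags ratios below 0.8 for review.
```

Note how little this measures: it says nothing about whether any individual candidate was treated as they deserved, which is exactly the gap the next section describes.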
Most obviously, there are issues about what any individual deserves independently of how others are treated. If I'm torturing you, and you protest, it would hardly justify my actions to say, "Don't worry, I'm torturing other subpopulations at equal rates to the population of which you're a member." The entire category of human rights is about what every person deserves, independently of how others are being treated. Fairness crucially involves issues of individual desert, and a discussion, let alone a risk-mitigation strategy, that leaves this out is the more perilous for it.
The final trouble for organizations arrives in the third move: identifying and adopting bias-mitigation strategies and technical tools. Organizations often lean on technical tools, in particular, as their go-to (or only) meaningful instrument to ferret out bias, as measured by quantitative definitions of fairness found in recent computer science literature. Here, we run into a raft of failures at ethical risk mitigation.
First, those two-dozen-plus quantitative metrics for fairness are not mutually compatible. You simply cannot be fair according to all of them at the same time. That means an ethical judgment needs to be made: Which, if any, of these quantitative metrics of fairness are the ethically appropriate ones to use? Instead of bringing in lawyers, political theorists, or ethicists (all of whom have training in these kinds of complex ethical questions), these decisions are left to data scientists and engineers. But if the experts aren't in the room, you cannot expect your due diligence to have been responsibly discharged.
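The incompatibility claim can be made concrete with a toy example. The sketch below (all numbers invented) shows a classifier that satisfies equal opportunity, i.e., equal true-positive rates across groups, while violating demographic parity, i.e., equal selection rates. Whenever the groups' base rates of qualification differ, a non-trivial classifier generally cannot satisfy both at once, so someone has to choose:

```python
# Toy applicant pool as (group, truly_qualified, predicted_positive) rows.
# Group A: 8 applicants, 6 qualified; Group B: 8 applicants, 2 qualified.
# This classifier predicts positive exactly for the qualified applicants.
data = [
    *[("A", 1, 1)] * 6, *[("A", 0, 0)] * 2,
    *[("B", 1, 1)] * 2, *[("B", 0, 0)] * 6,
]

def selection_rate(group):
    """Demographic parity compares this across groups."""
    rows = [r for r in data if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    """Equal opportunity compares this across groups."""
    rows = [r for r in data if r[0] == group and r[1] == 1]
    return sum(r[2] for r in rows) / len(rows)

for g in ("A", "B"):
    print(g, selection_rate(g), true_positive_rate(g))
# A 0.75 1.0
# B 0.25 1.0
# Equal opportunity holds (TPR 1.0 for both), demographic parity fails
# (selection rates 0.75 vs. 0.25). Forcing equal selection rates here
# would mean rejecting qualified A applicants or accepting unqualified
# B applicants, breaking other metrics instead.
```

Which metric to satisfy is precisely the ethical judgment the text says should not be left to engineers alone.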
Second, these tools standardly only kick in well into the development lifecycle. Because they measure the output of AI models, they're used after data sets have been chosen, models have been trained, and a good deal of resources have been devoted to the product. It is then inefficient, not to mention unpopular, to go back to the drawing board if a bias problem is detected that cannot be solved in a fairly straightforward way.
Third, while the search for a technical, quantitative solution to AI ethics is understandable, the truth is that many ethical issues are not reducible to quantitative metrics or KPIs. Surveillance is a problem because it destroys trust, causes anxiety, alters people's behavior, and ultimately erodes autonomy. Questions about whether people are being treated respectfully, whether a product design is manipulative or merely offers reasonable incentives, and whether a decision places a burden on people that is too great to reasonably expect of them all require qualitative assessments.
Fourth, these technical tools do not cover all types of bias. They do not, for instance, ferret out whether your search engine has labeled Black people "gorillas." These are cases of bias for which no technical tool exists.
Fifth, the way these tools measure for bias is often not compatible with existing anti-discrimination law. For example, anti-discrimination law forbids companies from using variables like race and gender in their decision-making processes. But what if those variables need to be used in order to test a model for bias, thereby influencing the changes made to the model in an effort to mitigate that bias? That looks not only ethically permissible, but plausibly ethically required as well.
Finally, as regards engaging stakeholders, that is generally a good thing to do. However, aside from the logistical issues to which it gives rise, it does not by itself mitigate any ethical risks; it leaves them right in place, unless one knows how to think through stakeholder feedback. For instance, suppose your stakeholders are racist. Suppose the norms that are local to where you will deploy your AI encourage gender discrimination. Suppose your stakeholders disagree with each other because, in part, they have conflicting interests; stakeholders are not a monolithic group with a single perspective, after all. Stakeholder input is valuable, but you cannot programmatically derive an ethical decision from stakeholder input.
My point here isn't that technical tools and stakeholder outreach should be avoided; they are indeed quite useful. But we need more comprehensive ways to deal with ethical risk. Ideally, this will involve building a comprehensive AI ethical risk-mitigation program that is implemented throughout an organization (admittedly a heavy lift). If a company is looking for something to do in relatively short order that can have a big impact (and will later dovetail well with the bigger risk-mitigation program), it should take its cues on ethical risk mitigation from health care and create an IRB.
In the United States, IRBs in medicine were introduced to mitigate the ethical risks that arose and were commonly realized in research on human subjects. Some of that unethical conduct was particularly horrific, including the Tuskegee experiments, in which doctors refrained from treating Black men with syphilis, despite penicillin being available, so they could study the disease's unmitigated progression. More generally, the goals of an IRB include upholding the core ethical principles of respect for persons, beneficence, and justice. IRBs carry out their function by approving, denying, and suggesting changes to proposed research projects.
Comparing the kinds of ethical risks present in medicine to the kinds present in AI is useful for a number of reasons. First, in both instances there is the potential for harming individuals and groups of people (e.g. members of a particular race or gender). Second, there exists a vast array of ethical risks that can be realized in both fields, ranging from physical harm and mental distress to discriminating against protected classes, invading peoples privacy, and undermining peoples autonomy. Third, many of the ethical risks in both instances arise from the particular applications of the technology at hand.
Applied to AI, an IRB has the capacity to systematically and exhaustively identify ethical risks across the board. Just as in medical research, an AI IRB should not only approve and reject various proposals but also make ethical risk-mitigation recommendations to researchers and product developers. Moreover, a well-constituted IRB (more on this in a moment) can perform the functions that the current approach cannot.
When it comes to building and maintaining an IRB, three issues loom large: the board's membership, its jurisdiction, and the values it will strive to achieve (or at least the nightmares it strives to avoid).
To systematically and exhaustively identify and mitigate AI ethical risks, an AI IRB requires a diverse team of experts. You will want to include an engineer who understands the technical underpinnings of the research and/or product, so the committee can understand what is being done and what can be done from a technical perspective. Similarly, someone deeply familiar with product design is important. They speak the language of the product developers, understand customer journeys, and can help shape ethical risk-mitigation strategies in a way that doesn't undermine the essential functions of the products under consideration.
You'll also want to include ethics-adjacent members, like attorneys and privacy officers. Their knowledge of current and potential regulations, anti-discrimination law, and privacy practices is an important resource when vetting for ethical risks.
Insofar as the AI IRB has as its function the identification and mitigation of ethical risks, it would be wise to include an ethicist, e.g., someone with a Ph.D. in philosophy who specializes in ethics or someone with a master's degree in medical ethics. The ethicist isn't there to act as a kind of priest with superior ethical views. They're there because they have training, knowledge, and experience related to understanding and spotting a vast array of ethical risks; familiarity with important concepts and distinctions that aid in clear-eyed ethical deliberation; and the skill of helping groups of people objectively assess ethical issues. Importantly, this kind of risk assessment is distinct from the model risk assessments created by data scientists and engineers, which tend to focus on issues relating to accuracy and data quality.
You may also find it useful to include various subject-matter experts depending on the research or product at hand. If the product is to be deployed in universities, someone deeply familiar with their operations, goals, and constituencies should be included. If it is a product to be deployed in Japan, including an expert in Japanese culture may be important.
Lastly, as part of an effort to maintain independence and avoid conflicts of interest (e.g., members looking for approval from their bosses), having at least one member unaffiliated with your organization is important (and is, incidentally, required for medical IRBs). At the same time, all members should have a sense of the business's goals and necessities.
When should an AI IRB be consulted, how much power should it have, and where should it be situated in product development? In the medical community, IRBs are consulted prior to the start of research. The reason there is obvious: The IRB is consulted when testing on human subjects will be performed, and one needs approval before that testing begins. When it comes to authority, medical IRBs are the ultimate authority. They can approve and reject proposals, as well as suggest changes to the proposal, and their decisions are final. Once an IRB has denied a proposal, another IRB cannot approve it, and the decision cant be appealed.
The same rule should apply for an AI IRB.
Even though the harm typically occurs during deployment of the AI, not during research and product development, there's a strong case for convening an AI IRB before research and/or product development begins. The primary reason is that it's much easier, and therefore cheaper and more efficient, to change projects and products that do not yet exist. If, for instance, you only realize after development that a significant ethical risk arises from a potential or probable unintended consequence of how the product was designed, you will either have to go to market with a product you know to be ethically risky or go through the costly process of reengineering it.
While medical IRBs are granted their authority by the law, there is at least one strong reason you should consider voluntarily granting that degree of power to an AI IRB: It is a tool by which great trust can be built with employees, clients, and consumers. That is particularly true if your organization is transparent about the operations, even if not the exact decisions, of the IRB. If being an ethically sound company is at the top of the pyramid of your company's values, then granting an AI IRB the independence and power to veto proposals without the possibility of an appeal (to a member of your executive team, for instance) is a good idea.
Of course, that is often (sadly) not the case. Most companies will see the AI IRB as a tool of risk mitigation, not elimination, and one should admit at least the possibility, if not the probability, of cases in which a company pursues a project that is ethically risky but also highly profitable. For companies with that kind of ethical risk appetite, either an appeals process will have to be created or, if they are only mildly concerned with ethical risk mitigation, the board's pronouncements can be made advisory instead of binding. At that point, though, they should not expect the board to be particularly effective at systematically mitigating ethical risks.
You've assembled your AI IRB and defined its jurisdiction. Now you'll need to articulate the values by which it should be guided. The standard way of doing this is to articulate a set of principles and then seek to apply those principles to the case at hand. This is notoriously difficult, given the vast array of ways in which principles can be interpreted and applied; just think about the various and incompatible ways sincere politicians interpret and apply the principle of fairness.
In medical ethics (and in the law, for that matter), decision-making is usually not guided by principles alone. Instead, it relies on case studies and precedent, comparing the case under investigation to similar previous cases. This allows your IRB to bring the insights from previous cases to bear on the present one. It also increases the probability of consistency in the application of principles across cases.
Progress can be made here by articulating previous decisions senior leadership made on ethical grounds before the IRB existed. Suppose, for instance, the IRB knows senior leaders rejected a contract with a certain government due to particular ethical concerns about how the government operates generally or how they anticipated that government would use their product. The reasoning that led to that decision can reveal how future cases ought to be decided. In the event that no such cases exist and/or none have been disclosed, it can be useful to entertain fictional examples, preferably ones that could well become real cases in the future, and for the IRB to deliberate and decide on them. Doing so ensures readiness for the real case when it arrives on the board's doorstep. It also encourages the cool objectivity with which fictional cases can be considered (no money is on the line, for instance) to transfer to the real cases to which they'll be compared.
***
We all know that adopting an AI strategy is becoming a necessity to stay competitive. In a remarkable bit of good news, board members and data leaders see AI ethical risk mitigation as an essential component of that strategy. But current approaches are grossly inadequate, and many leaders are uncertain how to develop this part of their strategy.
In the absence of the ideal (a widespread commitment to creating a robust AI ethical-risk program from day one), building, maintaining, and empowering an AI IRB can serve as a strong foundation for achieving that ideal. It can be created in relatively short order, it can be piloted fairly easily, it can be built on and expanded to cover all product teams and even all departments, and it creates and communicates a culture of ethics. That's a powerful punch not only for AI ethics, but for the ethics of the organization more generally.
If Your Company Uses AI, It Needs an Internal Review Board - Harvard Business Review