"The proliferation of artificial intelligence and algorithmic decision-making has helped shape myriad aspects of our society: From facial recognition to deep fake technology to criminal justice and health care, their applications are seemingly endless. Across these contexts, the story of applied algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved in many applications of these technologies, the stakes are often incredibly high."
This is the introduction to FTC Commissioner Rebecca Kelly Slaughter's whitepaper, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission. If you have been keeping up with data-driven and algorithmic decision-making, analytics, machine learning, AI, and their applications, you can tell it's spot on. The 63-page whitepaper does not disappoint.
Slaughter worked on the whitepaper with her FTC colleagues Janice Kopec and Mohamad Batal. Their work was supported by Immuta, and it has just been published as part of the Yale Law School Information Society Project's Digital Future Whitepaper Series. The series, launched in 2020, is a venue for leading global thinkers to question the impact of digital technologies on law and society.
The series aims to provide academics, researchers, and practitioners a forum to describe novel challenges of data and regulation, to confront core assumptions about law and technology, and to propose new ways to align legal and ethical frameworks to the problems of the digital world.
Slaughter notes that in recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. Her work provides a baseline taxonomy of algorithmic harms that portend injustice, describing both the harms themselves and the technical mechanisms that drive those harms.
In addition, it describes Slaughter's view of how the FTC's existing tools can and should be aggressively applied to thwart injustice, and explores how new legislation or an FTC rulemaking could help structurally address the harms generated by algorithmic decision-making.
Slaughter identifies three ways in which flaws in algorithm design can produce harmful results: faulty inputs, faulty conclusions, and failure to adequately test.
The value of a machine learning algorithm is inherently related to the quality of the data used to develop it, and faulty inputs can produce thoroughly problematic outcomes. This broad concept is captured in the familiar phrase: "Garbage in, garbage out."
The data used to develop a machine-learning algorithm might be skewed because individual data points reflect problematic human biases or because the overall dataset is not adequately representative. Often, skewed training data reflect historical and enduring patterns of prejudice or inequality, and when they do, these faulty inputs can create biased algorithms that exacerbate injustice, Slaughter notes.
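To make the "garbage in, garbage out" mechanism concrete, here is a minimal sketch of our own, using entirely synthetic data (it is not drawn from the whitepaper or any of the cases it cites): two groups are equally skilled, but the historical hiring decisions used as training labels penalize one group, and the trained model faithfully learns that penalty.

```python
# Minimal sketch of "garbage in, garbage out" (synthetic data).
# Two groups have identical skill distributions, but the historical
# hiring labels penalize group 1 -- the bias baked into the inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)      # hypothetical demographic flag
skill = rng.normal(0, 1, size=n)        # same distribution in both groups

# Biased historical decisions used as training labels:
hired = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# Train on the biased history.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates (skill = 0), one from each group:
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group 1 scores markedly lower
```

Nothing in the training step is malicious; the model simply treats yesterday's prejudiced decisions as ground truth.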
She cites some high-profile examples of faulty inputs, such as Amazon's failed attempt to develop a hiring algorithm driven by machine learning, and the grading algorithms used for the International Baccalaureate's and the UK's A-Level exams. In each of those cases, the algorithms introduced to automate decisions picked up patterns of bias in the data used to train them and reproduced those patterns.
A different type of problem involves feeding data into algorithms that generate conclusions that are inaccurate or misleading -- perhaps better phrased as "data in, garbage out." This type of flaw, faulty conclusions, undergirds fears about the rapidly proliferating field of AI-driven "affect recognition" technology and is often fueled by failures in experimental design.
Machine learning often works as a black box, and as applications are becoming more impactful, that can be problematic. Image: Immuta
Slaughter describes situations in which algorithms attempt to find patterns in, and reach conclusions based on, certain types of physical presentation and mannerisms. But, she notes, as one might expect, human character cannot be reduced to a set of objective, observable factors. Slaughter highlights the use of affect recognition technology in hiring as particularly problematic.
Some applications are more problematic than others. Consider a company that purports to profile more than sixty personality traits relevant to job performance -- from "resourceful" to "adventurous" to "cultured" -- all based on an algorithm's analysis of an applicant's 30-second recorded video cover letter.
Despite the veneer of objectivity that comes from throwing around terms such as "AI" and "machine learning," in many contexts the technology is still deeply imperfect, and many argue that its use amounts to pseudo-science.
But even algorithms designed with care and good intentions can still produce biased or harmful outcomes that are unanticipated, Slaughter notes. Too often, algorithms are deployed without adequate testing that could uncover these unwelcome outcomes before they harm people in the real world.
Slaughter mentions bias uncovered in testing of Google's and LinkedIn's search results, but focuses on the health care field. A recent study found racial bias in a widely used machine-learning algorithm intended to improve access to care for high-risk patients with chronic health problems.
The algorithm used health care costs as a proxy for health needs, but for a variety of reasons unrelated to health needs, white patients spend more on health care than their equally sick Black counterparts do. Using health care costs to predict health needs, therefore, caused the algorithm to disproportionately flag white patients for additional care.
Researchers estimated that as a result of this embedded bias, the number of Black patients identified for extra care was reduced by more than half. The researchers who uncovered the flaw in the algorithm were able to do so because they looked beyond the algorithm itself to the outcomes it produced and because they had access to enough data to conduct a meaningful inquiry.
When the researchers identified the flaw, the algorithm's manufacturer worked with them to mitigate its impact, ultimately reducing bias by 84% -- exactly the type of bias reduction and harm mitigation that testing and modification seek to achieve, Slaughter notes.
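The kind of outcome audit that exposed this flaw can be sketched in a few lines. The code below is our own hypothetical illustration, not the study's actual methodology; the column names and toy numbers are assumptions.

```python
# Hypothetical outcome audit: at a given risk-score cutoff, how often
# is each group flagged for extra care, and how sick are the flagged
# patients? Column names and numbers are illustrative assumptions.
import pandas as pd

def audit(df: pd.DataFrame, cutoff: float) -> pd.DataFrame:
    df = df.assign(flagged=df["risk_score"] >= cutoff)
    rates = df.groupby("group")["flagged"].mean().rename("selection_rate")
    burden = (df[df["flagged"]]
              .groupby("group")["chronic_conditions"].mean()
              .rename("mean_conditions_if_flagged"))
    return pd.concat([rates, burden], axis=1)

# Toy data mirroring the study's finding: equally sick groups,
# but group B gets systematically lower scores (the cost proxy).
toy = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 4,
    "risk_score": [0.9, 0.8, 0.4, 0.3, 0.7, 0.5, 0.3, 0.2],
    "chronic_conditions": [3, 2, 1, 1, 3, 2, 1, 1],
})
print(audit(toy, cutoff=0.6))  # B's selection rate is half of A's
```

The point of the audit is that it looks past the score itself to the decisions the score drives, which is exactly what the researchers needed data access to do.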
Not all harmful consequences of algorithms stem from design flaws. Slaughter also identifies three ways in which sophisticated algorithms can generate systemic harm: by facilitating proxy discrimination, by enabling surveillance capitalism, and by inhibiting competition in markets.
Proxy discrimination is the use of one or more facially neutral variables to stand in for a legally protected trait, often resulting in disparate treatment of or disparate impact on protected classes for certain economic, social, and civic opportunities. In other words, these algorithms identify seemingly neutral characteristics to create groups that closely mirror a protected class, and these "proxies" are used for inclusion or exclusion.
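A toy example shows why simply dropping the protected attribute does not help: a correlated, facially neutral feature lets the model reconstruct the group boundary anyway. Everything below is synthetic, and the "neighborhood" feature is a hypothetical stand-in for proxies like ZIP code.

```python
# Synthetic sketch of proxy discrimination: the protected attribute is
# excluded from training, but a correlated "neutral" feature (a made-up
# neighborhood code, standing in for ZIP code) carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

protected = rng.integers(0, 2, size=n)
# Facially neutral proxy, strongly correlated with the protected class:
neighborhood = (protected + rng.normal(0, 0.3, size=n) > 0.5).astype(float)
income = rng.normal(0, 1, size=n)

# Historical approvals biased against the protected class itself:
approved = (income - 1.0 * protected + rng.normal(0, 0.5, size=n)) > 0

# Train WITHOUT the protected attribute -- only "neutral" features.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[protected == g].mean():.2f}")
```

The approval rates still diverge sharply by protected class, even though the model never saw that attribute -- the proxy did the work.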
Slaughter mentions some high-profile cases of proxy discrimination: the Department of Housing and Urban Development's allegations against Facebook's "Lookalike Audiences" tool, the showing of job openings to different audiences along demographic lines, and FinTech innovations that can enable the continuation of historical bias, whether by denying access to the credit system or by efficiently targeting high-interest products to those who can least afford them.
An additional way algorithmic decision-making can fuel broader social challenges is the role it plays in the system of surveillance capitalism, which Slaughter defines as a business model that systematically erodes privacy, promotes misinformation and disinformation, drives radicalization, undermines consumers' mental health, and reduces or eliminates consumers' choices.
AI ethics has very real ramifications that are becoming increasingly widespread and important.
Through constant, data-driven adjustments, Slaughter notes, algorithms that process consumer data, often in real time, evolve and "improve" in a relentless effort to capture and monetize as much attention from as many people as possible. Many surveillance capitalism enterprises are remarkably successful at using algorithms to "optimize" for consumers' attention with little regard for downstream consequences.
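At its crudest, that kind of "optimization" is just a bandit algorithm serving whatever gets clicked. The sketch below is a deliberately simplified illustration with made-up click probabilities, not any platform's actual system; note that nothing in its objective measures downstream harm.

```python
# Deliberately crude sketch of attention "optimization": an
# epsilon-greedy bandit serving whichever content variant draws the
# most clicks. Click probabilities are made up; nothing in the
# objective measures downstream harm.
import random

random.seed(0)
click_prob = {"balanced": 0.05, "sensational": 0.08, "outrage": 0.12}
variants = list(click_prob)
shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best observed click-through rate; sometimes explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

for _ in range(100_000):
    v = choose()
    shows[v] += 1
    clicks[v] += random.random() < click_prob[v]

for v in variants:
    print(f"{v:>11}: shown {shows[v]:>6} times")  # "outrage" wins the feed
```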
Slaughter examines the case of YouTube content aimed at children and how it has been weaponized. The FTC has dealt with this, and Slaughter notes that YouTube announced it would use machine learning to actively search for mis-designated content and automatically apply age restrictions.
While this sounds like the technological backstop Slaughter requested in that case, she notes two major differences: first, it is entirely voluntary, and second, both its application and its effectiveness are opaque. That, she argues, brings up a broader set of concerns about surveillance capitalism -- one that extends beyond any single platform.
The pitfalls associated with algorithmic decision-making sound most obviously in the laws the FTC enforces through its consumer protection mission, Slaughter notes. But the FTC is also responsible for promoting competition, and the threats posed by algorithms profoundly affect that mission as well.
Moreover, she goes on to add, these two missions are not actually distinct, and problems -- including those related to algorithms and economic justice -- need to be considered with both competition and consumer protection lenses.
Slaughter examines topics including traditional antitrust fare such as pricing and collusion, as well as more novel questions such as the implications of the use of algorithms by dominant digital firms to entrench market power and to engage in exclusionary practices.
Overall, the whitepaper is well-researched and provides a good overview of the subject matter. The paper's sections on using the FTC's current authorities to better protect consumers, and on proposed new legislative and regulatory solutions, deal with legal tools we do not feel qualified to report on, but we encourage interested readers to read them.
We would also like to note, however, that while it's important to be aware of AI ethics and the far-reaching consequences of data and algorithms, it's equally important to maintain a constructive and unbiased attitude when it comes to issues that are often subjective and open to interpretation.
An overzealous attitude in debates that often take place on social media, where context and intent can easily be misinterpreted and misrepresented, may not be the most constructive way to make progress. Case in point: the misadventures of AI figureheads Yann LeCun and Pedro Domingos.
When it comes to AI ethics, we need to go beyond sensationalism and toward a well-informed and, well, data-driven approach. Slaughter's work seems like a step in that direction.
Originally posted here:
AI ethics in the real world: FTC commissioner shows a path toward economic justice - ZDNet