The Ethical Dilemma at the Heart of Big Tech Companies

Posted: July 13, 2022 at 8:58 am

If it seems like every week there's a new scandal about ethics and the tech industry, it's not your imagination. Even as the tech industry tries to establish concrete practices and institutions around tech ethics, hard lessons are being learned about the wide gap between the practice of doing ethics and what people think of as ethical. This helps explain, in part, why it raises eyebrows when Google dissolves its short-lived AI ethics advisory board in the face of public outcry over the inclusion of the Heritage Foundation's controversial president, or when organized pressure from Google's engineering staff results in the cancellation of military contracts.

This gap is important because, alongside these decidedly bad calls by those leading the charge for ethics in industry, we are also seeing the tech sector begin investing meaningful resources in the organizational capacity to identify, track, and mitigate the consequences of algorithmic technologies. We are at a point where the academics and critics who spent decades exhorting the industry to take such considerations seriously might be expected to declare a small victory. Yet in many cases, those same outside voices are raising a vigorous round of objections to the tech industry's choices, often with good reason. While just a few years ago it seemed everyone shared an understanding of what was meant by ethics in the tech industry, now that ethics is a site of power, who gets to determine the meaning and practices of ethics is hotly contested.

In Silicon Valley, however, it is unclear what all of this means, particularly when it comes to translating ethical principles into the practical necessities and language of business. Is tech ethics just the pursuit of robust processes? What are tech ethicists' goals, and what is their theory of change? Is any of this work measurable within the frameworks companies already use to account for value? How much does ethics add to the cost of doing business, and how does that differ for companies that are just starting up, racing toward IPO, or already household names?

[Image: Building fair and equitable machine learning systems.]

To find out, we, along with our co-author danah boyd (who spells her name in lowercase), studied those doing the work of ethics inside of companies, whom we call "ethics owners," to learn what they see as their task at hand. "Owner" is common parlance inside flat corporate structures for someone responsible for coordinating a domain of work across the different units of an organization. Our research interviewing this new class of tech industry professionals shows that their work is, tentatively and haltingly, becoming more concrete through both an attention to process and a concern with outcomes. We learned that people in these new roles encounter an important set of tensions that are fundamentally unresolvable. Ethical issues are never solved; they are navigated and negotiated as part of the work of ethics owners.

The central challenge ethics owners grapple with is negotiating external pressure to respond to ethical crises while remaining responsive to the internal logics of their companies and the industry. On the one hand, external criticism pushes them toward challenging core business practices and priorities. On the other, the logics of Silicon Valley, and of business more generally, create pressure to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley, from hiring practices to policy positions, and retroactively justifies the industry's power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically inclined stakeholders such as elected officials and advocacy groups. In our interviews, this manifested as a reliance on engineers to use their personal judgment to grapple with hard questions on the ground, trusting them to discern and evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and thinking hard about the potential harms of a product in the real world is not the same as thoroughly understanding how someone whose life is very different from a software engineer's might be affected by, say, predictive policing or facial recognition technology. Ethics owners find themselves pulled between technical staff who assert generalized competence over many domains and their own knowledge that ethics is a specialized domain requiring deep contextual understanding.

Market fundamentalism: Although it is not the case that tech companies will choose profit over social good in every instance, it is the case that the organizational resources necessary for morality to win out need to be justified in market-friendly terms. As a senior leader in a research division explained, this means that "the system that you create has to be something that people feel adds value and is not a massive roadblock that adds no value, because if it is a roadblock that has no value, people literally won't do it, because they don't have to." In the end, the market sets the terms of the debate, even if maximum profit is not the only acceptable outcome. Ethics owners therefore must navigate between avoiding measurable downside risks and promoting the upside benefits of more ethical AI. Arguing against releasing a product until it undergoes additional testing for racial or gender bias, in order to avoid a potential lawsuit, is one thing. Arguing that more extensive testing will lead to greater sales numbers is something else. Both are important, but one fits squarely inside the legal and compliance team; the other fits better in product teams.

Technological solutionism: The idea that all problems have tractable technical fixes has been reinforced by the rewards the industry has reaped for producing technology that it believes does solve problems. As such, the organizational practices that facilitate technical success are often ported over to ethics challenges. This manifests in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work (a sketch of one such metric follows below). That optimism is counterweighted by a concern that ethics, even when posed as a technical question, becomes intractable, as if it were too big a problem to tackle. This tension is on stark display in efforts to address bias and unfairness in AI: there are dozens of solutions for fixing algorithmic bias through complex statistical methods, but far less work on addressing the underlying bias in data collection, or in the real world from which that data is collected. And even when an algorithm is made fair, fairness is only a subset of the ethical questions about a product. What good is fairness if it only produces a less biased set of people harmed by a dangerous product?
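To make concrete the kind of evaluative metric this solutionist impulse produces, here is a minimal sketch of one common statistical fairness check, demographic parity, which reduces "bias" to a single gap between groups' positive-prediction rates. The function names, data, and threshold are hypothetical illustrations, not tooling from the study:

```python
# Minimal sketch of a demographic parity check (illustrative only).
# Demographic parity compares the rate of positive predictions
# across groups; a large gap is one possible signal of bias.

from typing import Sequence


def positive_rate(predictions: Sequence[int]) -> float:
    """Fraction of predictions that are positive (== 1)."""
    return sum(predictions) / len(predictions) if predictions else 0.0


def demographic_parity_difference(
    preds_group_a: Sequence[int],
    preds_group_b: Sequence[int],
) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))


if __name__ == "__main__":
    # Hypothetical model outputs for two demographic groups.
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% approved
    group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 25.0% approved

    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.3f}")

    # An assumed, arbitrary cutoff; in practice the acceptable gap
    # is a policy judgment, not a purely technical one.
    THRESHOLD = 0.1
    if gap > THRESHOLD:
        print("Potential disparity: flag for further review.")
```

The appeal of such a check is exactly what the paragraph above describes: it turns a messy ethical question into a single digestible number. Its limitation is the same; a passing score says nothing about biased data collection upstream or about whether the product should exist at all.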

Our research shows that, even as they engage in some form of critique of the tech industry, the collective goal of ethics owners is not to stop it. They, like the engineers they work alongside, are enmeshed in organizational cultures that reward metric-oriented and fast-paced work with more resources. This ratchets up the pressure to fit in and ratchets down the capacity to object, which makes it all the more difficult to distinguish between success and failure: moral victories can look like punishment, while ethically questionable products earn big bonuses. The tensions that arise from this must be worked through with one eye on process, certainly, but with the other eye squarely focused on outcomes, both short- and long-term, both inside and outside companies, and as both employees and members of a much broader society.

We saw these tensions when the co-founder of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the renowned AI researcher Fei-Fei Li, became notorious while working at Google for warning in a leaked email that Googlers should not publicly discuss the role of the company's AI products in constructing a system for facial analysis on military drones. Rather than using her considerable sway to advocate against a military contract for an obviously ethically troubling technology, Li argued to colleagues that discussing Project Maven publicly would damage the positive image the company had cultivated through its talk of humanistic AI.

Similarly, when human rights legal scholar Philip Alston said, half-jokingly, from the 2018 AI Now symposium stage, "I want to strangle ethics," he was not implying that he wished people and companies to be less ethical, but rather that ethics (as opposed to, say, a human rights legal framework) is typically approached as a non-normative, open-ended, undefined, and unaccountable endeavor focused on achieving a robust process rather than a substantive outcome. Oddly enough, Alston's take on ethics is copacetic with Li's: ethics as a series of processes appears to require no substantive commitments to just outcomes.

For better or worse, the parameters of those processes will drive future administrative regulations, algorithmic accountability documentation, investment priorities, and human resource decisions. When we collectively debate how to manage the consequences of digital technologies, we should include more of the perspectives of the people whose labor is shaping this part of our future.
