Among the A.I. Doomsayers


Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers thrumming in the corners. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests. The sorts of objects that could portend a future of tech-enabled ease, or one of constant vigilance.

Grace, the lead researcher at a nonprofit called A.I. Impacts, describes her job as thinking about whether A.I. will destroy the world. She spends her time writing theoretical papers and blog posts on complicated decisions related to a burgeoning subfield known as A.I. safety. She is a nervous smiler, an oversharer, a bit of a mumbler; she's in her thirties, but she looks almost like a teen-ager, with a middle part and a round, open face. The apartment is crammed with books, and when a friend of Grace's came over, one afternoon in November, he spent a while gazing, bemused but nonjudgmental, at a few of the spines: "Jewish Divorce Ethics," "The Jewish Way in Death and Mourning," "The Death of Death." Grace, as far as she knows, is neither Jewish nor dying. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the possessions of the previous occupant, his recently deceased ex-wife, to be left intact. "Sort of a relief, honestly," Grace said. "One set of decisions I don't have to make."

She was spending the afternoon preparing dinner for six: a yogurt-and-cucumber salad, Impossible beef gyros. On one corner of a whiteboard, she had split her pre-party tasks into painstakingly small steps ("Chop salad," "Mix salad," "Mold meat," "Cook meat"); on other parts of the whiteboard, she'd written more gnomic prompts ("Food area," "Objects," "Substances"). Her friend, a cryptographer at Android named Paul Crowley, wore a black T-shirt and black jeans, and had dyed black hair. I asked how they knew each other, and he responded, "Oh, we've crossed paths for years, as part of the scene."

It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists, or, when they're feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. "hacker houses," the San Francisco Standard semi-ironically called the area "Cerebral Valley."

A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves effective accelerationists, or e/accs (pronounced "e-acks"), and they believe A.I. will usher in a utopian future (interstellar travel, the end of disease) as long as the worriers get out of the way. On social media, they troll doomsayers as "decels," "psyops," "basically terrorists," or, worst of all, "regulation-loving bureaucrats." "We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars," a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)

Grace's dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as "a nexus of the Bay Area AI scene." At gatherings like these, it's not uncommon to hear someone strike up a conversation by asking, "What are your timelines?" or "What's your p(doom)?" Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on high-level questions, such as "What roles will AI systems play in society?" and "Will they pursue goals?" When they're not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace's living room.

The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that forecasts the future. Grace offered wine and beer, but most people opted for nonalcoholic canned drinks that defied easy description (a fermented energy drink, a hopped tea). They took their Impossible gyros to Grace's sofa, where they talked until midnight. They were courteous, disagreeable, and surprisingly patient about reconsidering basic assumptions. "You can condense the gist of the worry, seems to me, into a really simple two-step argument," Crowley said. "Step one: We're building machines that might become vastly smarter than us. Step two: That seems pretty dangerous."

"Are we sure, though?" Josh Rosenberg, the C.E.O. of the Forecasting Research Institute, said. "About intelligence per se being dangerous?"

Grace noted that not all intelligent species are threatening: "There are elephants, and yet mice still seem to be doing just fine."

[Cartoon by Erika Sjule and Nate Odenkirk]

"Rabbits are certainly more intelligent than myxomatosis," Michael Nielsen, the quantum-computing expert, said.

Crowley's p(doom) was well above eighty per cent. The others, wary of committing to a number, deferred to Grace, who said that, given "my deep confusion and uncertainty about this, which I think nearly everyone has, at least everyone who's being honest," she could only narrow her p(doom) to between ten and ninety per cent. Still, she went on, a ten-per-cent chance of human extinction is "obviously, if you take it seriously, unacceptably high."

They agreed that, amid the thousands of reactions to ChatGPT, one of the most refreshingly candid assessments came from Snoop Dogg, during an onstage interview. Crowley pulled up the transcript and read aloud. "This is not safe, 'cause the A.I.'s got their own minds, and these motherfuckers are gonna start doing their own shit," Snoop said, paraphrasing an A.I.-safety argument. "Shit, what the fuck?" Crowley laughed. "I have to admit, that captures the emotional tenor much better than my two-step argument," he said. And then, as if to justify the moment of levity, he read out another quote, this one from a 1948 essay by C. S. Lewis: "If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things: praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts, not huddled together like frightened sheep."

Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include "Harry Potter and the Methods of Rationality," a piece of fan fiction running to more than six hundred thousand words, and "The Sequences," a gargantuan series of essays about how to sharpen one's thinking. The informal collective that grew up around these writings, first in the comments and then in the physical world, became known as the rationalist community, a small subculture devoted to avoiding the typical failure modes of human reason, often by arguing from first principles or quantifying potential risks. Nathan Young, a software engineer, told me, "I remember hearing about Eliezer, who was known to be a heavy guy, onstage at some rationalist event, asking the crowd to predict if he could lose a bunch of weight. Then the big reveal: he unzips the fat suit he was wearing. He'd already lost the weight. I think his ostensible point was something about how it's hard to predict the future, but mostly I remember thinking, What an absolute legend."

Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that "Eliezer ages sixteen through twenty" assumed that A.I. was going to be great fun for everyone forever, and wanted it built as soon as possible. In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. "I didn't see why an A.I. would kill everyone, but I felt compelled to systematically study the question," he said. "When I did, I went, Oh, I guess I was wrong." He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.

The existential threat posed by A.I. had always been among the rationalists' central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.

Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about "scheming AIs" that might convince their human handlers they're safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. "This can be a lot, I realize," he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen in passing: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from "Terminator." To be dangerous, A.G.I. doesn't have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we're screwed.
