Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers thrumming in the corners. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests. The sorts of objects that could portend a future of tech-enabled ease, or one of constant vigilance.
Grace, the lead researcher at a nonprofit called A.I. Impacts, describes her job as thinking about "whether A.I. will destroy the world." She spends her time writing theoretical papers and blog posts on complicated decisions related to a burgeoning subfield known as A.I. safety. She is a nervous smiler, an oversharer, a bit of a mumbler; she's in her thirties, but she looks almost like a teen-ager, with a middle part and a round, open face. The apartment is crammed with books, and when a friend of Grace's came over, one afternoon in November, he spent a while gazing, bemused but nonjudgmental, at a few of the spines: "Jewish Divorce Ethics," "The Jewish Way in Death and Mourning," "The Death of Death." Grace, as far as she knows, is neither Jewish nor dying. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the possessions of the previous occupant, his recently deceased ex-wife, to be left intact. "Sort of a relief, honestly," Grace said. "One set of decisions I don't have to make."
She was spending the afternoon preparing dinner for six: a yogurt-and-cucumber salad, Impossible beef gyros. On one corner of a whiteboard, she had split her pre-party tasks into painstakingly small steps ("Chop salad," "Mix salad," "Mold meat," "Cook meat"); on other parts of the whiteboard, she'd written more gnomic prompts ("Food area," "Objects," "Substances"). Her friend, a cryptographer at Android named Paul Crowley, wore a black T-shirt and black jeans, and had dyed black hair. I asked how they knew each other, and he responded, "Oh, we've crossed paths for years, as part of the scene."
It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists, or, when they're feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. hacker houses, the San Francisco Standard semi-ironically called the area "Cerebral Valley."
A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves effective accelerationists, or e/accs (pronounced "e-acks"), and they believe A.I. will usher in a utopian future (interstellar travel, the end of disease) as long as the worriers get out of the way. On social media, they troll doomsayers as "decels," "psyops," "basically terrorists," or, worst of all, "regulation-loving bureaucrats." "We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars," a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)
Grace's dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as "a nexus of the Bay Area AI scene." At gatherings like these, it's not uncommon to hear someone strike up a conversation by asking, "What are your timelines?" or "What's your p(doom)?" Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on high-level questions, such as "What roles will AI systems play in society?" and "Will they pursue goals?" When they're not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace's living room.
The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that forecasts the future. Grace offered wine and beer, but most people opted for nonalcoholic canned drinks that defied easy description (a fermented energy drink, a hopped tea). They took their Impossible gyros to Grace's sofa, where they talked until midnight. They were courteous, disagreeable, and surprisingly patient about reconsidering basic assumptions. "You can condense the gist of the worry, seems to me, into a really simple two-step argument," Crowley said. "Step one: We're building machines that might become vastly smarter than us. Step two: That seems pretty dangerous."
"Are we sure, though?" Josh Rosenberg, the C.E.O. of the Forecasting Research Institute, said. "About intelligence per se being dangerous?"
Grace noted that not all intelligent species are threatening: "There are elephants, and yet mice still seem to be doing just fine."
"Rabbits are certainly more intelligent than myxomatosis," Michael Nielsen, the quantum-computing expert, said.
Crowley's p(doom) was well above eighty per cent. The others, wary of committing to a number, deferred to Grace, who said that, given "my deep confusion and uncertainty about this, which I think nearly everyone has, at least everyone who's being honest," she could only narrow her p(doom) to between ten and ninety per cent. Still, she went on, a ten-per-cent chance of human extinction is "obviously, if you take it seriously, unacceptably high."
They agreed that, amid the thousands of reactions to ChatGPT, one of the most refreshingly candid assessments came from Snoop Dogg, during an onstage interview. Crowley pulled up the transcript and read aloud. "This is not safe, 'cause the A.I.s got their own minds, and these motherfuckers are gonna start doing their own shit," Snoop said, paraphrasing an A.I.-safety argument. "Shit, what the fuck?" Crowley laughed. "I have to admit, that captures the emotional tenor much better than my two-step argument," he said. And then, as if to justify the moment of levity, he read out another quote, this one from a 1948 essay by C. S. Lewis: "If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things: praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts; not huddled together like frightened sheep."
Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include "Harry Potter and the Methods of Rationality," a piece of fan fiction running to more than six hundred thousand words, and "The Sequences," a gargantuan series of essays about how to sharpen one's thinking. The informal collective that grew up around these writings (first in the comments, then in the physical world) became known as the rationalist community, a small subculture devoted to avoiding the typical failure modes of human reason, often by arguing from first principles or quantifying potential risks. Nathan Young, a software engineer, told me, "I remember hearing about Eliezer, who was known to be a heavy guy, onstage at some rationalist event, asking the crowd to predict if he could lose a bunch of weight. Then the big reveal: he unzips the fat suit he was wearing. He'd already lost the weight. I think his ostensible point was something about how it's hard to predict the future, but mostly I remember thinking, What an absolute legend."
Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that "Eliezer ages sixteen through twenty" assumed that A.I. was going to be great fun for everyone forever, and wanted it built as soon as possible. In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. "I didn't see why an A.I. would kill everyone, but I felt compelled to systematically study the question," he said. "When I did, I went, 'Oh, I guess I was wrong.'" He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.
The existential threat posed by A.I. had always been among the rationalists' central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.
Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about "scheming AIs" that might convince their human handlers they're safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. "This can be a lot, I realize," he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen in passing: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from "Terminator." To be dangerous, A.G.I. doesn't have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we're screwed.