The Madness of the Race to Build Artificial General Intelligence


A few weeks ago, I was having a chat with my neighbor Tom, an amateur chemist who conducts experiments in his apartment. I have a longtime fascination with chemistry, and I always enjoy talking with him. But this conversation was scary. If his latest experiment succeeded, he informed me, it might have some part to play in curing cancer. If it failed, however, there was a reasonable chance, according to his calculations, that it would trigger an explosion leveling the entire apartment complex.

Perhaps Tom was lying, or maybe he's delusional. But what if he really was just one test-tube clink away from blowing me and dozens of our fellow building residents sky-high? What should one do in this situation? After a brief deliberation, I decided to call 911. The police rushed over, searched his apartment and decided, after an investigation, to confiscate all of his chemistry equipment and bring him in for questioning.

The above scenario is a thought experiment. As far as I know, no one in my apartment complex is an amateur chemist experimenting with highly combustible compounds. I've spun this fictional tale because it's a perfect illustration of the situation that we, all of us, are in with respect to the AI companies trying to build artificial general intelligence, or AGI. The list of such companies includes DeepMind, OpenAI, Anthropic and xAI, all of which are backed by billions of dollars. Many leading figures at these very companies have claimed, in public, while standing in front of microphones, that one possible outcome of the technology they are explicitly trying to build is that everyone on Earth dies. The only sane response is to immediately call 911 and report them to the authorities. They are saying that their own technology might kill you, me, our family members and friends: the entire human population. And almost no one is freaking out about this.

It's crucial to note that you don't have to believe AGI will actually kill everyone on Earth to be alarmed; I myself am skeptical of these claims. Even if one suspects Tom of lying about his chemistry experiments, the mere fact that he told me his actions could kill everyone in our apartment complex is enough to justify dialing 911.


What exactly are AI companies saying about the potential dangers of AGI? During a 2023 talk, OpenAI CEO Sam Altman was asked whether AGI could destroy humanity, and he responded that "the bad case, and I think this is important to say, is, like, lights out for all of us." In earlier interviews, he declared that "I think AI will most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning," and that "probably AI will kill us all, but until then we're going to turn out a lot of great students." The audience laughed at this. But was he joking? If he was, he was also serious: the OpenAI website itself states in a 2023 article that the risks of AGI may be "existential," meaning roughly that they could wipe out the entire human species. Another article on the website affirms that "a misaligned superintelligent AGI could cause grievous harm to the world."

In a 2015 post on his personal blog, Altman wrote that the "development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." Whereas AGI refers to any artificial system that is at least as competent as humans in every cognitive domain of importance, such as science, mathematics, social manipulation and creativity, a superhuman machine intelligence, or SMI, is a type of AGI that is superhuman in its capabilities. Many researchers in the field of AI safety believe that once we have AGI, superintelligent machines will follow very shortly after. The reason is that designing increasingly capable machines is itself an intellectual task, so the smarter these systems become, the better they will be at designing even smarter systems. Hence, the first AGIs will design the next generation of even smarter AGIs, until those systems reach superhuman levels.

Again, one doesn't need to accept this line of reasoning to be alarmed when the CEO of the most powerful AI company that's trying to build AGI says that superintelligent machines might kill us.

Just the other day, an OpenAI employee who goes by "roon" on Twitter/X tweeted that "things are accelerating. Pretty much nothing needs to change course to achieve AGI," and that worrying about timelines (that is, about whether AGI will be built later this year or 10 years from now) is "idle anxiety, outside your control. You should be anxious about stupid mortal things instead. Do your parents hate you? Does your wife love you?" In other words, AGI is right around the corner and its development cannot be stopped. Once created, it will bring about the end of the world as we know it, perhaps by killing everyone on the planet. Hence, you should be thinking not so much about when exactly this might happen as about the more mundane things that are meaningful to us humans: Do we have our lives in order? Are we on good terms with our friends, family and partners? When a plane begins to nosedive toward the ground, most people turn to their partner and say "I love you," or try to send a few last text messages to loved ones to say goodbye. That, according to someone at OpenAI, is what we should be doing right now.

A similar sentiment has been echoed by other notable figures at OpenAI, such as Altman's co-founder Ilya Sutskever. "The future is going to be good for the AIs regardless," he said in 2019. "It would be nice if it would be good for humans as well." He added, ominously, that "I think it's pretty likely the entire surface of the Earth will be covered with solar panels and data centers" once we create AGI, referencing the idea that AGI is dangerous partly because it will seek to harness every resource it can. In the process, humanity could be destroyed as an unintended side effect. Indeed, Sutskever tells us that the AGI his own company is trying to build probably isn't

going to actively hate humans and want to harm them, but it's just going to be too powerful, and I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and have a lot of affection for them, but when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs, which are truly autonomous and operating on their own behalf.

The good folks (by which I mean quasi-homicidal folks) at OpenAI aren't the only ones being honest about how their work could lead to the annihilation of our species. Dario Amodei, the CEO of Anthropic, which recently received $4 billion in funding from Amazon, said in 2017 that "there's a long tail of things of varying degrees of badness that could happen" after building AGI. "I think at the extreme end is the fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen." Similarly, Elon Musk, a co-founder of OpenAI who recently started his own AGI company, xAI, declared in 2023 that "one of the biggest risks to the future of civilization is AI," having previously said that being "very close to the cutting edge in AI scares the hell out of me." Why? Because advanced AI "is capable of vastly more than almost anyone knows and the rate of improvement is exponential."

Even the CEO of Google, Sundar Pichai, told Sky News last year that advanced AI "can be very harmful if deployed wrongly," and that with respect to safety issues, "we don't have all the answers there yet, and the technology is moving fast. So does that keep me up at night? Absolutely."

Google currently owns DeepMind, which was co-founded in 2010 by a computer scientist named Shane Legg. During a talk one year before DeepMind was founded, Legg claimed that "if we can build human level AI, then we can almost certainly scale up to well above human level. A machine well above human level will understand its design and be able to design even more powerful machines," which gestures back at the idea that AGI could take over the job of designing AI systems even more advanced than itself. "We have almost no idea how to deal with this," he adds. During the same talk, Legg said that we aren't going to develop a theory of how to keep AGI safe before AGI itself is developed. "I've spoken to a bunch of people," he reports, and "none of them, that I've ever spoken to, think they will have a practical theory of friendly artificial intelligence in about 10 years' time. We have no idea how to solve this problem."


That's worrying, because many researchers at the major AI companies argue that, as roon suggested, AGI may be just around the corner. In a recent interview, Demis Hassabis, another co-founder of DeepMind, says that "when we started DeepMind back in 2010, we thought of it as a 20-year project, and actually I think we're on track. So, I wouldn't be surprised if we had AGI-like systems within the next decade." When asked what it would take to make sure that an AGI smarter than a human is safe, his answer was, as one commentator put it, "a grab bag of half-baked ideas." Maybe, he says, we can use less capable AIs to help us keep the AGIs in check. But maybe that won't work; who knows? Either way, DeepMind and the other AI companies are plowing ahead with their efforts to build AGI, while simultaneously acknowledging, in public, on record, that their products could destroy the entire world.

This is, in a word, madness. If you're driving in a car with me and I tell you that earlier today I attached a bomb to the bottom of the car, and that it might (or might not!) go off if we hit a pothole, then whether or not you believe me, you should be extremely alarmed. That is a very scary thing to hear someone say at 60 miles an hour on a highway. You should, indeed, turn to me and scream, "Stop this damn car right now. Let me out immediately; I don't want to ride with you anymore!"

Right now, we're in that car, with these AI companies driving. They have turned to us on numerous occasions over the past decade and a half and admitted that they've attached a bomb to the car, and that it might (or might not!) explode in the near future, killing everyone inside. That's an outrageous situation to be in, and more people should be screaming at them to stop what they're doing immediately. More people should be dialing 911 and reporting the incident to the authorities, as I did with Tom in the fictional scenario above.

I do not know whether AGI will kill everyone on Earth; I'm more focused on the profound harms that these AI companies have already caused through worker exploitation, massive intellectual property theft, algorithmic bias and so on. The point is that it is completely unacceptable that the people leading or working for these AI companies believe that what they're doing could kill you, your family, your friends and even your pets (who will feed your fluffy companions if you cease to exist?), yet continue to do it anyway. One doesn't need to completely buy into the claim that AGI might destroy humanity to see that someone who says their work might destroy humanity should not be doing whatever it is they're doing. As I've shown before, there have been several episodes in recent history where scientists declared that we were on the verge of creating a technology that would destroy the world, and nothing came of it. But that's irrelevant. If someone tells you that they have a gun and might shoot you, that should be more than enough to sound the alarm, even if you believe that they don't, in fact, have a gun hidden under their bed.

Either these AI companies need to show, right now, that the systems they're building are completely safe, or they need to stop, right now, trying to build those systems. Something needs to change about the situation immediately.

