In a lecture at the Australian National University's National Security College on October 13, Australia's Department of Home Affairs Secretary Mike Pezzullo enumerated a long and frightening list of security risks the country and the world would have to reckon with over the next hundred years. Pezzullo, who became the first head of the Home Affairs ministry in 2017, took a refreshingly expansive view of the notion of security itself in his speech, interrogating traditional conceptions, with a veritable who's who of Australia's national security establishment in the audience.
Indeed, his was the only speech by a serving senior security official I have heard so far that included a reference to the French post-structuralist Jacques Derrida. Pezzullo's repeated invocation of another French philosopher, Michel Foucault, was marginally less unexpected given Foucault's work on surveillance and biopolitics, topics painfully relevant to the ongoing COVID-19 pandemic.
The pandemic in many ways provided the senior-most bureaucrat responsible for internal security in Australia a perfect entry point for the inclusion of a wide variety of threats, beyond traditional ones, which do not emanate from human actors and therefore can't be deterred or countered through the use of force. (Noting that overarming the state is as bad as underarming it, at one point Pezzullo suggested, quite correctly, that when it comes to Australia's security right now, handwashing is more important than every weapon system in the arsenal of the Australian Defence Force.)
Pezzullo's self-described "apocalyptic" list of risks included catastrophic ones, defined loosely as those with the potential to inflict serious harm to humanity on a global scale, perhaps even spanning generations. Pandemics clearly are catastrophic risks; but so are many others he named, ranging from geomagnetic storms from unusual solar activity and permanent loss of natural diversity to manmade risks, including those posed by advanced technologies such as artificial intelligence (AI) and synthetic biology.
Three fundamental questions arise when it comes to catastrophic risks, none of which is easy to answer: To what extent can probabilities be assigned to these risks within a fixed time horizon, and can such risks be compared? How many resources should a state allocate a priori to mitigating their impact? And what is the role of the national security bureaucracy in managing them?
Let's start with the issue of estimating probabilities of catastrophic risks. Many natural ones, from the risk of a direct hit by an asteroid to the existential risk posed by a supernova explosion, can be readily calculated. In a new book, Oxford scholar Toby Ord computes them: It turns out that the probability of an asteroid bigger than 10 kilometers hitting the earth over the next century is less than one in 150 million. The chance of a supernova depleting the earth's ozone layer by more than 30 percent over the next century is less than one in 50 million. However, when it comes to significant mortal risks from pandemics, probabilists Pasquale Cirillo and Nassim Nicholas Taleb have mathematically established that they are higher than widely assumed.
That said, when it comes to human-generated (anthropogenic) risks, the odds become harder to calculate: Consider Pezzullo's "Terminator" example or, dressed in academese, the problem of an artificial superintelligence of the kind studied by Ord's colleague Nick Bostrom. (In Bostrom's theorizing, it is entirely possible that such an AI could wipe out humanity, leaving no possibility of regeneration in the future, a truly existential risk.) Such an intelligence could naturally arise out of exponential progress in machine learning within the next 10 years, or not in a hundred; much depends on how you see certain technological trends projecting into the future. Absent precise, objectively reliable ways to quantify many anthropogenic risks, pooled expert predictions are often used to arrive at a number. (One such exercise, in 2008, put the chances of human extinction this century at 19 percent.)
This naturally leads to a very practical question: How much of a government's resources should be allocated to meet catastrophic risks, especially when there is a plethora of them competing for money with plausible, on-the-horizon national security challenges, such as defense spending in the face of great power rivals, and there is no obvious way to rank all of the risks side by side? Furthermore, planners, like armies, often tend to prepare for the last contingency they faced. With COVID-19 very much still here, it is likely to animate debates around government spending priorities for some time to come. (However, this is not to say that the possibility of another pandemic after COVID-19 is remote; if anything, the systematic destruction of animal habitats and climate change make it very possible that another deadly virus will emerge in the foreseeable future. The point here is that to focus on that possibility alone, at the expense of other risks, would be foolish.)
But fundamentally, there's a conceptual issue at hand. A catastrophic risk is almost by definition something with significant second- and higher-order effects. (The ongoing global economic decimation from COVID-19 and the attendant possibility of political chaos are cases in point.) Given that many such risks are "distributed, networked and interconnected," as Pezzullo described them, estimating the cost of their impact (that is, pricing the risk) is extremely hard, though not impossible. Add to this the fact that different potential catastrophic risks will play out differently: For example, while the ongoing pandemic has spared the earth's environment, that may not be the case with a supervolcanic eruption.
When it comes to mitigation strategies too, there are no silver bullets. Take the issue of machine superintelligence as an example. Beyond repeated calls for responsible, ethical AI research and hysteria around killer robots, the fact of the matter is that a large part of the cutting-edge research in this direction is taking place in the private sector, whose compliance with a voluntary set of regulations, should governments put them in place, is uncertain. While it is common in some circles to note, as Pezzullo did in his lecture, that risks acquire added lethality when they transmit themselves through networks (the very reason why social distancing holds the key to beating the coronavirus, as Taleb and collaborators prophetically argued in January), a uniform strategy of shutting networks down in the face of an incipient threat could also backfire in unexpected ways. Think of the economic costs of a large-scale internet shutdown, for example.
Finally comes the role of the national security bureaucracy in managing catastrophic non-traditional threats. Here, too, there are two sides to the coin. As some security studies scholars have long argued, declaring a threat (such as a pandemic) to be a national security one, to "securitize" it, has obvious downsides. For one, such a move restricts the flow of information, which, as we saw with China's initial reaction to the coronavirus, is singularly detrimental. At the same time, denoting something as a security threat also stands to attract significant resources to meet it and to centralize response authority. And while Pezzullo, in his lecture, rightly argued that the definition of national security should not be broadened to include all policy discourse, the fact of the matter is that the national security apparatus, especially the intelligence agencies, has resources (for example, intelligence collectors at global hotspots) that could significantly help mitigate emerging threats.
In the end, the answer to many of these questions may indeed lie with a proposal of the Australian Home Affairs secretary: that of an "extended state," a network of governmental organizations, businesses, civil society groups, and others that rises to meet security challenges rather than leaving that task to the state alone. Fleshing that idea out fully to incorporate a range of catastrophic risks, whether mitigating them or dealing with them when they manifest, remains an interesting exercise.