University of Oxford
Oli Scarff/Getty Images
Oxford and Cambridge, the oldest universities in Britain and two of the oldest in the world, are keeping a watchful eye on the buzzy field of artificial intelligence (AI), which has been hailed as a technology that will bring about a new industrial revolution and change the world as we know it.
Over the last few years, each of the centuries-old institutions has pumped millions of pounds into researching the possible risks associated with machines of the future.
Clever algorithms can already outperform humans at certain tasks. For example, they can beat the best human players in the world at incredibly complex games like chess and Go, and they're able to spot cancerous tumors in a mammogram far quicker than a human clinician can. Machines can also tell the difference between a cat and a dog, or determine a random person's identity just by looking at a photo of their face. They can also translate languages, drive cars, and keep your home at the right temperature. But generally speaking, they're still nowhere near as smart as the average 7-year-old.
The main issue is that AI can't multitask. For example, a game-playing AI can't yet paint a picture. In other words, AI today is very "narrow" in its intelligence. However, computer scientists at the likes of Google and Facebook are aiming to make AI more "general" in the years ahead, and that's got some big thinkers deeply concerned.
Nick Bostrom, a 47-year-old Swedish-born philosopher and polymath, founded the Future of Humanity Institute (FHI) at the University of Oxford in 2005 to assess how dangerous AI and other potential threats might be to the human species.
In the main foyer of the institute, complex equations beyond most people's comprehension are scribbled on whiteboards next to words like "AI safety" and "AI governance." Pensive students from other departments pop in and out as they go about daily routines.
It's rare to get an interview with Bostrom, a transhumanist who believes that we can and should augment our bodies with technology to help eliminate ageing as a cause of death.
"I'm quite protective about research and thinking time so I'm kind of semi-allergic to scheduling too many meetings," he says.
Tall, skinny and clean-shaven, Bostrom has riled some AI researchers with his openness to entertain the idea that one day in the not-so-distant future, machines will be the top dog on Earth. He doesn't go as far as to say when that day will be, but he thinks that it's potentially close enough for us to be worrying about it.
Swedish philosopher Nick Bostrom is a polymath and the author of "Superintelligence."
The Future of Humanity Institute
If and when machines possess human-level artificial general intelligence, Bostrom thinks they could quickly go on to make themselves even smarter and become superintelligent. At this point, it's anyone's guess what happens next.
The optimist says the superintelligent machines will free up humans from work and allow them to live in some sort of utopia where there's an abundance of everything they could ever desire. The pessimist says they'll decide humans are no longer necessary and wipe them all out.

Billionaire Elon Musk, who has a complex relationship with AI researchers, recommended Bostrom's book "Superintelligence" on Twitter.
Bostrom's institute has been backed with roughly $20 million since its inception. Around $14 million of that has come from the Open Philanthropy Project, a San Francisco-headquartered research and grant-making foundation. The rest of the money has come from the likes of Musk and the European Research Council.
Located in an unassuming building down a winding road off Oxford's main shopping street, the institute is full of mathematicians, computer scientists, physicians, neuroscientists, philosophers, engineers and political scientists.
Eccentric thinkers from all over the world come here to have conversations over cups of tea about what might lie ahead. "A lot of people here are some kind of polymath and they are often interested in more than one field," says Bostrom.
The FHI team has scaled from four people to about 60 people over the years. "In a year, or a year and a half, we will be approaching 100 (people)," says Bostrom. The culture at the institute is a blend of academia, start-up and NGO, according to Bostrom, who says it results in an "interesting creative space of possibilities" where there is "a sense of mission and urgency."
If AI somehow became much more powerful, there are three main ways in which it could end up causing harm, according to Bostrom.
"Each of these categories is a plausible place where things could go wrong," says Bostrom.
With regards to machines turning against humans, Bostrom says that if AI becomes really powerful then "there's a potential risk from the AI itself that it does something different than anybody intended that could then be detrimental."
In terms of humans doing bad things to other humans with AI, there's already a precedent there as humans have used other technological discoveries for the purpose of war or oppression. Just look at the atomic bombings of Hiroshima and Nagasaki, for example. Figuring out how to reduce the risk of this happening with AI is worthwhile, Bostrom says, adding that it's easier said than done.
Asked if he is more or less worried about the arrival of superintelligent machines than he was when his book was published in 2014, Bostrom says the timelines have contracted.
"I think progress has been faster than expected over the last six years with the whole deep learning revolution and everything," he says.
When Bostrom wrote the book, there weren't many people in the world seriously researching the potential dangers of AI. "Now there is this small, but thriving field of AI safety work with a number of groups," he says.
While there's potential for things to go wrong, Bostrom says it's important to remember that there are exciting upsides to AI and he doesn't want to be viewed as the person predicting the end of the world.
"I think there is now less need to emphasize primarily the downsides of AI," he says, stressing that his views on AI are complex and multifaceted.
Bostrom says the aim of FHI is "to apply careful thinking to big picture questions for humanity." The institute is not just looking at the next year or the next 10 years, it's looking at everything in perpetuity.
"AI has been an interest since the beginning and for me, I mean, all the way back to the 90s," says Bostrom. "It is a big focus, you could say obsession almost."
The rise of technology is one of several plausible developments that could change the "human condition," in Bostrom's view. AI is one of those technologies, but there are also groups at the FHI looking at biosecurity (viruses, etc.), molecular nanotechnology, surveillance tech, genetics, and biotech (human enhancement).
A scene from 'Ex Machina.'
Source: Universal Pictures | YouTube
When it comes to AI, the FHI has two groups: one does technical work on the AI alignment problem, while the other looks at governance issues that will arise as machine intelligence becomes increasingly powerful.
The AI alignment group is developing algorithms and trying to figure out how to ensure complex intelligent systems behave as we intend them to behave. That involves aligning them with "human preferences," says Bostrom.
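To make that idea concrete, here is a minimal, hypothetical sketch (not FHI's actual method, and all names here are invented for illustration) of one common approach to aligning a system with human preferences: fit a reward model so that its predicted probability of a human preferring one outcome over another, under a Bradley-Terry model, matches recorded pairwise choices.

```python
import math
import random

random.seed(0)

# Hypothetical toy example of preference learning. Each outcome is a single
# feature x; a simulated human secretly prefers outcomes with higher
# W_TRUE * x. We try to recover a reward weight w from pairwise choices alone.

W_TRUE = 2.0

def human_prefers(xa, xb):
    """Simulated human label: True if outcome a is preferred over b."""
    return W_TRUE * xa > W_TRUE * xb

# Collect pairwise comparison data.
data = [(xa, xb, human_prefers(xa, xb))
        for xa, xb in ((random.uniform(-1, 1), random.uniform(-1, 1))
                       for _ in range(200))]

# Fit w by gradient ascent on the Bradley-Terry log-likelihood:
#   P(a preferred over b) = sigmoid(w * (xa - xb))
w, lr = 0.0, 0.5
for _ in range(100):
    grad = 0.0
    for xa, xb, a_won in data:
        p = 1.0 / (1.0 + math.exp(-w * (xa - xb)))
        grad += ((1.0 if a_won else 0.0) - p) * (xa - xb)
    w += lr * grad / len(data)

# The learned reward should rank outcomes the same way the human does.
agreement = sum((w * xa > w * xb) == pref for xa, xb, pref in data) / len(data)
```

Real alignment research deals with far richer preference data and far more complex systems, but the core loop has the same shape: model human choices, fit a reward, and check that the system's rankings agree with the human's.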
Roughly 66 miles away at the University of Cambridge, academics are also looking at threats to human existence, albeit through a slightly different lens.
Researchers at the Center for the Study of Existential Risk (CSER) are assessing biological weapons, pandemics, and, of course, AI.
We are dedicated to the study and mitigation of risks that could lead to human extinction or civilization collapse.
Centre for the Study of Existential Risk (CSER)
"One of the most active areas of activities has been on AI," said CSER co-founder Lord Martin Rees from his sizable quarters at Trinity College in an earlier interview.
Rees, a renowned cosmologist and astrophysicist who was the president of the prestigious Royal Society from 2005 to 2010, is retired so his CSER role is voluntary, but he remains highly involved.
It's important that any algorithm deciding the fate of human beings can be explained to human beings, according to Rees. "If you are put in prison or deprived of your credit by some algorithm then you are entitled to have an explanation so you can understand. Of course, that's the problem at the moment because the remarkable thing about these algorithms like AlphaGo (Google DeepMind's Go-playing algorithm) is that the creators of the program don't understand how it actually operates. This is a genuine dilemma and they're aware of this."
The idea for CSER was conceived in the summer of 2011 during a conversation in the back of a Copenhagen cab between Cambridge academic Huw Price and Skype co-founder Jaan Tallinn, whose donations account for 7-8% of the center's overall funding and equate to hundreds of thousands of pounds.
"I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer," Price wrote of his taxi ride with Tallinn. "I'd never met anyone who regarded it as such a pressing cause for concern let alone anyone with their feet so firmly on the ground in the software business."
University of Cambridge
Geography Photos/UIG via Getty Images
CSER is studying how AI could be used in warfare, as well as analyzing some of the longer term concerns that people like Bostrom have written about. It is also looking at how AI can turbocharge climate science and agricultural food supply chains.
"We try to look at both the positives and negatives of the technology because our real aim is making the world more secure," says Sen higeartaigh, executive director at CSER and a former colleague of Bostrom's. higeartaigh, who holds a PhD in genomics from Trinity College Dublin, says CSER currently has three joint projects on the go with FHI.
External advisors include Bostrom and Musk, as well as other AI experts like Stuart Russell and DeepMind's Murray Shanahan. The late Stephen Hawking was also an advisor.
The Leverhulme Center for the Future of Intelligence (CFI) was opened at Cambridge in 2016 and today it sits in the same building as CSER, a stone's throw from the punting boats on the River Cam. The building isn't the only thing the centers share: staff overlap too, and there's a lot of research that spans both departments.
Backed with over £10 million from the grant-making Leverhulme Foundation, the center is designed to support "innovative blue skies thinking," according to Ó hÉigeartaigh, its co-developer.
Was there really a need for another one of these research centers? Ó hÉigeartaigh thinks so. "It was becoming clear that there would be, as well as the technical opportunities and challenges, legal topics to explore, economic topics, social science topics," he says.
"How do we make sure that artificial intelligence benefits everyone in a global society? You look at issues like who's involved in the development process? Who is consulted? How does the governance work? How do we make sure that marginalized communities have a voice?"
The aim of CFI is to get computer scientists and machine-learning experts working hand in hand with people from policy, social science, risk and governance, ethics, culture, critical theory and so on. As a result, the center should be able to take a broad view of the range of opportunities and challenges that AI poses to societies.
"By bringing together people who think about these things from different angles, we're able to figure out what might be properly plausible scenarios that are worth trying to mitigate against," said higeartaigh.