Human Compatible by Stuart Russell review – AI and our future | The Guardian

Here's a question scientists might ask more often: what if we succeed? That is, how will the world change if we achieve what we're striving for? Tucked away in offices and labs, researchers can develop tunnel vision, the rosiest of outlooks for their creations. The unintended consequences and shoddy misuses become afterthoughts: messes for society to clean up later.

Today those messes spread far and wide: global heating, air pollution, plastics in the oceans, nuclear waste and babies with badly rewritten DNA. All are products of neat technologies that solve old problems by creating new ones. In the inevitable race to be first to invent, the downsides are dismissed, unexplored or glossed over.

In 1995, Stuart Russell wrote the book on AI. Co-authored with Peter Norvig, Artificial Intelligence: A Modern Approach became one of the most popular course texts in the world (Norvig worked for Nasa; in 2001, he joined Google). In the final pages of the last chapter, the authors posed the question themselves: what if we succeed? Their answer was hardly a ringing endorsement. "The trends seem not to be too terribly negative," they offered. A lot has happened since: Google and Facebook for starters.

In Human Compatible, Russell returns to the question and this time does not hold back. The result is surely the most important book on AI this year. Perhaps, as Richard Brautigan's poem has it, life is good when we are all watched over by machines of loving grace. But Russell, a professor at the University of California, Berkeley, sees darker eventualities. Creating machines that surpass our intelligence would be the biggest event in human history. "It may also be the last," he warns. Here he makes the convincing case that how we choose to control AI is possibly the most important question facing humanity.

Russell has picked his moment well. Tens of thousands of the world's brightest minds are now building AIs. Most work on one-trick ponies: the narrow AIs that process speech, translate languages, spot people in crowds, diagnose diseases, or whip people at games from Go to Starcraft II. But these are a far cry from the field's ultimate goal: general purpose AIs that match, or surpass, the broad-based brainpower of humans.

It is not a ludicrous ambition. From the start, DeepMind, the AI group owned by Alphabet, Google's parent company, set out to "solve intelligence" and then use that to solve everything else. In July, Microsoft signed a $1bn contract with OpenAI, a US outfit, to build an AI that mimics the human brain. It is a high-stakes race. As Vladimir Putin said: "whoever becomes the leader in AI will become the ruler of the world."

Russell doesn't claim we are nearly there. In one section he sets out the formidable problems computer engineers face in creating human-level AI. Machines must know how to turn words into coherent, reliable knowledge; they must learn how to discover new actions and order them appropriately (boil the kettle, grab a mug, toss in a teabag). And like us, they must manage their cognitive resources so they can reach good decisions fast. These are not the only hurdles, but they give a flavour of the task ahead. Russell suspects it will keep researchers busy for another 80 years, but stresses the timing is impossible to predict.

Even with apocalypse camped on the horizon, this is a wry and witty tour of intelligence and where it may take us. And where exactly is that? A machine that masters all the above would be a formidable decision maker in the real world, Russell says. It would absorb vast amounts of information from the internet, TV, radio, satellites and CCTV, and with it gain a more sophisticated understanding of the world and its inhabitants than any human could ever hope for.

What could possibly go right? In education, AI tutors would maximise the potential of every child. In medicine, AIs would master the vast complexity of the human body, letting us banish disease. As digital personal assistants they would put Siri and Alexa to shame: "You would, in effect, have a high-powered lawyer, accountant, and political advisor on call at any time."

And what of the downsides? Without serious progress on AI safety and regulation, Russell foresees messes aplenty, and his chapter on the misuses of AI is grim reading. Advanced AI would hand governments such extraordinary powers of surveillance, persuasion and control that it would make the Stasi look like amateurs. And while Terminator-style killer robots are not about to eradicate humanity, drones that select and kill individuals based on their faceprints, skin colour or uniforms are entirely feasible. As for jobs, we may no longer make a living by providing physical or mental labour, but we can still supply our humanity. Russell notes: "We will need to become good at being human."

What's worse than a society-destroying AI? A society-destroying AI that won't switch off. It's a terrifying, seemingly absurd prospect that Russell devotes much time to. The idea is that smart machines will suss out, as per HAL in 2001: A Space Odyssey, that goals are hard to achieve if someone pulls the plug. Give a superintelligent AI a clear task (to make the coffee, say) and its first move will be to disable its off switch. The answer, Russell argues, lies in a radical new approach where AIs have some doubt about their goals, and so will never object to being shut down. He moves on to advocate "provably beneficial" AI, whose algorithms are mathematically proven to benefit their human users. Suffice to say this is a work in progress. How will my AI deal with yours?

Let's be clear: there are plenty of AI researchers who ridicule such fears. After the philosopher Nick Bostrom highlighted the potential dangers of general purpose AI in Superintelligence (2014), a US thinktank, the Information Technology and Innovation Foundation, gave its Luddism award to "alarmists touting an artificial intelligence apocalypse". This was indicative of the dismal debate around AI safety, which is on the brink of descending into tribalism. The danger that comes across here is less an abrupt destruction of the species, more an inexorable enfeeblement: a loss of striving and understanding, which erodes the foundations of civilisation and leaves us passengers on a cruise ship run by machines, on a cruise that goes on forever.

Human Compatible is published by Allen Lane (£25). To order a copy go to guardianbookshop.com or call 020-3176 3837. Free UK p&p over £15, online orders only. Phone orders min p&p of £1.99.
