Superintelligence: From Chapter Eight of Films from the …

This concern would often come out in conversations around meals. I'd be sitting next to some engaging person, having what seemed like a normal conversation, when they'd ask, "So, do you believe in superintelligence?" As something of an agnostic, I'd either prevaricate or express some doubts as to the plausibility of the idea. In most cases, they'd then proceed to challenge any doubts that I might express, and try to convert me to becoming a superintelligence believer. I sometimes had to remind myself that I was at a scientific meeting, not a religious convention.

Part of my problem with these conversations was that, despite respecting Bostrom's brilliance as a philosopher, I don't fully buy into his notion of superintelligence, and I suspect that many of my overzealous dining companions could spot this a mile off. I certainly agree that the trends in AI-based technologies suggest we are approaching a tipping point in areas like machine learning and natural language processing. And the convergence we're seeing between AI-based algorithms, novel processing architectures, and advances in neurotechnology is likely to lead to some stunning advances over the next few years. But I struggle with what seems to me to be a very human idea: that narrowly defined intelligence and a particular type of power will lead to world domination.

Here, I freely admit that I may be wrong. And to be sure, we're seeing far more sophisticated ideas begin to emerge around what the future of AI might look like: physicist Max Tegmark, for one, outlines a compelling vision in his book Life 3.0. The problem, though, is that we're all looking into a crystal ball as we gaze into the future of AI, trying to make sense of shadows and portents that, to be honest, none of us really understand. When it comes to some of the more extreme imaginings of superintelligence, two things in particular worry me. One is the challenge we face in differentiating between what is imaginable and what is plausible when we think about the future. The other, looking back to chapter five and the movie Limitless, is how we define and understand intelligence in the first place.


With a creative imagination, it is certainly possible to envision a future where AI takes over the world and crushes humanity. This is the Skynet scenario of the Terminator movies, or the constraining virtual reality of The Matrix. But our technological capabilities remain light-years away from being able to create such futures, even if we do create machines that can design future generations of smarter machines. And it's not just our inability to write clever-enough algorithms that's holding us back. For human-like intelligence to emerge from machines, we'd first have to come up with radically different computing substrates and architectures. Our quaint, two-dimensional digital circuits are about as useful to superintelligence as the brain cells of a flatworm are to solving the unified theory of everything; it's a good start, but there's a long way to go.

Here, what is plausible, rather than simply imaginable, is vitally important for grounding conversations around what AI will and won't be able to do in the near future. Bostrom's ideas of superintelligence are intellectually fascinating, but they're currently scientifically implausible. On the other hand, Max Tegmark and others are beginning to develop ideas that have more of a ring of plausibility to them, while still painting a picture of a radically different future from the world we live in now (and in Tegmark's case, one where there is a clear pathway to strong AGI leading to a vastly better future). But in all of these cases, future AI scenarios depend on an understanding of intelligence that may end up being deceptive.
