When computers exceed our ability to understand how the hell they do the things they do

Which would be pretty much now.

Great quote from David Ferrucci, the Lead Researcher of IBM's Watson Project:

"Watson absolutely surprises me. People say: 'Why did it get that one wrong?' I don't know. 'Why did it get that one right?' I don't know."

Essentially, the IBM team came up with a whole whack of fancy algorithms and shoved them into Watson. But they couldn't predict how those algorithms would work in concert with each other, or what emergent effects (call it computational cognitive complexity) would come out of that interplay. The result is the opaque, and not always coherent, way in which Watson gets questions right, and the equally opaque ways in which it gets them wrong.
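For what it's worth, IBM's own descriptions of the DeepQA pipeline sound a lot like this: hundreds of candidate answers get scored by many independent evidence scorers, and a trained model blends those scores into a final confidence. Here's a deliberately toy sketch of that idea (nothing like Watson's actual code; every scorer, weight, and name below is invented) just to show why "why did it pick that answer?" stops having a crisp answer once the decision is spread across a pile of weighted heuristics:

```python
# Toy sketch (not Watson's code): several independent "evidence scorers" whose
# weighted combination picks an answer. No single scorer decides, so there is
# no one place to point at when you ask why a candidate won or lost.

def scorer_term_overlap(question, candidate):
    # Crude heuristic: fraction of question words that also appear in the candidate.
    q_words = set(question.lower().split())
    c_words = set(candidate.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

def scorer_length_prior(question, candidate):
    # Prefer shorter candidates (a stand-in for some learned prior).
    return 1.0 / (1.0 + len(candidate.split()))

def scorer_capitalization(question, candidate):
    # Prefer candidates that look like proper nouns.
    return 1.0 if candidate.istitle() else 0.2

SCORERS = [scorer_term_overlap, scorer_length_prior, scorer_capitalization]
WEIGHTS = [0.5, 0.2, 0.3]  # imagine these fell out of a training run

def confidence(question, candidate):
    # The final confidence is a weighted blend of every scorer's opinion.
    return sum(w * s(question, candidate) for w, s in zip(WEIGHTS, SCORERS))

def best_answer(question, candidates):
    return max(candidates, key=lambda c: confidence(question, c))

if __name__ == "__main__":
    question = "Which Canadian city hosted Expo 67?"
    candidates = ["Montreal", "Toronto", "The city of Vancouver"]
    for c in candidates:
        print(c, round(confidence(question, c), 3))
    print("Picked:", best_answer(question, candidates))
```

Even in this three-scorer toy, the winner depends on how the heuristics happen to balance out for a given question. Now scale that to hundreds of scorers with machine-learned weights and you get Ferrucci's shrug.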

As Watson has revealed, when it errs, it errs really badly (remember it confidently answering "Toronto" to a Final Jeopardy! clue about U.S. cities).

This kind of freaks me out a little. When we ask computers questions we don't know the answers to, we won't know beyond a shadow of a doubt whether a system like Watson is right or wrong. Because we don't know the answer ourselves, and because we don't necessarily know how the computer got its answer, we're going to have to take a tremendous leap of faith that it got it right whenever the answer seems even remotely plausible.

Looking even further ahead, it's becoming painfully obvious that any complex system whose cognition is even remotely superior to (or simply different from) our own will be largely unpredictable. This doesn't bode well for our attempts to engineer safe, comprehensible, and controllable artificial superintelligence.

