AI Is Harder Than We Think: 4 Key Fallacies in AI Research

Posted: May 9, 2021 at 11:03 am

Artificial intelligence has been all over the headlines for nearly a decade, as systems have made quick progress on long-standing AI challenges like image recognition, natural language processing, and games. Tech companies have woven machine learning algorithms into search and recommendation engines and facial recognition systems, and OpenAI's GPT-3 and DeepMind's AlphaFold promise even more practical applications, from writing to coding to scientific discoveries.

Indeed, we're in the midst of an AI spring, with investment in the technology burgeoning and an overriding sentiment of optimism and possibility towards what it can accomplish and when.

This time may feel different than previous AI springs due to the aforementioned practical applications and the proliferation of narrow AI into technologies many of us use every day, like our smartphones, TVs, cars, and vacuum cleaners, to name just a few. But it's also possible that we're riding a wave of short-term progress in AI that will soon become part of the ebb and flow in advancement, funding, and sentiment that has characterized the field since its founding in 1956.

AI has fallen short of many predictions made over the last few decades; 2020, for example, was heralded by many as the year self-driving cars would start filling up roads, seamlessly ferrying passengers around as they sat back and enjoyed the ride. But the problem has been more difficult than anticipated, and instead of hordes of robot taxis, the most advanced projects remain in trials. Meanwhile, some in the field believe the dominant form of AI, a kind of machine learning based on neural networks, may soon run out of steam absent a series of crucial breakthroughs.

In a paper titled "Why AI Is Harder Than We Think," published last week on the arXiv preprint server, Melanie Mitchell, a computer science professor at Portland State University currently at the Santa Fe Institute, argues that AI is stuck in an ebb-and-flow cycle largely because we don't yet truly understand the nature and complexity of human intelligence. Mitchell breaks this overarching point down into four common misconceptions around AI and discusses what they mean for the future of the field.

Impressive new achievements by AI are often accompanied by an assumption that these same achievements are getting us closer to reaching human-level machine intelligence. But not only, as Mitchell points out, are narrow and general intelligence as different as climbing a tree versus landing on the moon, but even narrow intelligence is still largely reliant on an abundance of task-specific data and human-facilitated training.

Take GPT-3, which some cited as having surpassed narrow intelligence: the algorithm was trained to write text, but learned to translate, write code, autocomplete images, and do math, among other tasks. But although GPT-3's capabilities turned out to be more extensive than its creators may have intended, all of its skills are still within the domain in which it was trained: that is, language (spoken, written, and programming).

Becoming adept at a non-language-related skill with no training would signal general intelligence, but this wasn't the case with GPT-3, nor has it been the case with any other recently developed AI: these systems remain narrow in nature and, while significant in themselves, shouldn't be conflated with steps toward the thorough understanding of the world required for general intelligence.

Is AI smarter than a four-year-old? In most senses, the answer is no, and that's because skills and tasks that we perceive as being easy are in fact much more complex than we give them credit for, as Moravec's paradox notes.

Four-year-olds are pretty good at figuring out cause and effect relationships based on their interactions with the world around them. If, for example, they touch a pot on the stove and burn a finger, they'll understand that the burn was caused by the pot being hot, not by it being round or silver. To humans this is basic common sense, but algorithms have a hard time making causal inferences, especially without a large dataset or in a different context than the one they were trained in.
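
To make the distinction concrete, here is a toy sketch (not from Mitchell's paper, with invented features) of how a purely pattern-matching model can latch onto a spurious correlation. In the training data, "silver" always co-occurs with "hot," so the model has no way to tell which feature actually causes the burn:

```python
# Toy illustration: correlation vs. causation.
# In training, every hot object also happens to be silver, so a purely
# statistical learner cannot tell heat (causal) apart from color (spurious).
from sklearn.linear_model import LogisticRegression

# Features: [is_hot, is_silver]; label: 1 if touching it burns you.
X_train = [
    [1, 1],  # hot, silver pot  -> burns
    [1, 1],  # hot, silver pan  -> burns
    [0, 0],  # cool, blue mug   -> safe
    [0, 0],  # cool, red bowl   -> safe
]
y_train = [1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# The learned weights put roughly equal importance on color and heat...
print(model.coef_)

# ...so for a cool *silver* spoon the model gives close to even odds of a burn,
# while a four-year-old who has grasped the causal story wouldn't hesitate.
print(model.predict_proba([[0, 1]]))
```

Gathering more varied data would break this particular correlation, but the broader point stands: nothing in the training set alone tells the model which regularities will still hold in a new context.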

The perceptions and choices that take place at a subconscious level in humans sit on a lifetime's worth of experience and learning, even at such an elementary level as "touching hot things will burn you." Because we reach a point where this sort of knowledge is reflexive, not even requiring conscious thought, we see it as easy, but it's quite the opposite. "AI is harder than we think," Mitchell writes, "because we are largely unconscious of the complexity of our own thought processes."

Humans have a tendency to anthropomorphize non-human things, from animals to inanimate objects to robots and computers. In so doing, we use the same words we'd use to discuss human activities or intelligence, except these words don't quite fit the context, and in fact can muddle our own understanding of AI. Mitchell uses the term "wishful mnemonics," coined by a computer scientist in the 1970s. Words like "reading," "understanding," and "thinking" are used to describe and evaluate AI, but these words don't give us an accurate depiction of how the AI is functioning or progressing.

Even "learning" is a misnomer, Mitchell says, because if a machine truly learned a new skill, it would be able to apply that skill in different settings; finding correlations in datasets and using the patterns identified to make predictions or meet other benchmarks is something, but it's not learning in the way that humans learn.

So why all the fuss over words, if they're all we have and they're getting the gist across? Well, Mitchell says, this inaccurate language can not only mislead the public and the media, but can also influence the way AI researchers think about their systems and carry out their work.

Mitchell's final point is that human intelligence is not contained solely in the brain, but requires a physical body.

This seems self-explanatory; we use our senses to absorb and process information, and we interact with and move through the world in our bodies. Yet the prevailing emphasis in AI research is on the brain: understanding it, replicating various aspects of its form or function, and making AI more like it.

If intelligence lived just in the brain, we'd be able to move closer to reaching human-level AI by, say, building a neural network with the same number of parameters as the brain has synaptic connections, thereby duplicating the brain's computing capacity.
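
For a sense of the scale that claim implies, here is a back-of-the-envelope comparison using commonly cited, order-of-magnitude figures (the numbers are my own illustration, not Mitchell's):

```python
# Rough scale comparison (order-of-magnitude estimates only).
brain_synapses = 1e14      # ~100 trillion synaptic connections, a common estimate
gpt3_parameters = 175e9    # GPT-3's published parameter count

ratio = brain_synapses / gpt3_parameters
print(f"~{ratio:.0f} synapses per GPT-3 parameter")  # roughly 570x
```

Even closing that gap would only equate a parameter count with a synapse count; as the next paragraph notes, that sort of parallel only goes so far.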

Drawing this sort of parallel may apply in cases where "intelligence" refers to operating by a set of rules to work towards a defined goal, such as winning a game of chess or modeling the way proteins fold, both of which computers can already do quite well. But other types of intelligence are far more shaped by and subject to emotion, bias, and individual experience.

Going back to the GPT-3 example: the algorithm produces subjective intelligence (its own writing) using a set of rules and parameters it created with a huge dataset of pre-existing subjective intelligence (writing by humans). GPT-3 is hailed as being creative, but its writing relies on associations it drew between words and phrases in human writing, which is replete with biases, emotion, pre-existing knowledge, common sense, and the writer's unique experience of the world, all experienced through the body.

Mitchell argues that the non-rational, subjective aspects of the way humans think and operate aren't a hindrance to our intelligence, but are in fact its bedrock and enabler. Leading artificial general intelligence expert Ben Goertzel similarly advocates for "whole-organism architecture," writing, "Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways."

These misconceptions leave little doubt as to what AI researchers and developers shouldn't do. What's less clear is how to move forward. We must start, Mitchell says, with a better understanding of intelligence, which is no small or straightforward task. One good place AI researchers can look, though, is to other scientific disciplines that study intelligence.

Why are we so intent on creating an artificial version of human intelligence, anyway? It has evolved over millions of years and is hugely complex and intricate, yet it is still rife with shortcomings of its own. Perhaps the answer is that we're not trying to build an artificial brain that's as good as ours; we're trying to build one that's better, and that will help us solve currently unsolvable problems.

Human evolution took place over the course of about six million years. Meanwhile, it's been 65 years since AI became a field of study, and it's writing human-like text, making fake faces, holding its own in debates, making medical diagnoses, and more. Though there's much left to learn, it seems AI is progressing pretty well in the grand scheme of things, and the next step in taking it further is deepening our understanding of our own minds.

Image Credit: Rene Böhmer on Unsplash
