Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.
Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.
But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discovery.
And unless scientists, researchers, and the organizations that support their work change course, Larson warns, they will be doomed to "resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests."
From a scientific standpoint, the myth of AI is the assumption that we will achieve artificial general intelligence (AGI) by making progress on narrow applications, such as classifying images, understanding voice commands, or playing games. But the technologies underlying these narrow AI systems do not address the broader challenges of general intelligence, such as holding basic conversations, accomplishing simple chores in a house, or other tasks that require common sense.
"As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking the low-hanging fruit," Larson writes.
The cultural consequence of the myth of AI is that it encourages ignoring the scientific mystery of intelligence while endlessly talking up ongoing progress in deep learning and other contemporary technologies. The myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.
"We are unlikely to get innovation if we choose to ignore a core mystery rather than face up to it," Larson writes. "A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods... Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress."
You step out of your home and notice that the street is wet. Your first thought is that it must have been raining. But it's sunny and the sidewalk is dry, so you immediately rule out rain. As you look to the side, you see a street-washing tanker parked down the road. You conclude that the street is wet because the tanker washed it.
This is an example of inference, the act of going from observations to conclusions, and it is the basic function of intelligent beings. We're constantly inferring things based on what we know and what we perceive. Most of it happens subconsciously, in the background of our mind, without focus or direct attention.
"Any system that infers must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence," Larson writes.
AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives.
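To make that concrete, here is a minimal, hypothetical sketch of deductive, rule-based inference. The facts, rules, and forward_chain helper below are invented for illustration and are not drawn from Larson's book or any particular symbolic system.

```python
# Illustrative sketch of symbolic, deductive inference: hand-written rules
# applied to known facts (all names here are made-up examples).

facts = {"street_is_wet", "tanker_washed_street"}

# Each rule: if every premise is already known, the conclusion can be derived.
rules = [
    ({"street_is_wet", "it_rained"}, "rain_explains_wet_street"),
    ({"street_is_wet", "tanker_washed_street"}, "washing_explains_wet_street"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The system only concludes what its hand-written rules anticipate; it cannot
# guess an explanation that no rule encodes.
```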
Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems.
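An inductive system, by contrast, learns the mapping from labeled examples rather than from hand-written rules. The toy sketch below uses a standard scikit-learn classifier; the features and labels are made up purely for illustration.

```python
# Illustrative sketch of inductive inference: a model generalizes a pattern
# from labeled examples (the data here is invented).
from sklearn.tree import DecisionTreeClassifier

# Features: [hours_of_rain, tanker_passed]; label: 1 if the street was wet.
X = [[0, 0], [2, 0], [0, 1], [3, 1], [0, 0], [1, 0]]
y = [0, 1, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0, 1]]))  # a pattern learned from data, not from rules
```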
A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. Abductive inference is the cognitive ability to come up with intuitions and hypotheses, to make guesses that are better than random stabs at the truth.
For example, there can be numerous reasons for the street to be wet (including some that we haven't directly experienced before), but abductive inference enables us to select the most promising hypotheses, quickly eliminate the wrong ones, look for new ones and reach a reliable conclusion. As Larson puts it in The Myth of Artificial Intelligence, "We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible."
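There is no agreed-upon algorithm for abduction, but the toy sketch below conveys the flavor of inference to the best explanation: candidate hypotheses are scored against the observations and the most plausible one is selected. The hypotheses, their expected observations, and the scoring rule are assumptions invented for this example, not a real model of common sense.

```python
# Illustrative sketch of abduction as "inference to the best explanation".
# All hypotheses and expected observations are made up for this example.

observations = {"street_is_wet", "sidewalk_is_dry", "sky_is_clear", "tanker_nearby"}

hypotheses = {
    "it_rained": {"street_is_wet", "sidewalk_is_wet", "sky_is_cloudy"},
    "tanker_washed_street": {"street_is_wet", "sidewalk_is_dry", "tanker_nearby"},
    "water_main_burst": {"street_is_wet", "water_gushing"},
}

def plausibility(expected, observed):
    """Crude score: observations explained minus expectations not seen."""
    return len(expected & observed) - len(expected - observed)

best = max(hypotheses, key=lambda h: plausibility(hypotheses[h], observations))
print(best)  # -> tanker_washed_street
```

Real abductive inference, of course, draws on an open-ended background of knowledge rather than a hand-enumerated list of hypotheses, which is precisely what makes it so hard to mechanize.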
Abductive inference is what many refer to as common sense. It is the conceptual framework within which we view facts or data and the glue that brings the other types of inference together. It enables us to focus at any moment on what's relevant among the ton of information that exists in our mind and the ton of data we're receiving through our senses.
The problem is that the AI community hasn't paid enough attention to abductive inference.
"Abduction entered the AI discussion with attempts at Abductive Logic Programming in the 1980s and 1990s, but those efforts were flawed and later abandoned. They were reformulations of logic programming, which is a variant of deduction," Larson told TechTalks.
Abduction got another chance in the 2010s with the rise of Bayesian networks, inference engines that try to compute causality. But like the earlier approaches, these newer approaches shared the flaw of not capturing true abduction, Larson said, adding that Bayesian and other graphical models are variants of induction. In The Myth of Artificial Intelligence, he refers to them as "abduction in name only."
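A small, hand-made illustration of why such models lean on induction: a Bayesian posterior over causes of the wet street can only be computed from priors and likelihoods supplied in advance (typically estimated from past data), over a hypothesis space that is fixed by hand. The numbers below are invented.

```python
# Illustrative Bayesian update over a fixed set of candidate causes.
# Priors and likelihoods are invented; assume exactly one cause occurred.

priors = {"rain": 0.30, "street_washing": 0.05, "burst_pipe": 0.01}
likelihoods = {"rain": 0.95, "street_washing": 0.99, "burst_pipe": 0.90}  # P(wet | cause)

evidence = sum(priors[c] * likelihoods[c] for c in priors)
posteriors = {c: priors[c] * likelihoods[c] / evidence for c in priors}
print(posteriors)
# Any explanation not already enumerated in the model gets probability zero.
```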
For the most part, the history of AI has been dominated by deduction and induction.
"When the early AI pioneers like [Allen] Newell, [Herbert] Simon, [John] McCarthy, and [Marvin] Minsky took up the question of artificial inference (the core of AI), they assumed that writing deductive-style rules would suffice to generate intelligent thought and action," Larson said. "That was never the case, really, as should have been earlier acknowledged in discussions about how we do science."
For decades, researchers tried to expand the powers of symbolic AI systems by providing them with manually written rules and facts. The premise was that if you endow an AI system with all the knowledge that humans know, it will be able to act as smartly as humans. But pure symbolic AI has failed for various reasons. Symbolic systems can't acquire and add new knowledge, which makes them rigid. Creating symbolic AI becomes an endless chase of adding new facts and rules only to find the system making new mistakes that it can't fix. And much of our knowledge is implicit and cannot be expressed in rules and facts and fed to symbolic systems.
"It's curious here that no one really explicitly stopped and said 'Wait. This is not going to work!'" Larson said. "That would have shifted research directly towards abduction or hypothesis generation or, say, context-sensitive inference."
In the past two decades, with the growing availability of data and compute resources, machine learning algorithms, especially deep neural networks, have become the focus of attention in the AI community. Deep learning technology has unlocked many applications that were previously beyond the limits of computers. And it has attracted interest and money from some of the wealthiest companies in the world.
"I think with the advent of the World Wide Web, the empirical or inductive (data-centric) approaches took over, and abduction, as with deduction, was largely forgotten," Larson said.
But machine learning systems also suffer from severe limits, including a lack of causal reasoning, poor handling of edge cases, and the need for vast amounts of data. And these limits are becoming more evident and problematic as researchers try to apply ML to sensitive fields such as healthcare and finance.
Some scientists, including reinforcement learning pioneer Richard Sutton, believe that we should stick to methods that can scale with the availability of data and computation, namely learning and search. The expectation is that as neural networks grow bigger and are trained on more data, they will eventually overcome their limits and lead to new breakthroughs.
Larson dismisses the scaling up of data-driven AI as fundamentally flawed as a model for intelligence. While both search and learning can provide useful applications, they are based on non-abductive inference, he reiterates.
"Search won't scale into commonsense or abductive inference without a revolution in thinking about inference, which hasn't happened yet. Similarly with machine learning, the data-driven nature of learning approaches means essentially that the inferences have to be in the data, so to speak, and that's demonstrably not true of many intelligent inferences that people routinely perform," Larson said. "We don't just look to the past, captured, say, in a large dataset, to figure out what to conclude or think or infer about the future."
Other scientists believe that hybrid AI, which brings together symbolic systems and neural networks, holds more promise for dealing with the shortcomings of deep learning. One example is IBM Watson, which became famous when it beat human champions at Jeopardy! More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI or deep learning alone performs poorly.
Larson believes that hybrid systems can fill in the gaps of machine-learning-only or rules-based-only approaches. As a researcher in the field of natural language processing, he is currently working on combining large pre-trained language models like GPT-3 with older work on the semantic web, in the form of knowledge graphs, to create better applications in search, question answering, and other tasks.
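As a rough, hypothetical sketch of that kind of hybrid (not Larson's actual system and not any specific product's API), facts retrieved from a small knowledge graph could be injected into a language model's prompt:

```python
# Hypothetical hybrid sketch: retrieve triples from a toy knowledge graph and
# place them in a language-model prompt. The graph, retrieval function, and
# call_language_model placeholder are all invented for illustration.

knowledge_graph = [
    ("Charles Sanders Peirce", "introduced", "abductive inference"),
    ("abductive inference", "is_a", "form of reasoning"),
]

def retrieve_facts(question, graph):
    """Keep triples whose subject or object appears in the question."""
    q = question.lower()
    return [t for t in graph if t[0].lower() in q or t[2].lower() in q]

def call_language_model(prompt):
    # Placeholder for a real call to a large pre-trained language model.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

question = "Who introduced abductive inference?"
facts = retrieve_facts(question, knowledge_graph)
prompt = "Known facts:\n" + "\n".join(" ".join(t) for t in facts) + f"\n\nQuestion: {question}"
print(call_language_model(prompt))
```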
"But deduction-induction combos don't get us to abduction, because the three types of inference are formally distinct, so they don't reduce to each other and can't be combined to get a third," he said.
In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as "the inference trap."
"Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well," he writes. "In open-ended scenarios requiring knowledge about the world like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail... The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps."
The AI community's narrow focus on data-driven approaches has centralized research and innovation in a few organizations that have vast stores of data and deep pockets. With deep learning becoming a useful way to turn data into profitable products, big tech companies are now locked in a tight race to hire AI talent, driving researchers away from academia by offering them lucrative salaries.
This shift has made it very difficult for non-profit labs and small companies to become involved in AI research.
"When you tie research and development in AI to the ownership and control of very large datasets, you get a barrier to entry for start-ups, who don't own the data," Larson said, adding that data-driven AI intrinsically creates winner-take-all scenarios in the commercial sector.
The monopolization of AI is in turn hampering scientific research. With big tech companies focusing on creating applications in which they can leverage their vast data resources to maintain the edge over their competitors, there's little incentive to explore alternative approaches to AI. Work in the field starts to skew toward narrow and profitable applications at the expense of efforts that can lead to new inventions.
"No one at present knows how AI would look in the absence of such gargantuan centralized datasets, so there's nothing really on offer for entrepreneurs looking to compete by designing different and more powerful AI," Larson said.
In his book, Larson warns about the current culture of AI, which is "squeezing profits out of low-hanging fruit, while continuing to spin AI mythology." The illusion of progress on artificial general intelligence can lead to another AI winter, he writes.
But while an AI winter might dampen interest in deep learning and data-driven AI, it can open the way for a new generation of thinkers to explore new pathways. Larson hopes scientists start looking beyond existing methods.
In The Myth of Artificial Intelligence, Larson provides an inference framework that sheds light on the challenges the field faces today and helps readers see through the overblown claims about progress toward AGI or the singularity.
"My hope is that non-specialists have some tools to combat this kind of inevitability thinking, which isn't scientific, and that my colleagues and other AI scientists can view it as a wake-up call to get to work on the very real problems the field faces," Larson said.
View original post here:
Abductive inference: The blind spot of artificial intelligence - TechTalks