Based on the number of new bills in Congress and across the states, the working groups and reports commissioned by city, state, and federal governments, and the drumbeat of activity from the White House, this appears to be an agenda-setting moment for artificial intelligence (AI) policy in the United States. But the language describing AI research and applications continues to generate confusion and seed the ground for potentially harmful missteps.
Stakeholders agree that AI warrants thoughtful legislation but struggle to reach consensus on the problems and their corresponding solutions. Part of this confusion is embodied in the words we use. It is imperative not only that we know what we are talking about regarding AI, but that we agree on how we talk about it.
Last fall, the US Senate convened a series of closed-door meetings to inform US AI strategy. It brought together academics and civil society leaders, but was disproportionately headlined by prominent industry voices who have an interest in defining the terms of the discussion. From the expanding functionality of ever-larger AI models to the seemingly far-off threat to human existence, lawmakers and the public are immersed in AI branding and storytelling. Loaded terminology can mislead policymakers and stakeholders, ultimately causing friction between competing aspects of an effective AI agenda. While speculative and imprecise language has always permeated AI, we must emphasize nomenclature leaning more towards objectivity than sensationalism. Otherwise, US AI strategy could be misplaced or unbalanced.
Intelligence represents the promise of AI, yet it's a construct that's difficult to measure. The very notion is multifaceted and has a fraught history. The intelligence quotient (IQ), a supposed numerical representation of cognitive ability, remains misused and misinterpreted to this day. Related research has fueled contentious debates over purported fundamental differences between the IQ scores of Black, White, and Hispanic people in the US. There is a long record of dubious attempts to quantify intelligence that have caused real harm, and a real danger that language about AI will do the same.
Modern discussions in the public sphere give full credence to AI imbued with human-like attributes. Yet this idea is a shaky foundation for debate about the technology. Evaluating the power of current AI models relies on how they're tested, but the alignment between test results and our understanding of what the models can do is often unclear. AI taxonomy today is predominantly defined by commercial institutions. Artificial general intelligence (AGI), for example, is a phrase intended to mark the point at which AI matches or surpasses humans on a variety of tasks. It suggests a future in which computers serve as equally competent partners. One by one, industry leaders have made AGI a business milestone. But it's unclear how we would know once we've crossed that threshold, and so the mystique seeps into the ethos.
Other examples illustrate this sentiment as well. The idea of a model's emergent capabilities nods to AI's inherent capacity to develop, and even seem to learn, in unexpected ways. Similar developments have convinced some users of a large language model's (LLM's) sentience.
These concepts remain disputed, however: some scientists contend that even though bigger LLMs typically yield better performance, the appearance of these phenomena ultimately depends on a practitioner's choice of test metrics.
The language and research of the private sector disproportionately shape how society views AI. Perhaps that's their prerogative; entrepreneurs and industry experts aren't wrong to characterize their vision in their own way, and aspirational vocabulary helps them aim higher and broader. But it may not always be in the public interest.
These terms aren't technical jargon buried deep in a peer-reviewed article. They are tossed around every day in print, on television, and in congressional hearings. There's an ever-present tinge of not-quite-proven positive valence. On one hand, AI is propped up with bold attributes full of potential; on the other, it is often dismissed and reduced to a mechanical implement when things go wrong.
Societal impact is inevitable when unproven themes are parroted by policymakers who may not always have time to do their homework.
Politicians are not immune to the hype. Examples abound in the speeches of world leaders like UK Prime Minister Rishi Sunak and in the statements of President Joe Biden. Congressional hearings and global meetings of the United Nations have adopted language from the loudest, most visible voices, providing a wholesale dressing for the entire sector.
What's missing here is an acknowledgement of how much language sets the conditions for our reality, and how these conversations play out in front of the media and the public. We lack common, empirical, and objective terminology. Modern AI descriptors mean one thing to researchers but may convey something entirely different to the public.
We must make intentional efforts to define and interrogate the words we use to describe AI products and their potential functionality. Claims must also be justified by exhaustive and appropriate test metrics. Ultimately, hypothetical metaphors can distort the understanding of the public and lawmakers, which in turn can shape the suitability of laws or inspire emerging AI institutions with ill-defined missions.
We can't press reset, but we can provide more thoughtful framing.
The effects of AI language are broad and indirect but, in total, can be enormously impactful. Steady, small-scale steps may gradually shape our understanding of AI, modifying behavior through small, successive approximations that bring us ever closer to a desired belief.
By the time we ask how we got here, the ground may have shifted beneath our feet.