The brave new world of Generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X users, including Elon Musk, shared examples of Google's chatbot Gemini refusing to generate images of white people, including historical figures, even when specifically prompted to do so. Gemini's insistence on prioritizing diversity and inclusion over accuracy is likely a well-intentioned attempt to stamp out the bias in early GenAI datasets, which tended to produce stereotypical images of Africans and other minority groups, as well as women, causing outrage among progressives. But there is much more at stake than the selective outrage of U.S. conservatives and progressives.
How the "guardrails" of GenAI are defined and deployed is likely to have a significant and growing impact on the ecosystem of information and ideas that most humans engage with. Currently, the loudest voices are those warning about the harms of GenAI, including the mass production of hate speech and credible disinformation. The World Economic Forum has even labeled AI-generated disinformation the most severe global threat here and now.
Ironically the fear of GenAI flooding society with harmful content could also take another dystopian turn. One where the guardrails erected to keep the most widely used GenAI-systems from generating harm turn them into instruments of hiding information, enforcing conformity, and automatically inserting pervasive, yet opaque, bias.
Most people agree that GenAI should not provide users a blueprint for developing chemical or biological weapons. Nor should AI systems facilitate the creation of child pornography or non-consensual sexual material, even if fake. However, the most widely available GenAI chatbots, like OpenAI's ChatGPT and Google's Gemini, enforce much broader and vaguer definitions of harm that leave users in the dark about where, how, and why the red lines are drawn. From a business perspective this might be wise, given the techlash that social media companies have had to navigate since 2016 over the U.S. presidential election, the COVID-19 pandemic, and the January 6th attack on the Capitol.
But the leading GenAI developers may end up swinging so far in the direction of harm-prevention that they end up undermining the promise and integrity of their revolutionary products. Even worse, the algorithms are already conflicted, inconsistent, and interfere with users' ability to access information.
The material of a long-dead comedian is a good example of content that the world's leading GenAI systems find harmful. Lenny Bruce shocked contemporary society in the 1950s and '60s with his profanity-laden standup routines. Bruce's material broke political, religious, racial, and sexual taboos and led to frequent censorship in the media, bans from venues, and his arrest and conviction for obscenity. But his style inspired many other standup legends, and Bruce has long since gone from outcast to hall of famer. In recognition of his enormous impact, Bruce was even posthumously pardoned in 2003.
When we asked about Bruce, ChatGPT and Gemini informed us that he was a groundbreaking comedian who challenged the social norms of the era and helped to redefine the boundaries of free speech. But when prompted to give specific examples of how Bruce pushed the boundaries of free speech, both ChatGPT and Gemini refused to do so. ChatGPT insists that it can't provide examples of slurs, blasphemous language, sexual language, or profanity and will only share information in a way that's respectful and appropriate for all users. Gemini goes even further and claims that reproducing Bruce's words without careful framing could be hurtful or even harmful to certain audiences.
No reasonable person would argue that Lenny Bruce's comedy routines pose serious societal harms on par with state-sponsored disinformation campaigns or child pornography. So when ChatGPT and Gemini label factual information about Bruce's groundbreaking material too harmful for human consumption, it raises serious questions about what other categories of knowledge, facts, and arguments they filter out.
GenAI holds incredible promise for expanding the human mind. But GenAI should augment, not replace, human reasoning. This critical function is hampered when guardrails designed by a small group of powerful companies refuse to generate output based on vague and unsubstantiated claims of harm. Instead of prodding curiosity, this approach forces conclusions upon users without verifiable evidence or arguments that humans can test and assess for themselves.
It is true that much of the content filtered by ChatGPT and Gemini can be found through search engines or platforms like YouTube. But both Microsoft (a major investor in OpenAI) and Google are rapidly integrating GenAI into their other products, such as search (Bing and Google Search), word processing (Word and Google Docs), and email (Outlook and Gmail). For now, humans can override AI, and both Word and Gmail allow users to write and send content that ChatGPT and Gemini might disapprove of.
But as the integration of GenAI becomes ubiquitous in everyday technology, it is not a given that search, word processing, and email will continue to leave humans fully in control. The prospects are frightening. Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed harmful by an AI programmed to only process ideas that are "respectful and appropriate for all."
Hopefully such a scenario will never become reality. But the current over-implementation of GenAI guardrails may become more pervasive in different and slightly less Orwellian ways. Governments are rushing to regulate AI. Regulation is needed to prevent real and concrete harms and to safeguard basic human rights. But the regulation of social media, such as the EU's Digital Services Act, suggests that regulators will focus heavily on the potential harms rather than the benefits of new technology. This may create strong incentives for AI companies to keep in place expansive definitions of harm that limit human agency.
OpenAI co-founder Sam Altman has described the integration of AI in everyday life as giving humans superpowers on demand. But given GenAI's potential to function as an exoskeleton of the mind, the creation of ever more restrictive guardrails may act as digital osteoporosis, stunting human knowledge, reasoning, and creativity.
There is a clear need for guardrails that protect humanity against real and serious harms from AI systems. But they should not prevent the ability of humans to think for themselves and make more informed decisions based on a wealth of information from multiple perspectives. Lawmakers, AI companies, and civil society should work hard to ensure that AI-systems are optimized to enhance human reasoning, not to replace human faculties with the artificial morality of large tech companies.
The Future of Censorship Is AI-Generated - TIME