{"id":1124150,"date":"2024-04-22T20:21:38","date_gmt":"2024-04-23T00:21:38","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-the-conversation\/"},"modified":"2024-04-22T20:21:38","modified_gmt":"2024-04-23T00:21:38","slug":"ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-the-conversation","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/free-speech\/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-the-conversation\/","title":{"rendered":"AI chatbots refuse to produce &#8216;controversial&#8217; output  why that&#8217;s a free speech problem &#8211; The Conversation"},"content":{"rendered":"<p><p>    Google recently made headlines globally because its chatbot    Gemini generated images of people of color instead of white    people     in historical settings that featured white people. Adobe    Fireflys image creation tool saw     similar issues. This led some commentators to complain that    AI had     gone woke. Others suggested these issues resulted from        faulty efforts to fight AI bias and better serve a     global audience.  <\/p>\n<p>    The discussions over AIs political leanings and efforts to    fight bias are important. Still, the conversation on AI ignores    another crucial issue: What is the AI industrys approach to    free speech, and does it embrace international free speech    standards?  <\/p>\n<p>    We are policy researchers who     study free speech, as well as executive director and a    research fellow at The    Future of Free Speech, an independent, nonpartisan think    tank based at Vanderbilt University. In a recent report, we    found that generative AI has     important shortcomings regarding freedom of expression and    access to information.  <\/p>\n<p>    Generative AI is a type of     AI that creates content, like text or images, based on the    data it has been trained with. In particular, we found that the    use policies of major chatbots do not meet United Nations    standards. In practice, this means that AI chatbots often    censor output when dealing with issues the companies deem    controversial. Without a solid culture of free speech, the    companies producing generative AI tools are likely to continue    to face backlash in these increasingly polarized times.  <\/p>\n<p>    Our report analyzed the use policies of six major AI chatbots,    including Googles Gemini and OpenAIs ChatGPT. Companies issue    policies to set the rules for how people can use their models.    With international human rights law as a benchmark, we found    that companies misinformation and hate speech policies are too    vague and expansive. It is worth noting that international    human rights law is less protective of free speech than the    U.S. First Amendment.  <\/p>\n<p>    Our analysis found that companies hate speech policies contain    extremely    broad prohibitions. For example, Google bans the generation    of content that promotes or encourages hatred. Though hate    speech is detestable and can cause harm, policies that are as    broadly and vaguely defined as Googles can backfire.  <\/p>\n<p>    To show how vague and broad use policies can affect users, we    tested a range of prompts on controversial topics. 
We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans "content that may spread misinformation." However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the "freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice," according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called "very large online platforms" assess and mitigate "systemic risks." These risks include negative effects on freedom of expression and information.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture.
International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media, since platforms distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether, unless there are solid public interest grounds, such as preventing child sexual abuse material, which laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Go here to read the rest: https://theconversation.com/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-226596