{"id":1122325,"date":"2024-02-20T18:55:38","date_gmt":"2024-02-20T23:55:38","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/how-ai-generated-deepfakes-threaten-the-2024-election-journalists-resource\/"},"modified":"2024-02-20T18:55:38","modified_gmt":"2024-02-20T23:55:38","slug":"how-ai-generated-deepfakes-threaten-the-2024-election-journalists-resource","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/how-ai-generated-deepfakes-threaten-the-2024-election-journalists-resource\/","title":{"rendered":"How AI-generated deepfakes threaten the 2024 election &#8211; Journalist&#8217;s Resource"},"content":{"rendered":"<p><p>                Facebook                Twitter                LinkedIn                Reddit                Email      <\/p>\n<p>    Last month,     arobocall impersonating U.S. President Joe Biden went    out to New Hampshire voters, advising them not to vote in the    states presidential primary election.The voice,    generated by artificial intelligence, sounded quite real.  <\/p>\n<p>    Save your vote for the November election, the voice stated,    falsely asserting that a vote in the primary would prevent    voters from being able to participate in the November general    election.  <\/p>\n<p>    The robocall incident reflects a growing concern that    generative AI will make it cheaper and easier to spread    misinformation and run disinformation campaigns. The Federal    Communications Commission last week     issued a ruling to make AI-generated voices in robocalls    illegal.  <\/p>\n<p>        Deepfakes already have affected other elections around the    globe. In    recent elections in Slovakia, for example, AI-generated    audio recordings circulated on Facebook, impersonating a    liberal candidate discussing plans to raise alcohol prices and    rig the election.     
During the February 2023 Nigerian elections, an AI-manipulated audio clip falsely implicated a presidential candidate in plans to manipulate ballots. With elections this year in over 50 countries involving half the globe&#8217;s population, there are fears deepfakes could seriously undermine their integrity.<\/p>\n<p>Media outlets including the BBC and The New York Times sounded the alarm on deepfakes as far back as 2018. However, in past elections, including the 2022 U.S. midterms, the technology did not produce believable fakes and was not accessible enough, in terms of both affordability and ease of use, to be weaponized for political disinformation. Instead, those looking to manipulate media narratives relied on simpler and cheaper ways to spread disinformation, including mislabeling or misrepresenting authentic videos, text-based disinformation campaigns, or just plain old lying on air.<\/p>\n<p>As Henry Ajder, a researcher on AI and synthetic media, writes in a 2022 Atlantic piece, &#8220;It&#8217;s far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors, than to release an expensive, hard-to-create deepfake, which actually isn&#8217;t going to be as good a quality as you had hoped.&#8221;<\/p>\n<p>As deepfakes continually improve in sophistication and accessibility, they will increasingly contribute to the deluge of informational detritus. They&#8217;re already convincing. Last month, The New York Times published an online test inviting readers to look at 10 images and try to identify which were real and which were generated by AI, demonstrating firsthand the difficulty of differentiating between real and AI-generated images.
&#8220;This was supported by multiple academic studies, which found that faces of white people created by AI systems were perceived as more realistic than genuine photographs,&#8221; New York Times reporter Stuart A. Thompson explained.<\/p>\n<p>Listening to the audio clip of the fake robocall that targeted New Hampshire voters, it is difficult to distinguish it from Biden&#8217;s real voice.<\/p>\n<p>The jury is still out on how generative AI will impact this year&#8217;s elections. In a December blog post on GatesNotes, Microsoft co-founder Bill Gates estimates we are still 18 to 24 months away from significant levels of AI use by the general population in high-income countries. In a December post on her website Anchor Change, Katie Harbath, former head of elections policy at Facebook, predicts that although AI will be used in elections, it &#8220;will not be at the scale yet that everyone imagines.&#8221;<\/p>\n<p>It may, therefore, not be deepfakes themselves but the narrative around them that undermines election integrity. AI and deepfakes will be firmly in the public consciousness as we go to the polls this year, with their increased prevalence supercharged by outsized media coverage on the topic. In her blog post, Harbath adds that it&#8217;s the narrative of what havoc AI could have that will have the bigger impact.<\/p>\n<p>Those engaging in media manipulation can exploit the public perception that deepfakes are everywhere to undermine trust in information. These people spread false claims and discredit true ones by exploiting the &#8220;liar&#8217;s dividend.&#8221;
<\/p>\n<p>The &#8220;liar&#8217;s dividend,&#8221; a term coined by legal scholars Robert Chesney and Danielle Keats Citron in a 2018 California Law Review article, suggests that as the public becomes more aware that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic audio and video as deepfakes.<\/p>\n<p>Fundamentally, it captures the spirit of political strategist Steve Bannon&#8217;s strategy to &#8220;flood the zone with shit,&#8221; as he stated in a 2018 meeting with journalist Michael Lewis.<\/p>\n<p>As journalist Sean Illing comments in a 2020 Vox article, this tactic is part of a broader strategy to create widespread cynicism about the truth and the institutions charged with unearthing it, and, in doing so, erode the very foundation of liberal democracy.<\/p>\n<p>There are already notable examples of the liar&#8217;s dividend in political contexts. In recent elections in Turkey, a video surfaced showing compromising images of a candidate. In response, the candidate claimed the video was a deepfake when it was, in fact, real.<\/p>\n<p>In April 2023, an Indian politician claimed that audio recordings of him criticizing members of his party were AI-generated, but a forensic analysis suggested at least one of the recordings was authentic.<\/p>\n<p>Kaylyn Jackson Schiff, Daniel Schiff, and Natalia Bueno, researchers who study the impacts of AI on politics, carry out experiments to understand the impacts of the liar&#8217;s dividend on audiences. In an article forthcoming in the American Political Science Review, they note that in refuting authentic media as fake, bad actors will either blame their political opposition or an uncertain information environment.
<\/p>\n<p>Their findings suggest that the liar&#8217;s dividend becomes more powerful as people become more familiar with deepfakes. In turn, media consumers will be primed to dismiss legitimate campaign messaging. It is therefore imperative for the public to be confident that it can differentiate between real and manipulated media.<\/p>\n<p>Journalists have a crucial role to play in responsible reporting on AI. Widespread news coverage of the Biden robocalls and recent Taylor Swift deepfakes demonstrates that distorted media can be debunked, thanks to the resources of governments, technology professionals, journalists, and, in the case of Swift, an army of superfans.<\/p>\n<p>This reporting should be balanced with a healthy dose of skepticism about the impact of AI on this year&#8217;s elections. Self-interested technology vendors will be prone to overstate that impact. AI may also be a stalking horse for broader dis- and misinformation campaigns exploiting worsening integrity issues on social media platforms.<\/p>\n<p>Lawmakers across states have introduced legislation to combat election-related, AI-generated dis- and misinformation. Bills in Alaska, Florida, Colorado, Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana, Idaho and Wyoming would require disclosure of the use of AI in election-related content. Most of the bills would require that information to be disclosed within specific time frames before elections. A bill in Nebraska would ban all deepfakes within 60 days of an election.<\/p>\n<p>However, the introduction of these bills does not necessarily mean they will become law. Furthermore, their enforceability could be challenged on free-speech grounds by positioning AI-generated content as satire. Moreover, penalties would only be imposed after the fact, and could be evaded by foreign entities.
<\/p>\n<p>Social media companies hold the most influence in limiting the spread of false content, since they are able to detect and remove it from their platforms. However, the policies of major platforms, including Facebook, YouTube and TikTok, state that they will only remove manipulated content in cases of egregious harm or when it aims to mislead people about voting processes. This is in line with a general relaxation of moderation standards, including the repeal of 17 policies related to hate speech, harassment and misinformation at those three companies in the last year.<\/p>\n<p>Their primary response to AI-generated content will be to label it as AI-generated. For Facebook, YouTube and TikTok, these labels will apply to all AI-generated content, whereas for X (formerly Twitter), they will apply only to content identified as misleading media, as noted in recent policy updates.<\/p>\n<p>This puts the onus on users to recognize these labels, which have not yet rolled out and will take time to adjust to. Furthermore, AI-generated content may evade the detection of already overstretched moderation teams and go unremoved and unlabeled, creating a false sense of security for users. Moreover, with the exception of X&#8217;s policy, these labels do not specify whether a piece of content is harmful, only that it is AI-generated.<\/p>\n<p>A deepfake made purely for comedic purposes would be labeled, but a manually altered video spreading disinformation might not be. Recent recommendations from the oversight board of Meta, the company formerly known as Facebook, advise that instead of focusing on how a distorted image, video or audio clip was created, the company&#8217;s policy should focus on the harm manipulated posts can cause.
<\/p>\n<p>The continued emergence of deepfakes is worrying, but they represent a new weapon in the arsenal of disinformation tactics deployed by bad actors rather than a new frontier. The strategies to mitigate the damage they cause are the same as before: developing and enforcing responsible platform design and moderation, underpinned by legal mandates where feasible, coupled with journalists and civil society holding the platforms accountable. These strategies are now more important than ever.<\/p>\n<p>Go here to see the original:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/journalistsresource.org\/home\/how-ai-deepfakes-threaten-the-2024-elections\" title=\"How AI-generated deepfakes threaten the 2024 election - Journalist's Resource\">How AI-generated deepfakes threaten the 2024 election - Journalist's Resource<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Last month, a robocall impersonating U.S. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/how-ai-generated-deepfakes-threaten-the-2024-election-journalists-resource\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-1122325","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122325"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1122325"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122325\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1122325"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1122325"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1122325"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}