{"id":1121645,"date":"2024-01-30T22:26:11","date_gmt":"2024-01-31T03:26:11","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/a-mysterious-phone-call-cloned-bidens-voice-can-the-next-one-be-stopped-politico\/"},"modified":"2024-01-30T22:26:11","modified_gmt":"2024-01-31T03:26:11","slug":"a-mysterious-phone-call-cloned-bidens-voice-can-the-next-one-be-stopped-politico","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/cloning\/a-mysterious-phone-call-cloned-bidens-voice-can-the-next-one-be-stopped-politico\/","title":{"rendered":"A mysterious phone call cloned Biden&#8217;s voice. Can the next one be stopped? &#8211; POLITICO"},"content":{"rendered":"<p><p>    This story originally appeared in Digital Future Daily,    POLITICOs newsletter about how technology is redefining global    power. Subscribe    here.  <\/p>\n<p>    The impact of deepfakes on society  and elections particularly     has been an anxiety for    years. Easy-to-use generative AI tools have recently moved    it from an issue in niche areas to a top security risk across    the board. Before the Biden robocall, AI deepfakes were used in    attempts to disrupt elections in Slovakia and    Taiwan.  <\/p>\n<p>    Congress has taken note. Sen. Mike Rounds (R-S.D.) told    POLITICO that as the Senate hashes out its priorities on    AI legislation, there is growing recognition that tackling the    use of AI in campaign ads and communications should top the    list. And after explicit deepfakes of Taylor Swift spread on X    last week, lawmakers renewed calls for urgent    legislation on the issue.  <\/p>\n<p>    A reminder: no federal laws currently prohibit the sharing or    creation of deepfakes, though several bills have been proposed    in Congress and some states have passed laws to crack down on    manipulated media. 
The Federal Election Commission, too, has been considering rule changes to regulate the use of AI deepfakes in campaign materials.<\/p>\n<p>\"Deepfakes is the first test that generative AI has thrown at us because it fundamentally eliminates all trust,\" Vijay Balasubramaniyan, CEO of the phone fraud detection company Pindrop, told Steven Overly on a POLITICO Tech podcast episode that delved into the Biden robocall incident. \"If we can't get together and figure out how to solve that problem, yeah, the killer robots will definitely get us.\"<\/p>\n<p>No surprise that's easier said than done. One especially tricky part will be figuring out how to tackle the full range of manipulated media, from older techniques like splicing in fake audio to the new generative AI-fueled advancements, and all the hybrids in between. The robocall, for one, was not a very advanced audio deepfake, according to Matthew Wright, who chairs Rochester Institute of Technology's cybersecurity department.<\/p>\n<p>\"There are tools available now that can do a better job, and consequently be more dangerous,\" he told DFD.<\/p>\n<p>Looking at the proposed federal bills and enacted state laws, it turns out there's not a whole lot they collectively agree on, starting even with what should be regulated.<\/p>\n<p>California's and Washington's laws target false depictions only of political candidates, while Texas and Minnesota go further to include those created with the intention of harming a political candidate or influencing election outcomes.<\/p>\n<p>Consensus on what constitutes a deepfake is also lacking. Some bills distinctly cover images and video, while others extend to audio.<\/p>\n<p>\"This episode does highlight how important it is to have audio be included in these efforts,\" said Mekela Panditharatne, counsel for the Brennan Center's Democracy Program. 
\"It could be kind of separated and done piecemeal. But I do think it makes sense to consider those different forms of gen-AI production together.\"<\/p>\n<p>Piecemeal seems to be the way regulation on deepfakes is moving. Wright drew parallels with the landscape for privacy legislation, where a patchwork of laws offers varying levels of protection.<\/p>\n<p>A key question is who should be held accountable: phone service providers, platforms, developers or distributors of the deepfakes? How you answer that ends up defining the focus of proposed solutions.<\/p>\n<p>At the federal level, bills have assigned responsibility to two main groups, said Panditharatne. The first includes the actors that fall under campaign finance disclosure requirements: campaigns, super PACs and donors. Often, the resulting bills address the timing of deepfakes, like one act that bans false endorsements and knowingly misleading voting information 60 days before a federal election, or transparency, as in the case of Rep. Yvette Clarke's (D-N.Y.) bill, which requires that political ads reveal their use of AI-generated material through mandatory labeling, watermarking or audio disclosures.<\/p>\n<p>The second category targets deepfake disseminators, so long as they meet certain knowledge or intent requirements in some cases. Rep. Joe Morelle's (D-N.Y.) Preventing Deepfakes of Intimate Images Act would make it illegal to share deepfake pornography without consent.<\/p>\n<p>There is relatively little attention, at both the federal and state level, to holding other actors to account for deepfakes, Panditharatne added, giving social media companies and AI developers as examples.
<\/p>\n<p>As with past content moderation issues, social media giants enjoy some protection from legal liability under federal law (thanks to the famous Section 230 of the Communications Decency Act), which complicates such efforts. The bipartisan Senate NO FAKES Act is one attempt; it proposes holding liable anyone who makes or publicly shares an unauthorized digital replica, including companies, and allowing for penalties that start at $5,000 per violation.<\/p>\n<p>Still, it's unclear to Wright whether any regulations under consideration, or industry solutions in development, could have prevented the Biden robocall. Wright said he has built a deepfake detection tool of his own, but also offers one solution for which the technology does not currently exist on phones: \"Every microphone is going to have to have even live audio being constantly re-certified. That might have to be what's required.\"<\/p>\n<p>The design of the scheme exploited an area on which detection focuses less: a direct line with no real-time feedback from social media and limited playback capabilities.<\/p>\n<p>Enforcing the regulations being floated will require some sort of detection mechanism (many have been invented). But for now, a bad actor with just a voter registration list, a phone and a 30-second clip of a political figure can fly under the radar. The FTC has sponsored a challenge with a $25,000 top prize for the most effective approach to safeguard against the misuse of AI-enabled voice cloning, covering everything from imposter fraud to using someone's voice without consent in music creation. Its suggestions include real-time detection and monitoring to alert users to voice cloning or block calls.
<\/p>\n<p>Excerpt from: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.politico.com\/news\/2024\/01\/29\/biden-robocall-ai-trust-deficit-00138449\" title=\"A mysterious phone call cloned Biden's voice. Can the next one be stopped? - POLITICO\">A mysterious phone call cloned Biden's voice. Can the next one be stopped? - POLITICO<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This story originally appeared in Digital Future Daily, POLITICO's newsletter about how technology is redefining global power. Subscribe here. The impact of deepfakes on society, and on elections in particular, has been an anxiety for years <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/cloning\/a-mysterious-phone-call-cloned-bidens-voice-can-the-next-one-be-stopped-politico\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187749],"tags":[],"class_list":["post-1121645","post","type-post","status-publish","format-standard","hentry","category-cloning"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1121645"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1121645"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-pos
thumanism\/wp-json\/wp\/v2\/posts\/1121645\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1121645"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1121645"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1121645"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}