{"id":1115462,"date":"2023-06-10T20:22:46","date_gmt":"2023-06-11T00:22:46","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/the-problem-with-ai-licensing-an-fda-for-algorithms-the-federalist-society\/"},"modified":"2023-06-10T20:22:46","modified_gmt":"2023-06-11T00:22:46","slug":"the-problem-with-ai-licensing-an-fda-for-algorithms-the-federalist-society","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/federalist\/the-problem-with-ai-licensing-an-fda-for-algorithms-the-federalist-society\/","title":{"rendered":"The Problem with AI Licensing &amp; an FDA for Algorithms &#8211; The Federalist Society"},"content":{"rendered":"<p> Last year, we released a study for the Federalist Society predicting \"The Coming Onslaught of Algorithmic Fairness Regulations.\" That onslaught has now arrived. Interest in artificial intelligence (AI) and its regulation has exploded at all levels of government, and now some policymakers are floating the idea of licensing powerful AI systems and perhaps creating a new \"FDA for algorithms,\" complete with a pre-market approval regime for new AI applications. Other proposals are on the table, including transparency mandates requiring government-approved AI impact statements or audits, \"nutrition labels\" for algorithmic applications, expanded liability for AI developers, and perhaps even a new global regulatory body to oversee AI development. <\/p>\n<p> It's a dangerous regulatory recipe for technological stagnation that threatens to derail America's ability to be a leader in the Computational Revolution and build on the success the nation has enjoyed in the digital economy over the past quarter century. 
<\/p>\n<p> The Coming Avalanche of AI Regulation <\/p>\n<p> The Biden Administration set a dour tone for AI policy with the release last October of its 73-page Blueprint for an AI Bill of Rights. Although touted as a voluntary framework, this Bill of Rights is more like a bill of regulations. The document mostly focused on worst-case scenarios that might flow from the expanded development of AI, machine learning, and robotics. On May 23, the White House announced a new Request for Information on national priorities for mitigating AI risks. <\/p>\n<p> The Department of Commerce also recently launched a proceeding on AI accountability policy, teasing the idea of algorithmic impact assessments and AI audits as a new governance solution. Meanwhile, in a series of recent blog posts, the Federal Trade Commission has been hinting that it might take some sort of action on AI issues, and the Equal Employment Opportunity Commission last week announced new guidance on AI and employment issues. At the state and local level, over 80 bills are pending or have been enacted to regulate or study AI issues in some fashion. <\/p>\n<p> In Congress, Senate Majority Leader Chuck Schumer (D-N.Y.) is readying a new law requiring \"responsible AI,\" which is likely to include some sort of AI transparency or explainability mandate. In the last session of Congress, the Algorithmic Accountability Act of 2022 was proposed, which would have required that AI developers perform impact assessments and file them with a new Bureau of Technology inside the FTC. <\/p>\n<p> On May 16, the U.S. Senate Judiciary Committee held a hearing on \"Oversight of A.I.: Rules for Artificial Intelligence.\" 
Senators and the witnesses expressed a variety of fears about how AI could lead to disinformation, discrimination, job loss, safety issues, intellectual property problems, and so-called existential risks. <\/p>\n<p> The hearing was memorable for how chummy OpenAI CEO Sam Altman was with members of the committee. Many members openly gushed about how much they appreciated the willingness of Altman and other witnesses to preemptively call for AI regulation. In fact, Sen. Dick Durbin used the term \"historic\" to describe the way tech firms were coming in and asking for regulation. Durbin said AI firms were telling him and other members, \"Stop me before I innovate again!\" which gave him great joy, and he said that the only thing that matters now is how we are going to achieve this. <\/p>\n<p> Many regulatory ideas were floated by Senators and embraced at least in part by the witnesses, including a formal licensing regime for powerful AI systems and a new federal bureaucracy to enforce it. <\/p>\n<p> The Problem with a New AI Regulator <\/p>\n<p> Is another regulatory agency the answer? It's not like America lacks capacity to address artificial intelligence developments. The federal government has 2.1 million civilian workers, 15 cabinet agencies, 50 independent federal commissions, and over 430 federal departments altogether. Many of these bodies are already contemplating how AI touches their field. Regulatory agencies like the National Highway Traffic Safety Administration, the Food and Drug Administration, and the Consumer Product Safety Commission also have broad oversight and recall authority, allowing them to remove defective or unsafe products from the market. Consumer protection agencies like the Federal Trade Commission and comparable state offices will also police markets for unfair and deceptive algorithmic practices. 
<\/p>\n<p> But now some policymakers and advocates want to add yet another federal bureaucracy. The idea of a new digital technology regulator has been proposed before. In fact, the idea was something of a fad in 2019 and 2020, at the peak of political outrage over social media. One of us wrote a report chapter analyzing and addressing the most prominent of the digital tech regulation proposals. That same analysis applies to more recent calls for an AI regulator, especially since at least one of the recent legislative proposals is practically identical to earlier proposals. <\/p>\n<p> Creating a new regulatory agency for AI would be a dramatic change in the U.S. approach to technology regulation. The U.S. has never had a regulator for general purpose technologies such as software, computers, or consumer electronics. Instead, governance over these technologies has been through a mix of common law, consumer protection standards, application-specific regulation (such as health care devices and transportation), and market competition. <\/p>\n<p> There are good reasons why we haven't established a general purpose technology regulator, and those reasons extend to an AI regulator. Any proposal for a new regulatory agency for AI faces two substantial challenges: identifying the area of expertise that would justify a separate agency, and avoiding regulatory capture. <\/p>\n<p> What Expertise? The generally accepted reason for creating a new agency is division of labor by expertise; in a word, specialization. To justify a new agency, then, one must identify an unsatisfied need for unique expertise. An agency has no comparative advantage over Congress if the knowledge to solve the problem is widely available or easily accessible. 
On the other hand, assigning unrelated problems requiring different expertise to the same agency is inefficient; it'd be better to delegate such issues to different agencies already possessing the relevant expertise. <\/p>\n<p> When it comes to AI, there is a common core of technical knowledge. But AI is a general purpose form of computation. The applications span every industry. The risk profiles of applications in, say, transportation or policing are quite different from the risk profiles in, say, music or gaming. While there may be some advantage in collecting the technical expertise in one place, the policy expertise to judge whether and how different uses of AI should be regulated gains little or nothing from being consolidated; in fact, the relevant policy expertise on various applications already resides in dozens of existing agencies. <\/p>\n<p> Another way to say this is that an agency with jurisdiction over all uses of AI would be an economy-wide regulator. The result would not be a specialized agency to supplement Congress, but a shadow legislator that would replace Congress (as well as parts of dozens of other agencies). <\/p>\n<p> Risk of Regulatory Capture. All agencies tend toward regulatory capture, where the agency serves the interests of the regulated parties instead of the public. But industry-specific rulemaking regulators have the highest risk of regulatory capture, in part because the agency and the industry have a shared interest in not being disrupted by new developments. In the fast-paced and highly innovative field of AI, incumbents who help develop the initial regulatory approach would benefit from raising rivals' regulatory costs. This could stifle competition and innovation, potentially leaving the public worse off than if there were no dedicated AI regulatory agency at all. 
<\/p>\n<p> A new AI-specific regulatory body would not be justified by specific expertise, and the risk of regulatory capture would be high. There is no specific policy expertise that could be concentrated in a single agency without the agency becoming a miniature version of all of government. And doing so would most likely favor today's leading AI companies and constrain other models, such as open source. <\/p>\n<p> The Transparency Trap <\/p>\n<p> For these and other reasons, devising and funding a new federal AI agency would be contentious once Congress started negotiating details. In the short term, therefore, it is more likely that policymakers will push for some sort of transparency regulatory regime for AI. The goal would be to make algorithms more \"explainable\" by requiring the revelation of information about the data powering them or the specific developer preferences regarding how tools and applications are tailored. This would be accomplished through \"nutrition labels\" for AI, mandated impact assessments prior to product release, or audits after the fact. <\/p>\n<p> But explainability is easier in theory than in reality. Practically speaking, we know that transparency mandates around privacy and even traditional food nutrition labels have little impact on consumer behavior. And AI has the additional difficulty of figuring out what exactly can be disclosed accurately. \"Even the humans who train deep networks generally cannot look under the hood and provide explanations for the decisions their networks make,\" notes Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans. This confusion would be magnified if policymakers enforce AI transparency through mandated AI audits and impact assessments from those who develop or deploy algorithmic systems. 
<\/p>\n<p> Companies are motivated to produce useful and safe services that their users desire. Industry best practices, audits, and impact assessments can play a useful role in the market process for AI companies, as they already do for financial practices, workplace safety, supply chain issues, and more. <\/p>\n<p> What we ought to avoid is a convoluted, European-style top-down regulatory compliance regime, the kind already enshrined in the E.U.'s forthcoming AI Act, which includes costly requirements for prior conformity assessments for many algorithmic services. Such approaches fail for a number of reasons: <\/p>\n<p> Algorithmic auditing is inherently subjective. Auditing algorithms is not like auditing an accounting ledger, where the numbers either do or do not add up. Companies, regulators, and users can have differing value preferences. Algorithms have to make express or implied tradeoffs between privacy, safety, security, objectivity, accuracy, and other values in a given system. There is no scientifically \"correct\" answer to the question of how to rank these values. <\/p>\n<p> Rapid iteration and evolution. AI systems are being shipped and updated on a weekly or even daily basis. Requiring formal signoff on audits or assessments, many of which would be obsolete before they were completed, would slow the iteration cycle. And converting audits into a formal regulatory process would create several veto points that opponents of AI could use to slow progress in the field. AI developers would likely look to innovate in other jurisdictions if auditing or impact assessments became a bureaucratic and highly convoluted compliance nightmare. 
<\/p>\n<p>    Finally, legislatively mandated algorithmic auditing could also    give rise to the problem of significant political meddling in    speech platforms powered by algorithms, which could have    serious free speech implications. If code is speech, then    algorithms are speech too.  <\/p>\n<p>    More Constructive Approaches  <\/p>\n<p>    Rather than licensing AI development through a new federal    agency, there is a better way.  <\/p>\n<p>    First, politicians and regulators ought to drill down.    Policymakers should understand that AI isn't a singular,    overarching technology, but a diverse range of technologies    with different applications. Each specific area of application    of AI should be assessed for potential benefits and risks. This    should involve a detailed examination of how AI is used, who is    affected by these uses, and what outcomes might be expected. A    balance should be sought to maximize benefits while minimizing    risks.  <\/p>\n<p>    An important part of evaluating a specific use is understanding    the role markets, reputation, and consumer demand play in    aligning each use with the public interest. Each area of AI    application could have unique market pressures and mechanisms    for dealing with that pressure, such as user education, private    codes of conduct, and other soft law mechanisms. These    established practices could obviate the need for regulation or    help identify where gaps remain.  <\/p>\n<p>    After assessing the various AI applications and market    conditions, regulators should prioritize areas where high risks    are not effectively addressed by norms or by existing    regulatory bodies such as the Department of Transportation or    Food and Drug Administration. This prioritization would ensure    that the most urgent and potentially harmful areas receive    adequate regulatory attention. 
In addressing these gaps, policymakers should look first to how to supplement existing agencies with experience in the industry area where AI is being applied. <\/p>\n<p> We do not need a new agency to govern AI. We need a better, more detailed understanding of the opportunities and risks of specific applications of AI. Policymakers should take the time to develop this understanding before jumping to create a whole new agency. There is much to be done to ensure the benefits and minimize the risks of AI, and there is no silver bullet. Instead, policymakers should gird themselves for a long process of investigating and addressing the issues raised by specific applications of AI. It's not as flashy as a new agency, but it's far more likely to address the concerns without killing the beneficial uses. <\/p>\n<p> Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. To join the debate, please email us at <a href=\"mailto:info@fedsoc.org\">info@fedsoc.org<\/a>. <\/p>\n<p>View post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/fedsoc.org\/commentary\/fedsoc-blog\/the-problem-with-ai-licensing-an-fda-for-algorithms\" title=\"The Problem with AI Licensing &amp; an FDA for Algorithms - The Federalist Society\">The Problem with AI Licensing &amp; an FDA for Algorithms - The Federalist Society<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Last year, we released a study for the Federalist Society predicting \"The Coming Onslaught of Algorithmic Fairness Regulations.\" That onslaught has now arrived. 
Interest in artificial intelligence (AI) and its regulation has exploded at all levels of government, and now some policymakers are floating the idea of licensing powerful AI systems and perhaps creating a new FDA for algorithms, complete with a pre-market approval regime for new AI applications <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/federalist\/the-problem-with-ai-licensing-an-fda-for-algorithms-the-federalist-society\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[487839],"tags":[],"class_list":["post-1115462","post","type-post","status-publish","format-standard","hentry","category-federalist"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115462"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1115462"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115462\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1115462"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1115462"},{"taxonomy":"post_tag",
"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1115462"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}