{"id":169663,"date":"2024-06-28T02:39:35","date_gmt":"2024-06-28T06:39:35","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/should-artificial-intelligence-supply-plain-meaning-the-11th-circuit-wants-to-know-hunton-andrews-kurth-llp\/"},"modified":"2024-08-18T12:47:42","modified_gmt":"2024-08-18T16:47:42","slug":"should-artificial-intelligence-supply-plain-meaning-the-11th-circuit-wants-to-know-hunton-andrews-kurth-llp","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/should-artificial-intelligence-supply-plain-meaning-the-11th-circuit-wants-to-know-hunton-andrews-kurth-llp.php","title":{"rendered":"Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know &#8211; Hunton Andrews Kurth LLP"},"content":{"rendered":"<p>Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know<\/p>\n<p>Insurance coverage lawsuits often hinge on the plain and ordinary meaning of specific words or phrases. But not every word in an insurance policy can be defined. Yet without stable and predictable definitions, neither policyholders nor insurers can establish a clear and consistent scope of coverage. In a recent concurring opinion, Eleventh Circuit Judge Kevin Newsom suggests that artificial intelligence (AI) large language models (LLMs) could help resolve these definitional debates. His opinion in Snell v. United Specialty Insurance Company, No. 22-12581, 2024 WL 2717700 (11th Cir. May 28, 2024) highlights the pros and cons of calling upon technology to supply plain meaning.<\/p>\n<p>This approach may even offer promise for a fundamental issue plaguing the insurability of AI risk, which we discussed last month. That is, how should AI be defined to ensure a functional and predictable scope of coverage?
<\/p>\n<p>LLMs as a Tool in the Interpretive Toolkit<\/p>\n<p>In Snell, an insured sought coverage under a Commercial General Liability policy in connection with a lawsuit brought after a child sustained injuries while using an in-ground trampoline. The insurer denied coverage and refused to defend the lawsuit. The lawsuit alleged that Snell, a landscaper, negligently installed the trampoline in a client's backyard. The district court found that coverage would turn on whether installation of the trampoline amounted to \"landscaping,\" as that term was used in the policy. But the policy did not supply a definition for the term \"landscaping.\" The court, therefore, turned to the common, everyday meaning of the term, which the district court found not to include trampoline installation.<\/p>\n<p>The Eleventh Circuit ultimately affirmed the district court's decision based on Alabama-law-specific grounds unrelated to the meaning of \"landscaping.\" Yet, of particular note, in a concurring opinion, Judge Newsom suggested that LLMs like OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude could help discern the ordinary meaning of undefined words in legal instruments, including insurance policies.<\/p>\n<p>Judge Newsom identified several benefits to using LLMs for this purpose. LLMs train on vast amounts of ordinary-language data, much more than might be available through a dictionary, making them particularly adept at determining common usage. They understand context, which helps them detect nuanced language patterns. LLMs are also increasingly accessible, making them an inexpensive research tool for judges, lawyers and ordinary citizens. In addition, using LLMs to facilitate ordinary-meaning interpretation enhances the transparency of the interpretive process because queries and outputs from LLMs can be fully disclosed to the public.
<\/p>\n<p>Judge Newsom also pointed out potential drawbacks. LLMs can hallucinate and generate incorrect information at times. They might not fully capture offline speech, potentially missing how underrepresented groups use language. There is also the risk that lawyers, judges or litigants may try to manipulate LLMs to favor their preferred interpretation. Moreover, over-reliance on LLMs could lead to dystopian \"robo judges.\"<\/p>\n<p>Despite these concerns, Judge Newsom suggests that AI could be another tool in the judiciary's toolbox, alongside dictionaries and semantic canons.<\/p>\n<p>Can LLMs Define AI?<\/p>\n<p>As discussed previously, we have identified the general inability to clearly and uniformly define AI for purposes of insuring AI risks. Following Judge Newsom's suggestion, we asked ChatGPT to provide a stable and predictable definition of AI for use in insurance policies, to answer the question posed in our last article: how should insurance policies define AI to ensure a functional scope of coverage? Here's ChatGPT's response:<\/p>\n<p>\"Artificial Intelligence (AI) refers to the capability of a machine to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and natural language understanding. AI systems include technologies like machine learning algorithms, neural networks, and natural language processing, which enable these machines to analyze data, recognize patterns, and make decisions or predictions based on that analysis.\"<\/p>\n<p>ChatGPT even explained why its definition is a good one. It explained that this definition is comprehensive, covering all AI capabilities and ensuring all current and foreseeable technologies are included. 
It is clear and specific, identifying key components of AI systems, and technologically neutral, making it adaptable to future advancements. By focusing on AI's functional aspects, it aligns with practical insurance concerns. This definition provides a clear framework, reducing disputes and aligning with existing legal frameworks, making it easier to use in legal contexts. It also minimizes differing interpretations, leading to more straightforward claims processes and fewer litigation risks. Whether this AI-generated definition, or some iteration of it, might provide a functional definition for purposes of insurance remains to be seen.<\/p>\n<p>Conclusion<\/p>\n<p>As policyholders and insurers work to resolve the age-old task of supplying meaning to undefined terms, or defining new risks like those posed by AI, they might find it useful to follow Judge Newsom's recommendation and use AI among the other tools in their toolkits to resolve definitional debates. For now, however, while landscapers and acrobats can rest assured knowing that trampolines are not landscaping (at least in the 11th Circuit), the more vexing insurance-related AI issue remains: what is AI?<\/p>\n<p>Go here to see the original: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.huntonak.com\/hunton-retail-law-resource\/should-ai-supply-plain-meaning-the-11th-circuit-wants-to-know\" title=\"Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know - Hunton Andrews Kurth LLP\">Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know - Hunton Andrews Kurth LLP<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know Insurance coverage lawsuits often hinge on the plain and ordinary meaning of specific words or phrases. 
But not every word in an insurance policy can be defined <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/should-artificial-intelligence-supply-plain-meaning-the-11th-circuit-wants-to-know-hunton-andrews-kurth-llp.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-169663","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169663"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=169663"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169663\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=169663"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=169663"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=169663"}],"curies":[{"name":"wp","href":"https
:\/\/api.w.org\/{rel}","templated":true}]}}