{"id":168204,"date":"2024-01-04T02:38:38","date_gmt":"2024-01-04T07:38:38","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/opinion-a-i-use-by-law-enforcement-must-be-strictly-regulated-the-new-york-times\/"},"modified":"2024-08-18T12:53:08","modified_gmt":"2024-08-18T16:53:08","slug":"opinion-a-i-use-by-law-enforcement-must-be-strictly-regulated-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/opinion-a-i-use-by-law-enforcement-must-be-strictly-regulated-the-new-york-times.php","title":{"rendered":"Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated &#8211; The New York Times"},"content":{"rendered":"<p>One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter: the federal Office of Management and Budget. The office, which oversees the execution of the president's policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.<\/p>\n<p>The office's work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people's rights.<\/p>\n<p>As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement's use of emerging technologies. 
These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies including facial recognition and automated license plate readers.<\/p>\n<p>Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.<\/p>\n<p>All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.<\/p>\n<p>Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.
<\/p>\n<p>The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people's safety or well-being at risk or violate their rights. If these proposed minimum practices are not met, technologies that fall short would be prohibited after next Aug. 1.<\/p>\n<p>Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals, especially in marginalized communities, must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world, and be continually monitored. Agencies would have to solicit public comment before using the technologies, including from the affected communities.<\/p>\n<p>The proposed requirements are serious ones. They should have been in place before law enforcement began using these emerging technologies. Given the rapid adoption of these tools, without evidence of equity or efficacy and with insufficient attention to preventing mistakes, we fully anticipate some A.I. technologies will not meet the proposed standards and their use will be banned for noncompliance.<\/p>\n<p>The overall thrust of the federal A.I. initiative is to push for rapid use of untested technologies by law enforcement, an approach that too often fails and causes harm. 
For that reason, the Office of Management and Budget must play a serious oversight role.<\/p>\n<p>Far and away, the most worrisome elements in the proposal are provisions that create the opportunity for loopholes. For example, the chief A.I. officer of each federal agency could waive proposed protections with nothing more than a justification sent to the Office of Management and Budget. Worse yet, the justification need only claim an \"unacceptable impediment to critical agency operations,\" the sort of claim law enforcement regularly makes to avoid regulation.<\/p>\n<p>This waiver provision has the potential to wipe away all that the proposal promises. No waiver should be permitted without clear proof that it is essential, proof that in our experience law enforcement typically cannot muster. No one person should have the power to issue such a waiver. There must be careful review to ensure that waivers are legitimate. Unless the recommendations are enforced strictly, we will see more surveillance, more people forced into unjustified encounters with law enforcement, and more harm to communities of color. Technologies that are clearly shown to be discriminatory should not be used.<\/p>\n<p>There is also a vague exception for \"national security,\" a phrase frequently used to excuse policing from legal protections for civil rights and against discrimination. \"National security\" requires a sharper definition to prevent the exemption from being invoked without valid cause or oversight.<\/p>\n<p>Finally, nothing in this proposal applies beyond federal government agencies. The F.B.I., the Transportation Security Administration and other federal agencies are aggressively embracing facial recognition and other biometric technologies that can recognize individuals by their unique physical characteristics. 
But so are state and local agencies, which do not fall under these guidelines. The federal government regularly offers federal funding as a carrot to win compliance from state and local agencies with federal rules. It should do the same here.<\/p>\n<p>We hope the Office of Management and Budget will set a higher standard at the federal level for law enforcement's use of emerging technologies, a standard that state and local governments should also follow. It would be a shame to make the progress envisioned in this proposal and have it undermined by backdoor exceptions.<\/p>\n<p>Joy Buolamwini is the founder of the Algorithmic Justice League, which seeks to raise awareness about the potential harms of artificial intelligence, and the author of \"Unmasking AI: My Mission to Protect What Is Human in a World of Machines.\" Barry Friedman is a professor at New York University's School of Law and the faculty director of its Policing Project. He is the author of \"Unwarranted: Policing Without Permission.\"<\/p>\n<p>Read more from the original source:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.nytimes.com\/2024\/01\/02\/opinion\/ai-police-regulation.html\" title=\"Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated - The New York Times\">Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated - The New York Times<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter: the federal Office of Management and Budget. 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/opinion-a-i-use-by-law-enforcement-must-be-strictly-regulated-the-new-york-times.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234935],"tags":[],"class_list":["post-168204","post","type-post","status-publish","format-standard","hentry","category-ai"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/168204"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=168204"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/168204\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=168204"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=168204"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=168204"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}