Everything Is Not Terminator: Assessment of Artificial Intelligence Systems


Published in The Journal of Robotics, Artificial Intelligence & Law (January-February 2021)

Many information security and privacy laws such as the California Consumer Privacy Act1 and the New York Stop Hacks and Improve Electronic Data Security Act2 require periodic assessments of an organization's information management systems. Because many organizations collect, use, and store personal information from individuals, much of which could be used to embarrass or impersonate those individuals if inappropriately accessed, these laws require organizations to regularly test and improve the security they use to protect that information.

As yet, there is no similar specific law in the United States directed at artificial intelligence systems ("AIS") requiring the organizations that rely on AIS to test its accuracy, fairness, bias, discrimination, privacy, and security.

However, existing law is broad enough to impose on many organizations a general obligation to assess their AIS, and legislation has been introduced that would require certain entities to conduct impact assessments on their AIS. Even without a regulatory mandate, many organizations should perform AIS assessments as a best practice.

This column summarizes current and pending legal requirements before providing more details about the assessment process.

The Federal Trade Commission's ("FTC") authority to police "unfair or deceptive acts or practices in or affecting commerce" through rulemaking and administrative adjudication is broad enough to govern AIS, and the agency has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation.3 However, the FTC has not issued clear guidance regarding AIS uses that qualify as unfair or deceptive acts or practices. There are general practices that organizations can adopt to minimize their potential for engaging in unfair or deceptive practices, which include conducting assessments of their AIS.4 However, there is no specific FTC rule obligating organizations to assess their AIS.

There have been some legislative efforts to create such an obligation, including the Algorithmic Accountability Act,5 which was proposed in Congress, and a similar bill proposed in New Jersey,6 both in 2019.

The federal bill would require covered entities to conduct "impact assessments" on their "high-risk" AIS in order to evaluate the impacts of the AIS's design process and training data on "accuracy, fairness, bias, discrimination, privacy, and security."7

The New Jersey bill is similar, requiring an evaluation of the AIS's development process, including the design and training data, for impacts on "accuracy, fairness, bias, discrimination, privacy, and security"; the required assessment must include several elements, such as a "detailed description of the best practices used to minimize the risks" and a "cost-benefit analysis."8 The bill would also require covered entities to work with external third parties, independent auditors, and independent technology experts to conduct the assessments, if reasonably possible.9
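Neither bill prescribes a format for the resulting report. As a minimal sketch only, the following Python structure shows one way an organization might capture the evaluation criteria the bills list; the class, field, and system names are illustrative assumptions, not requirements drawn from either bill:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpactAssessment:
    """Hypothetical record of a single AIS impact assessment.

    Field names track the criteria listed in the federal and New Jersey
    bills (accuracy, fairness, bias, discrimination, privacy, security),
    plus the New Jersey bill's best-practices and cost-benefit elements.
    """
    system_name: str
    assessment_date: str
    training_data_sources: List[str]
    accuracy_findings: str
    fairness_and_bias_findings: str
    privacy_and_security_findings: str
    best_practices_used: List[str] = field(default_factory=list)
    cost_benefit_summary: str = ""

# Illustrative usage with made-up findings
assessment = ImpactAssessment(
    system_name="loan-underwriting-model",
    assessment_date="2020-11-01",
    training_data_sources=["2015-2019 loan application records"],
    accuracy_findings="92% accuracy overall; lower for thin-file applicants",
    fairness_and_bias_findings="Approval-rate gap observed across protected classes",
    privacy_and_security_findings="Training data contains unmasked identifiers",
    best_practices_used=["external audit", "human review of adverse decisions"],
    cost_benefit_summary="Remediation cost modest relative to litigation exposure",
)
```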

Although neither of these bills has become law, they represent the expected trend of emerging regulation.10

When organizations rely on AIS to make or inform decisions or actions that have legal or similarly significant effects on individuals, it is reasonable for governments to require that those organizations also conduct periodic assessments of the AIS. For example, state criminal justice systems have begun to adopt AIS that use algorithms to report on a defendant's risk of committing another crime, risk of missing his or her next court date, etc.; human decision makers then use those reports to inform their decisions.11

The idea is that the AIS can be a tool to inform decision makers, such as police, prosecutors, and judges, to help them make better, data-based decisions that eliminate biases they may have against defendants based on race, gender, etc.12 This is potentially a wonderful use for AIS, but only if the AIS actually removes inappropriate and unlawful human bias rather than recreating it.

Unfortunately, the results have been mixed at best, as there is evidence suggesting that some of the AIS in the criminal justice system is merely replicating human bias.

In one example, an African-American teenage girl and a white adult male were each convicted of stealing property totaling about $80. An AIS rated the white defendant as a lower recidivism risk than the teenager, even though he had a much more extensive criminal record, with felonies versus juvenile misdemeanors. Two years after their arrests, the AIS recommendations were revealed to be incorrect: the male defendant was serving an eight-year sentence for another robbery; the teenager had not committed any further crimes.13 Similar issues have been observed in AIS used in hiring,14 lending,15 health care,16 and school admissions.17
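Disparities like these can be surfaced during an assessment with straightforward group-wise comparisons. The sketch below, written in Python against made-up records, shows one common check, comparing false positive rates of a risk-scoring AIS across demographic groups; the group labels, fields, and data are illustrative assumptions, not the methodology of any actual system or study:

```python
from collections import defaultdict

# Illustrative records: (group, predicted_high_risk, reoffended_within_two_years)
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

def false_positive_rate_by_group(rows):
    """False positive rate per group: labeled high risk but did not reoffend,
    divided by all members of the group who did not reoffend."""
    false_positives = defaultdict(int)
    non_reoffenders = defaultdict(int)
    for group, predicted_high_risk, reoffended in rows:
        if not reoffended:
            non_reoffenders[group] += 1
            if predicted_high_risk:
                false_positives[group] += 1
    return {g: false_positives[g] / non_reoffenders[g]
            for g in non_reoffenders if non_reoffenders[g]}

print(false_positive_rate_by_group(records))
# A large gap between groups suggests the AIS is replicating human bias
# and should be catalogued as a risk in the assessment.
```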

Although some organizations are conducting AIS assessments without a legal requirement, a larger segment is reluctant to adopt the assessments as a best practice, as many for-profit companies care more about fidelity to the original data used to train their AIS than they do about eliminating the biases in that original data.18 According to Daniel Soukup, a data scientist with Mostly AI, a start-up experimenting with controlling biases in data, "There's always another priority, it seems. . . . You're trading off revenue against making fair predictions, and I think that is a very hard sell for these institutions and these organizations."19

I suspect, though, that the tide will turn in the other direction in the near future, with or without a direct legislative impetus, similar to the trend in privacy rights and operations. Although most companies in the United States are not subject to broad privacy laws like the California Consumer Privacy Act or the European Union's General Data Protection Regulation, I have observed an increasing number of clients that want to provide the privacy rights afforded by those laws, either because their customers expect them to or because they want to position themselves as companies that care about individuals' privacy.

It is not hard to see a similar trend developing among companies that rely on AIS. As consumers become more aware of the problematic issues involved in AIS decision-making (potential bias, use of sensitive personal information, security of that information, the significant effects of AIS decisions, lack of oversight, etc.), they will become just as demanding about AIS requirements as they are about privacy requirements. As with privacy, consumer expectations will likely be pushed in that direction by jurisdictions that adopt AIS assessment legislation, even if consumers do not live in those jurisdictions.

Organizations that are looking to perform AIS assessments now in anticipation of regulatory activity and consumer expectations should conduct an assessment consistent with the following principles and goals:

Consistent with the New Jersey Algorithmic Accountability Act, any AIS assessment should be done by an outside party, preferably by qualified AI counsel, who can retain a technological consultant to assist. This serves two functions.

First, it avoids the situation in which the developers that created the AIS for the organization also assess it, which could create a conflict of interest, as the developers have an incentive to assess the AIS in a way that is favorable to their work.

Second, by retaining outside AI counsel, in addition to benefiting from the counsel's expertise, organizations are able to claim that the resulting assessment report and any related work product are protected by attorney-client privilege in the event of litigation or a government investigation related to the AIS. Companies that experience or anticipate a data security breach or event retain outside information security counsel for similar reasons, as the resulting breach analysis could be discoverable if outside counsel is not properly retained. The results can be very expensive if the breach report is mishandled.

For example, Capital One recently entered into an $80 million Consent Order with the Department of the Treasury related to a data incident, following an order from a federal court finding that a breach report prepared for Capital One was not properly coordinated through outside counsel and therefore was not protected by attorney-client privilege.20

An AIS assessment should identify, catalogue, and describe the risks of an organization's AIS.

Properly identifying these risks, among others, and describing how the AIS impacts each will allow an organization to understand the issues it must address to improve its AIS.21

Once the risks in the AIS are identified, the assessment should focus on how the organization alerts impacted populations. This can take the form of a public-facing AI policy, posted and maintained in a manner similar to an organization's privacy policy.22 It can also take the form of more pointed pop-up prompts, a written disclosure and consent form, an automated verbal statement in telephone interactions, etc. The appropriate form of the notice will depend on a number of factors, including the organization, the AIS, the at-risk populations, the nature of the risks involved, etc. The notice should include the relevant rights regarding AIS afforded by privacy laws and other regulations.

After implementing appropriate notices, the organization should anticipate receiving comments from members of the impacted populations and the general public. The assessment should help the organization implement a process that allows it to accept, respond to, and act on those comments. This may be similar to how organizations process privacy rights requests from consumers and data subjects, particularly when a notice addresses those rights. The assessment may recommend that certain employees be tasked with accepting and responding to comments, that the organization add operational capabilities to address privacy rights impacting AIS or risks identified in the assessment and objected to in comments, etc. It may be helpful to have a technological consultant provide input on how the organization can leverage its technology to assist in this process.

The assessment should help the organization remediate identified risks. The nature of the remediation will depend on the nature of the risks, the AIS, and the organization. Any outside AIS counsel conducting the assessment needs to be well-versed in the various forms remediation can take. In some instances, properly noticing the risk to the relevant individuals will be sufficient, per both legal requirements and the organization's principles. Other risks cannot or should not be "papered over," but rather obligate the organization to reduce the AIS's potential to injure.23 This may include adding more human oversight, at least temporarily, to check the AIS's output for discriminatory activity or bias. A technology consultant may be able to advise the organization regarding revising the code or procedures of the AIS to address the identified risks.

Additionally, where the AIS is evidencing bias because of the data used to train it, more appropriate historical data or even synthetic data may be used to retrain the AIS to remove or reduce its discriminatory behavior.24
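A related remediation technique, distinct from the synthetic-data approach described above, is to rebalance or reweight the existing training data before refitting the model. The sketch below is a minimal illustration using scikit-learn on made-up data; the group labels, weighting scheme, and model choice are assumptions for demonstration, not a prescription for any particular AIS:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical tabular data: two features, a binary outcome, and a group
# label used only to rebalance the training sample, not as a model input.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
group = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Weight each record inversely to its group's share of the data so that an
# over-represented group does not dominate the retrained model.
share = {g: (group == g).mean() for g in (0, 1)}
weights = np.array([1.0 / share[g] for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
print("Positive-prediction rate, group 0:", model.predict(X[group == 0]).mean())
print("Positive-prediction rate, group 1:", model.predict(X[group == 1]).mean())
```

In practice, the choice of weighting scheme, and whether group labels should be excluded from the model's inputs at all, would themselves be findings of the assessment rather than defaults like those assumed here.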

All organizations that rely on AIS to make decisions that have legal or similarly significant effects on individuals should periodically conduct assessments of their AIS. This is true for all organizations: for-profit companies, non-profit corporations, governmental entities, educational institutions, etc. Doing so will help them avoid potential legal trouble in the event their AIS is inadvertently demonstrating illegal behavior and ensure the AIS acts consistently with the organization's values.

Organizations that adopt assessments earlier rather than later will be in a better position to comply with AIS-specific regulation when it appears and to develop a brand as an organization that cares about fairness.

Footnotes

* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.

1. Cal. Civ. Code § 1798.150 (granting a private right of action when a business fails to "maintain reasonable security procedures and practices appropriate to the nature of the information," with assessments necessary to identify reasonable procedures).

2. New York General Business Law, Chapter 20, Article 39-F, § 899-bb.2(b)(ii)(A)(3) (requiring entities to assess "the sufficiency of safeguards in place to control the identified risks"), § 899-bb.2(b)(ii)(B)(1) (requiring entities to assess "risks in network and software design"), § 899-bb.2(b)(ii)(B)(2) (requiring entities to assess "risks in information processing, transmission and storage"), and § 899-bb.2(b)(ii)(C)(1) (requiring entities to assess "risks of information storage and disposal").

3. 15 U.S.C. § 45(b); 15 U.S.C. § 57a.

4. John Frank Weaver, "Everything Is Not Terminator: Helping AI to Comply with the Federal Trade Commission Act," The Journal of Artificial Intelligence & Law (Vol. 2, No. 4; July-August 2019), 291-299 (other practices include: establishing a governing structure for the AIS; establishing policies to address the use and/or sale of AIS; establishing notice procedures; and ensuring third-party agreements properly allocate liability and responsibility).

5. Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).

6. New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).

7. Algorithmic Accountability Act of 2019, supra note 5, at 2(2) and 3(b).

8. New Jersey Algorithmic Accountability Act, supra note 6, at 2.

9. Id., at 3.

10. For a fuller discussion of these bills and other emerging legislation intended to govern AIS, see Yoon Chae, "U.S. AI Regulation Guide: Legislative Overview and Practical Considerations," The Journal of Artificial Intelligence & Law (Vol. 3, No. 1; January-February 2020), 17-40.

11. See Jason Tashea, "Courts Are Using AI to Sentence Criminals. That Must Stop Now," Wired (April 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

12. Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner, "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing ("The appeal of the [AIS's] risk scores is obvious. . . If computers could accurately predict which defendants were likely to commit new crimes the criminal justice system could be fairer and more selective about who is incarcerated and for how long.").

13. Id.

14. Jeffrey Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (October 9, 2018), https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G (Amazon "realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way").

15. Dan Ennis and Tim Cook, "Banking from AI lending models raises questions of culpability, regulation," Banking Dive (August 16, 2019), https://www.bankingdive.com/news/artificial-intelligence-lending-bias-model-regulation-liability/561085/#:~:text=Bill%20Foster%2C%20D%2DIL%2C,lenders%20for%20mortgage%20refinancing%20loans ("African-Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race").

16. Shraddha Chakradhar, "Widely used algorithm for follow-up care in hospitals is racially biased, study finds," STAT (October 24, 2019), https://www.statnews.com/2019/10/24/widely-used-algorithm-hospitals-racial-bias/ ("An algorithm commonly used by hospitals and other health systems to predict which patients are most likely to need follow-up care classified white patients overall as being more ill than black patients, even when they were just as sick").

17. DJ Pangburn, "Schools are using software to help pick who gets in. What could go wrong?" Fast Company (May 17, 2019), https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong ("If future admissions decisions are based on past decision data, Richardson warns of creating an unintended feedback loop, limiting a school's demographic makeup, harming disadvantaged students, and putting a school out of sync with changing demographics.").

18. Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html.

19. Id.

20. In the Matter of Capital One, N.A., Capital One Bank (USA), N.A., Consent Order (Document #2020-036), Department of the Treasury, Office of the Comptroller of the Currency, AA-EC-20-51 (August 5, 2020), https://www.occ.gov/static/enforcement-actions/ea2020-036.pdf; In re: Capital One Consumer Data Security Breach Litigation, MDL No. 1:19md2915 (AJT/JFA) (E.D. Va. May 26, 2020).

21. For a great discussion of identifying risks in AIS, see Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms," Brookings (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

22. For more discussion of public-facing AI policies, see John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part I," The Journal of Artificial Intelligence & Law (Vol. 2, No. 1; January-February 2019), 59-65; John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part II," The Journal of Artificial Intelligence & Law (Vol. 2, No. 2; March-April 2019), 141-146.

23. For a broad overview of remediating AIS, see James Manyika, Jake Silberg, and Brittany Presten, "What Do We Do About the Biases in AI?" Harvard Business Review (October 25, 2019), https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

24. There are numerous popular and academic articles exploring this idea, including Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html, and Lokke Moerel, "Algorithms can reduce discrimination, but only with proper data," IAPP (November 16, 2018), https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
