Published in The Journal of Robotics, Artificial Intelligence & Law (January-February 2021)
Many information security and privacy laws, such as the California Consumer Privacy Act1 and the New York Stop Hacks and Improve Electronic Data Security Act,2 require periodic assessments of an organization's information management systems. Because many organizations collect, use, and store personal information from individuals, much of which could be used to embarrass or impersonate those individuals if inappropriately accessed, these laws require organizations to regularly test and improve the security they use to protect that information.
As of yet, there is no similar specific law in the United States directed at artificial intelligence systems ("AIS") requiring the organizations that rely on AIS to test them for accuracy, fairness, bias, discrimination, privacy, and security.
However, existing law is broad enough to impose on many organizations a general obligation to assess their AIS, and legislation has appeared requiring certain entities to conduct impact assessments on their AIS. Even without a regulatory mandate, many organizations should perform AIS assessments as a best practice.
This column summarizes current and pending legal requirements before providing more details about the assessment process.
The Federal Trade Commission's ("FTC") authority to police "unfair or deceptive acts or practices in or affecting commerce" through rulemaking and administrative adjudication is broad enough to govern AIS, and it has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation.3 However, the FTC has not issued clear guidance regarding AIS uses that qualify as unfair or deceptive acts or practices. There are general practices that organizations can adopt to minimize their potential for engaging in unfair or deceptive practices, including conducting assessments of their AIS.4 However, there is no specific FTC rule obligating organizations to assess their AIS.
There have been some legislative efforts to create such an obligation, including the Algorithmic Accountability Act,5 which was proposed in Congress, and a similar bill proposed in New Jersey,6 both in 2019.
The federal bill would require covered entities to conduct "impact assessments" on their "high-risk" AIS in order to evaluate the impacts of the AIS's design process and training data on "accuracy, fairness, bias, discrimination, privacy, and security."7
The New Jersey bill is similar, requiring an evaluation of the AIS's development process, including the design and training data, for impacts on "accuracy, fairness, bias, discrimination, privacy, and security"; the evaluation must include several elements, including a "detailed description of the best practices used to minimize the risks" and a "cost-benefit analysis."8 It would also require covered entities to work with external third parties, independent auditors, and independent technology experts to conduct the assessments, if reasonably possible.9
Although neither of these bills has become law, they represent the expected trend of emerging regulation.10
When organizations rely on AIS to make or inform decisions or actions that have legal or similarly significant effects on individuals, it is reasonable for governments to require that those organizations also conduct periodic assessments of the AIS. For example, state criminal justice systems have begun to adopt AIS that use algorithms to report on a defendant's risk of committing another crime, risk of missing his or her next court date, etc.; human decision makers then use those reports to inform their decisions.11
The idea is that the AIS can be a tool to inform decision makers (police, prosecutors, judges) to help them make better, data-based decisions that eliminate biases they may have against defendants based on race, gender, etc.12 This is potentially a wonderful use for AIS, but only if the AIS actually removes inappropriate and unlawful human bias rather than recreating it.
Unfortunately, the results have been mixed at best, as there is evidence suggesting that some of the AIS in the criminal justice system is merely replicating human bias.
In one example, an African-American teenage girl and a white adult male were each convicted of stealing property totaling about $80. An AIS rated the white defendant as a lower recidivism risk than the teenager, even though he had a much more extensive criminal record, with felonies versus juvenile misdemeanors. Two years after their arrests, the AIS recommendations were revealed to be incorrect: the male defendant was serving an eight-year sentence for another robbery; the teenager had not committed any further crimes.13 Similar issues have been observed in AIS used in hiring,14 lending,15 health care,16 and school admissions.17
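An assessment can surface this kind of disparity with simple group-wise error metrics. The sketch below is illustrative only: the data, field names, and groups are hypothetical, and a real assessment would use validated statistical methods on actual case outcomes. It compares false positive rates, i.e., how often defendants who did not reoffend were nonetheless labeled high risk, across groups:

```python
from collections import defaultdict

def false_positive_rates(records):
    """For each group, compute the share of non-reoffenders
    who were nonetheless labeled high risk."""
    flagged = defaultdict(int)  # high-risk labels given to non-reoffenders
    total = defaultdict(int)    # all non-reoffenders in the group
    for r in records:
        if not r["reoffended"]:
            total[r["group"]] += 1
            if r["high_risk"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical assessment sample: identical outcomes, unequal labels.
sample = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
]
rates = false_positive_rates(sample)
# Here group A is wrongly flagged twice as often as group B,
# the pattern an assessment would flag for remediation.
```

A gap between the groups' rates does not by itself establish unlawful bias, but it identifies exactly the kind of disparity the assessment process described below is meant to catalogue and address.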
Although some organizations are conducting AIS assessments without a legal requirement, a larger segment is reluctant to adopt the assessments as a best practice, as many for-profit companies care more about fidelity to the original data used to train their AIS than they do about eliminating the biases in that original data.18 According to Daniel Soukup, a data scientist with Mostly AI, a start-up experimenting with controlling biases in data, "There's always another priority, it seems. . . . You're trading off revenue against making fair predictions, and I think that is a very hard sell for these institutions and these organizations."19
I suspect, though, that the tide will turn in the other direction in the near future, with or without a direct legislative impetus, similar to the trend in privacy rights and operations. Although most companies in the United States are not subject to broad privacy laws like the California Consumer Privacy Act or the European Union's General Data Protection Regulation, I have observed an increasing number of clients that want to provide the privacy rights afforded by those laws, either because their customers expect them to or because they want to position themselves as companies that care about individuals' privacy.
It is not hard to see a similar trend developing among companies that rely on AIS. As consumers become more aware of the problematic issues involved in AIS decision-making (potential bias, use of sensitive personal information, security of that information, the significant effects, lack of oversight, etc.), they will become just as demanding about AIS requirements as privacy requirements. As with privacy, consumer expectations will likely be pushed in that direction by jurisdictions that adopt AIS assessment legislation, even if the consumers do not live in those jurisdictions.
Organizations that are looking to perform AIS assessments now, in anticipation of regulatory activity and consumer expectations, should conduct an assessment consistent with the following principles and goals:
Consistent with the New Jersey Algorithmic Accountability Act, any AIS assessment should be done by an outside party, preferably by qualified AI counsel, who can retain a technology consultant to assist them. This serves two functions.
First, it will avoid the situation in which the developers that created the AIS for the organization are also assessing it, which could result in a conflict of interest, as the developers have an incentive to assess the AIS in a way that is favorable to their work.
Second, by retaining outside AI counsel, in addition to benefiting from the counsel's expertise, organizations are able to claim that the resulting assessment report and any related work product is protected by attorney-client privilege in the event that there is litigation or a government investigation related to the AIS. Companies that experience or anticipate a data security breach or event retain outside information security counsel for similar reasons, as the resulting breach analysis could be discoverable if outside counsel is not properly retained. The results can be very expensive if the breach report is mishandled.
For example, Capital One recently entered into an $80 million Consent Order with the Department of Treasury related to a data incident, following a federal court ruling that a breach report prepared for Capital One was not properly coordinated through outside counsel and was therefore not protected by attorney-client privilege.20
An AIS assessment should identify, catalogue, and describe the risks of an organization's AIS.
Properly identifying these risks, among others, and describing how the AIS impacts each will allow an organization to understand the issues it must address to improve its AIS.21
Once the risks in the AIS are identified, the assessment should focus on how the organization alerts impacted populations. This can be in the form of a public-facing AI policy, posted and maintained in a manner similar to an organization's privacy policy.22 It can also take the form of more pointed pop-up prompts, a written disclosure and consent form, an automated verbal statement in telephone interactions, etc. The appropriate form of the notice will depend on a number of factors, including the organization, the AIS, the at-risk populations, the nature of the risks involved, etc. The notice should include the relevant rights regarding AIS afforded by privacy laws and other regulations.
After implementing appropriate notices, the organization should anticipate receiving comments from members of the impacted populations and the general public. The assessment should help the organization implement a process that allows it to accept, respond to, and act on those comments. This may be similar to how organizations process privacy rights requests from consumers and data subjects, particularly when a notice addresses those rights. The assessment may recommend that certain employees be tasked with accepting and responding to comments, that the organization add operative capabilities to address privacy rights impacting the AIS or risks identified in the assessment and objected to in comments, etc. It may be helpful to have a technology consultant provide input on how the organization can leverage its technology to assist in this process.
The assessment should help the organization remediate identified risks. The nature of the remediation will depend on the nature of the risks, the AIS, and the organization. Any outside AI counsel conducting the assessment needs to be well-versed in the various forms remediation can take. In some instances, providing proper notice of the risk to the relevant individuals will be sufficient, per both legal requirements and the organization's principles. Other risks cannot or should not be "papered over," but rather obligate the organization to reduce the AIS's potential to injure.23 This may include adding more human oversight, at least temporarily, to check the AIS's output for discriminatory activity or bias. A technology consultant may be able to advise the organization on revising the code or procedures of the AIS to address the identified risks.
Additionally, where the AIS is evidencing bias because of the data used to train it, more appropriate historical data or even synthetic data may be used to retrain the AIS to remove or reduce its discriminatory behavior.24
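One common pre-processing technique for this kind of data-level remediation is reweighting: assigning each training example a weight so that group membership and outcome are statistically independent in the weighted training set, which keeps the retrained model from inheriting the skew in the historical data. The sketch below is a minimal illustration under hypothetical data; the field names and toy records are assumptions, not drawn from any particular AIS.

```python
from collections import Counter

def reweigh(examples):
    """Weight each example by expected_count / observed_count for its
    (group, outcome) pair, so group and outcome become independent
    in the weighted training set."""
    total = len(examples)
    group_counts = Counter(e["group"] for e in examples)
    outcome_counts = Counter(e["outcome"] for e in examples)
    pair_counts = Counter((e["group"], e["outcome"]) for e in examples)
    weights = []
    for e in examples:
        expected = group_counts[e["group"]] * outcome_counts[e["outcome"]] / total
        observed = pair_counts[(e["group"], e["outcome"])]
        weights.append(expected / observed)
    return weights

# Hypothetical skewed history: group A was mostly denied,
# group B mostly approved.
data = [
    {"group": "A", "outcome": "deny"},
    {"group": "A", "outcome": "deny"},
    {"group": "A", "outcome": "approve"},
    {"group": "B", "outcome": "approve"},
    {"group": "B", "outcome": "approve"},
    {"group": "B", "outcome": "deny"},
]
w = reweigh(data)
# Under-represented pairs (A/approve, B/deny) receive weight 1.5;
# over-represented pairs (A/deny, B/approve) receive weight 0.75.
```

Retraining on the weighted examples dilutes the historical skew without discarding any records; whether such a technique is appropriate for a given AIS is exactly the sort of question the assessment, with input from a technology consultant, should answer.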
All organizations that rely on AIS to make decisions that have legal or similarly significant effects on individuals should periodically conduct assessments of their AIS. This is true for all organizations: for-profit companies, non-profit corporations, governmental entities, educational institutions, etc. Doing so will help them avoid potential legal trouble in the event their AIS is inadvertently engaging in illegal behavior and will help ensure the AIS acts consistently with the organization's values.
Organizations that adopt assessments earlier rather than later will be in a better position to comply with AIS-specific regulation when it appears and to develop a brand as an organization that cares about fairness.
Footnotes
* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.
1. Cal. Civ. Code § 1798.150 (granting a private right of action when a business fails to "maintain reasonable security procedures and practices appropriate to the nature of the information," with assessments necessary to identify reasonable procedures).
2. New York General Business Law, Chapter 20, Article 39-F, § 899-bb.2(b)(ii)(A)(3) (requiring entities to assess "the sufficiency of safeguards in place to control the identified risks"), § 899-bb.2(b)(ii)(B)(1) (requiring entities to assess "risks in network and software design"), § 899-bb.2(b)(ii)(B)(2) (requiring entities to assess "risks in information processing, transmission and storage"), and § 899-bb.2(b)(ii)(C)(1) (requiring entities to assess "risks of information storage and disposal").
3. 15 U.S.C. § 45(b); 15 U.S.C. § 57a.
4. John Frank Weaver, "Everything Is Not Terminator: Helping AI to Comply with the Federal Trade Commission Act," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 4; July-August 2019), 291-299 (other practices include: establishing a governing structure for the AIS; establishing policies to address the use and/or sale of AIS; establishing notice procedures; and ensuring third-party agreements properly allocate liability and responsibility).
5. Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).
6. New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).
7. Algorithmic Accountability Act of 2019, supra note 5, at §§ 2(2) and 3(b).
8. New Jersey Algorithmic Accountability Act, supra note 6, at § 2.
9. Id., at 3.
10. For a fuller discussion of these bills and other emerging legislation intended to govern AIS, see Yoon Chae, "U.S. AI Regulation Guide: Legislative Overview and Practical Considerations," The Journal of Robotics, Artificial Intelligence & Law (Vol. 3, No. 1; January-February 2020), 17-40.
11. See Jason Tashea, "Courts Are Using AI to Sentence Criminals. That Must Stop Now," Wired (April 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.
12. Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner, "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing ("The appeal of the [AIS's] risk scores is obvious. . . . If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long.").
13. Id.
14. Jeffrey Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (October 9, 2018), https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G (Amazon "realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way").
15. Dan Ennis and Tim Cook, "Banking from AI lending models raises questions of culpability, regulation," Banking Dive (August 16, 2019), https://www.bankingdive.com/news/artificial-intelligence-lending-bias-model-regulation-liability/561085/ ("African-Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race").
16. Shraddha Chakradhar, "Widely used algorithm for follow-up care in hospitals is racially biased, study finds," STAT (October 24, 2019), https://www.statnews.com/2019/10/24/widely-used-algorithm-hospitals-racial-bias/ ("An algorithm commonly used by hospitals and other health systems to predict which patients are most likely to need follow-up care classified white patients overall as being more ill than black patients, even when they were just as sick").
17. DJ Pangburn, "Schools are using software to help pick who gets in. What could go wrong?" Fast Company (May 17, 2019), https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong ("If future admissions decisions are based on past decision data, Richardson warns of creating an unintended feedback loop, limiting a school's demographic makeup, harming disadvantaged students, and putting a school out of sync with changing demographics.").
18. Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html.
19. Id.
20. In the Matter of Capital One, N.A., Capital One Bank (USA), N.A., Consent Order (Document #2020-036), Department of Treasury, Office of the Comptroller of the Currency, AA-EC-20-51 (August 5, 2020), https://www.occ.gov/static/enforcement-actions/ea2020-036.pdf; In re: Capital One Consumer Data Security Breach Litigation, MDL No. 1:19md2915 (AJT/JFA) (E.D. Va. May 26, 2020).
21. For a great discussion of identifying risks in AIS, see Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms," Brookings (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.
22. For more discussion of public-facing AI policies, see John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part I," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 1; January-February 2019), 59-65; John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part II," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 2; March-April 2019), 141-146.
23. For a broad overview of remediating AIS, see James Manyika, Jake Silberg, and Brittany Presten, "What Do We Do About the Biases in AI?" Harvard Business Review (October 25, 2019), https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
24. There are numerous popular and academic articles exploring this idea, including Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html, and Lokke Moerel, "Algorithms can reduce discrimination, but only with proper data," IAPP (November 16, 2018), https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.