FTC's Tips on Using Artificial Intelligence (AI) and Algorithms – The National Law Review

Artificial intelligence (AI) technology that uses algorithms to assist in decision-making offers tremendous opportunity to make predictions and evaluate big data. On April 8, 2020, the Federal Trade Commission (FTC) provided reminders in its Tips and Advice blog post, "Using Artificial Intelligence and Algorithms."

This is not the first time the FTC has focused on data analytics. In 2016, it issued a report on big data.

AI technology may appear objective and unbiased, but the FTC warns of the potential for unfair or discriminatory outcomes and the perpetuation of existing socioeconomic disparities. For example, the FTC pointed out, a well-intentioned algorithm may be used to reach a positive decision, yet its outcome may unintentionally and disproportionately affect a particular minority group.

The FTC does not want consumers to be misled. It provided the following example: if a company's use of doppelgängers, whether a fake dating profile, phony follower, deepfake, or an AI chatbot, misleads consumers, that company could face an FTC enforcement action.

Businesses that obtain AI-derived data from a third-party consumer reporting agency (CRA) and make decisions based on that data have particular obligations under the federal Fair Credit Reporting Act (FCRA) and analogous state laws. Under the FCRA, a vendor that assembles consumer information to automate decision-making about eligibility for credit, employment, insurance, housing, or similar benefits and transactions may be a consumer reporting agency. An employer relying on automated decisions based on information from a third-party vendor is the user of that information.

As the user, the business must provide the consumer an adverse action notice required by the FCRA if it takes an adverse action against the consumer. The content of the notice must be appropriate to the adverse action, and it may consist of a copy of the consumer report containing the AI information, the federal summary of rights, and other information. The vendor that is the CRA has an obligation to implement reasonable procedures to ensure the maximum possible accuracy of consumer reports and to provide consumers with access to their own information, along with the ability to correct any errors. The FTC wants transparency and the ability to provide a well-explained AI decision if the consumer asks.
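For illustration only, the sketch below models in Python the pieces a user of a consumer report might assemble into an adverse action notice. The class, field names, and sample values are hypothetical and are not drawn from the FTC post or the statute; actual notice content should be confirmed against the FCRA and with counsel.

```python
# Hypothetical sketch of an adverse action notice package; not legal advice.
from dataclasses import dataclass


@dataclass
class AdverseActionNotice:
    consumer_name: str
    action_taken: str                        # e.g., "employment application denied"
    cra_name: str                            # the consumer reporting agency relied on
    cra_contact: str
    include_report_copy: bool = True         # copy of the consumer report used
    include_summary_of_rights: bool = True   # federal summary of FCRA rights
    dispute_instructions: str = (
        "You may dispute the accuracy or completeness of the report "
        "directly with the consumer reporting agency."
    )


# Hypothetical example values.
notice = AdverseActionNotice(
    consumer_name="Jane Applicant",
    action_taken="employment application denied",
    cra_name="Example Screening Vendor",
    cra_contact="1-800-555-0100",
)
print(notice)
```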

Takeaways for Employers

Carefully review use of AI to ensure it does not result in discrimination. According to the FTC, in a credit context, use of a factor such as zip code in an algorithm could result in a disparate impact on a particular protected group (a simple rate comparison is sketched after the questions below).

Accuracy and integrity of data are key.

Validation of AI models is important to minimizing risk, as are post-validation monitoring and periodic re-validation.

Review whether federal and state FCRA laws apply.

Continue self-monitoring by asking:

How representative is your data set?

Does your data model account for biases?

How accurate are your predictions based on big data?

Does your reliance on big data raise ethical or fairness concerns?
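The disparate impact concern noted above can be screened with simple selection-rate comparisons. Below is a minimal Python sketch, assuming hypothetical group labels and outcome data, that applies the common "four-fifths rule" of thumb; it is illustrative only and is not a test prescribed by the FTC.

```python
# Illustrative four-fifths-rule style disparate impact check.
# Group labels, threshold, and data are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 1 (favorable) / 0 (unfavorable)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}


def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical approval outcomes by (illustrative) group label.
    outcomes = {
        "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% favorable
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% favorable
    }
    print(selection_rates(outcomes))       # selection rate per group
    print(disparate_impact_flags(outcomes))  # True where a group is flagged
```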

The FTC's message: use AI, but proceed with accountability and integrity.

© Jackson Lewis P.C. 2020
