No bots need apply: Microtargeting employment ads in the age of AI – HR Dive

Keith E. Sonderling is a commissioner for the U.S. Equal Employment Opportunity Commission. Views are the author's own.

It's no secret that online advertising is big business. In 2019, digital ad spending in the United States surpassed traditional ad spending for the first time, and by 2023, digital ad spending will all but eclipse it.

It's easy to understand why. Seventy-two percent of Americans use social media, and nearly half of millennials and Gen Z report being online "almost constantly." An overwhelming majority of Americans under 40 dislike and distrust traditional advertising. Digital marketing is now the most effective way for advertisers to reach an enormous segment of the population, and social media platforms have capitalized on this to the tune of billions of dollars. In 2020, digital advertising accounted for 98% of Facebook's $86 billion revenue, more than 80% of Twitter's $3.7 billion revenue, and nearly 100% of Snapchat's $2.5 billion revenue.

But clickbait alone will not guarantee that advertisers and social media platforms continue cashing in on digital marketing. For these cutting-edge marketing technologies to be sustainable in job-related advertising, they must be designed and utilized in strict compliance with longstanding civil rights laws that prohibit discriminatory marketing practices. When these laws were passed in 1964, advertising more closely resembled the TV world of Darrin Stephens and Don Draper than the current world of social media influencers and "internet famous" celebrities. Yet federal antidiscrimination laws are just as relevant to digital marketing as they were to traditional forms of advertising.

One of the reasons advertisers are willing to spend big on digital marketing is the ability to "microtarget" consumers. Online platforms are not simply selling ad space; they are selling access to consumer information culled and correlated through the use of proprietary artificial intelligence algorithms. These algorithms can connect countless data points about individual consumers, from demographic details to browsing history, to make predictions. These predictions can include what each individual is most likely to buy, when they are most likely to buy it, how much they are willing to pay, and even what type of ads they are most likely to click.

So, suppose I have a history of ordering pizza online every Thursday at about 7 p.m. In that case, digital advertisers might start bombarding me with local pizzeria ads every Thursday as I approach dinnertime. Savvy advertisers might even rely on a platform's AI-enabled advertising tools to offer customized coupons to entice me to choose them over competitors.
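To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of scoring a targeting system might perform for that pizza campaign. The feature names, weights, and thresholds are my own inventions for illustration and are not drawn from any actual advertising platform.

```python
# Hypothetical sketch of how an ad platform might score a user for a
# "Thursday-evening pizza" campaign. Feature names, weights, and the
# scoring rule are invented for illustration only.
from dataclasses import dataclass


@dataclass
class UserSignals:
    orders_pizza_on_thursdays: bool   # inferred from purchase history
    hour_of_day: int                  # local time, 0-23
    clicked_food_ads_last_30d: int    # engagement signal


def pizza_ad_score(user: UserSignals) -> float:
    """Return a 0-1 likelihood-style score for showing the pizza ad."""
    score = 0.1  # baseline interest
    if user.orders_pizza_on_thursdays:
        score += 0.5
    if 17 <= user.hour_of_day <= 20:  # approaching dinnertime
        score += 0.2
    score += min(user.clicked_food_ads_last_30d, 10) * 0.02
    return min(score, 1.0)


if __name__ == "__main__":
    me = UserSignals(orders_pizza_on_thursdays=True, hour_of_day=18,
                     clicked_food_ads_last_30d=4)
    print(f"Pizza ad score: {pizza_ad_score(me):.2f}")  # prints 0.88
```

Real platforms rely on far more data points and learned models rather than hand-set weights, but the principle is the same: behavioral signals are combined into a prediction about who should see which ad, and when.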

But microtargeting ads to an audience is one thing when you are trying to sell local takeout food. It is quite another when you are advertising employment opportunities. Facebook found this out the hard way when, in March 2019, it settled several lawsuits brought by civil rights groups and private litigants arising from allegations that the social media giant's advertising platform enabled companies to exclude people from the audience for employment ads based on protected characteristics.

According to one complaint filed in the Northern District of California, advertisers could customize their audiences simply by ticking off boxes next to a list of characteristics. Employers could check an "include" box next to preferred characteristics or an "exclude" box next to disfavored characteristics, including race, sex, religion, age, and national origin. Shortly after the complaint was filed, Facebook announced that it would be disabling a number of its advertising features until the company could conduct a full review of how exclusion targeting was being used. As part of its settlement of the case, Facebook pledged to establish a separate advertising portal with limited targeting options for employment ads.

To be clear, demographics matter in advertising, and relying on demographic information is not necessarily problematic from a legal perspective. Think for a moment about Super Bowl ads. Advertisers have historically paid enormous sums for air time during the game not only because of the size of the audience but because of the money that members of that particular audience are willing to spend on things like lite beer, fast food, and SUVs. Super Bowl advertisers make projections about who will be tuning in to the game and what sorts of products they are more or less likely to buy. They target a general audience in the knowledge that ads for McDonald's Value Meals and Domino's Pizza will reach viewers who are munching on Cheetos and nibbling on kale chips alike.

But AI-enabled advertising is different. Instead of creating ads for general audiences, online advertisers can create specific audiences for their ads. This type of "microtargeting" has significant implications under federal civil rights law, which prohibits employment discrimination based on race, color, religion, sex, national origin, age, disability, pregnancy, or genetic information. These protections extend to the hiring process. So, a law firm that is looking to hire attorneys can build a target audience consisting exclusively of people with Juris Doctor degrees, because education level is not, in itself, a protected class under federal civil rights law. However, that same employer cannot create a target audience for its employment ads that consists only of JDs of one race, because race is a protected class under federal civil rights law.
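As a thought experiment, that distinction can be expressed as a simple guardrail: criteria tied to job qualifications pass through, while any criterion keyed to a protected characteristic is rejected outright. The sketch below is hypothetical Python; the attribute names are my own illustrations, not the interface of any real advertising portal.

```python
# Hypothetical guardrail an employment-ad portal might apply: criteria tied
# to job qualifications are allowed, while targeting or exclusion keyed to a
# protected characteristic is rejected. Attribute names are illustrative only.

PROTECTED_ATTRIBUTES = {
    "race", "color", "religion", "sex", "national_origin",
    "age", "disability", "pregnancy", "genetic_information",
}


def validate_employment_audience(criteria: dict) -> dict:
    """Reject any employment-ad audience that keys on a protected characteristic."""
    violations = PROTECTED_ATTRIBUTES & set(criteria)
    if violations:
        raise ValueError(
            f"Employment ads may not target or exclude on: {sorted(violations)}"
        )
    return criteria


# Permissible: narrowing the audience by education level.
validate_employment_audience({"education": "juris_doctor"})

# Impermissible: the same audience narrowed to a single race.
try:
    validate_employment_audience({"education": "juris_doctor", "race": "one race"})
except ValueError as err:
    print(err)
```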

From a practical standpoint, exclusions of the sort that Facebook's advertising program allegedly enabled are the high-tech equivalent of the notorious pre-Civil-Rights-Era "No Irish Need Apply" signs. From a legal standpoint, they are even worse. These sorts of microtargeted exclusions would withhold the very existence of job opportunities from members of protected classes solely because of their membership in a protected class, leaving them unable to exercise their rights under federal antidiscrimination law. After all, you cannot sue over exclusion from a job opportunity if you do not know that the opportunity existed in the first place. Thus, online platforms and advertisers alike may find themselves on the hook for discriminatory advertising practices.

At the same time, one of the most promising aspects of AI is its capacity to minimize the role of human bias in decision-making. Numerous studies show that the application screening process is particularly vulnerable to bias on the part of hiring professionals. For example, African Americans and Asian Americans who "whitened" their resumes by deleting references to their race received more callbacks than applicants who submitted otherwise identical resumes that included racial references. And hiring managers have proven more likely to favor resumes featuring male names over female names even though the resumes are otherwise identical.

Often, HR executives do not become aware that screeners and recruiters engage in discriminatory conduct until it is too late. But AI can help eliminate bias from the earliest stages of the hiring process. An AI-enabled resume-screening program can be programmed to disregard variables that have no bearing on job performance, such as applicants' names. An applicant's name can signal, correctly or incorrectly, variables that usually have nothing to do with the applicant's job qualifications, such as the applicant's sex, national origin, or race. Similarly, an AI-enabled bot that conducts preliminary screening interviews can be engineered to disregard factors such as age, sex, race, disability, and pregnancy. It can even disregard variables that might merely suggest a candidate's membership in a protected class, including foreign or regional accents, speech impairments, and vocal timbre.
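As an illustration of the idea, the hypothetical Python sketch below strips identity-signaling fields from a resume record before it reaches a scoring model. The field names are assumptions made for the example, not those of any actual screening product.

```python
# Hypothetical pre-screening redaction step: fields that can signal
# membership in a protected class are removed before a resume is handed
# to a scoring model. Field names are illustrative only.

IDENTITY_FIELDS = {
    "name", "photo", "date_of_birth", "gender",
    "nationality", "address",  # address can proxy for national origin or race
}


def redact_for_screening(resume: dict) -> dict:
    """Return a copy of the resume with identity-signaling fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTITY_FIELDS}


applicant = {
    "name": "A. Candidate",
    "date_of_birth": "1990-01-01",
    "education": "JD, 2016",
    "experience_years": 7,
    "bar_admissions": ["NY"],
}

print(redact_for_screening(applicant))
# {'education': 'JD, 2016', 'experience_years': 7, 'bar_admissions': ['NY']}
```

The same principle applies to interview bots: the design choice is to withhold from the model any input that serves mainly as a proxy for a protected characteristic, so the evaluation rests on job-related signals alone.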

I believe that we can and we must realize the full potential of AI to enhance human decision-making in full compliance with the law. But that does not mean that AI will supplant human beings any time soon. AI has the potential to make the workplace more fair and inclusive by eliminating any actual bias on the part of resume screeners or interviewers. However, this can only happen if the people who design the advertising platforms and the marketers who pay to use them are vigilant about the limitations of AI algorithms and mindful of the legal and ethical obligations that bind us all.
