Google, Facebook, And Microsoft Are Working On AI Ethics. Here's What Your Company Should Be Doing – Forbes

Posted: July 12, 2021 at 7:52 am

The Ethics of AI

As AI is making its way into more companies, the board and senior executives need to mitigate the risk of their AI-based systems. One area of risk includes the reputational, regulatory, and legal risks of AI-led ethical decisions.

AI-based systems are often faced with making decisions that were not built into their models: decisions representing ethical dilemmas.

For example, suppose a company builds an AI-based system to optimize the number of advertisements we see. In that case, the AI may encourage incendiary content that causes users to get angry and comment and post their own opinions. If this works, users spend more time on the site and see more ads. The AI has done its job without ethical oversight. The unintended consequence is the polarization of users.
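The incentive problem above can be made concrete with a toy sketch. Everything here is hypothetical (the items, the scores, the `ethics_weight` penalty): it only illustrates how an objective that rewards engagement alone will rank incendiary content first, while an objective that also penalizes it will not.

```python
# Hypothetical content items with made-up engagement and "incendiary" scores.
items = [
    {"title": "calm explainer", "engagement": 0.40, "incendiary": 0.1},
    {"title": "outrage bait",   "engagement": 0.90, "incendiary": 0.9},
    {"title": "balanced news",  "engagement": 0.55, "incendiary": 0.2},
]

def rank(items, ethics_weight=0.0):
    """Rank by predicted engagement minus a weighted incendiary penalty."""
    return sorted(
        items,
        key=lambda i: i["engagement"] - ethics_weight * i["incendiary"],
        reverse=True,
    )

# Engagement-only objective: the incendiary item wins.
print([i["title"] for i in rank(items)])
# With a penalty term, the ranking changes.
print([i["title"] for i in rank(items, ethics_weight=0.6)])
```

The point is not the specific weight but that the trade-off only exists if someone writes it into the objective; an AI optimized purely for time-on-site has no reason to make it.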

What happens if your company builds a system that automates an employee's work so that the employee is no longer needed? What is the company's ethical responsibility to that employee, and to society? Who determines the ethics of the impact on employment?

What if the AI tells a loan officer to recommend against providing a loan to a person? If the human doesn't understand how the AI came to that conclusion, how can the human know if the decision was ethical or not? (see How AI Can Go Terribly Wrong: 5 Biases That Create Failure)
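One way to give the loan officer visibility is a model whose per-feature contributions can be inspected directly. The sketch below is a minimal, hypothetical example: the weights are hand-set for illustration (a real system would learn them), and the feature names are invented. It shows the kind of breakdown a human reviewer would need in order to judge whether a recommendation rests on defensible factors.

```python
import math

# Hypothetical, hand-set weights for illustration only; a real model is trained.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def score(applicant):
    """Logistic score: estimated probability the loan is repaid."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution to the score, so a human can inspect the 'why'."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.2}
print(round(score(applicant), 3))
print(explain(applicant))  # the debt_ratio term dominates this recommendation
```

With an opaque model, none of this breakdown exists, and the officer is left rubber-stamping a conclusion they cannot evaluate.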

Suppose the data used to train your AI system doesn't have sufficient data about specific classes of individuals. In that case, it may not learn what to do when it encounters those individuals. Would a facial recognition system used for check-in to a hotel recognize a person with freckles? If the system stops working and makes check-in harder for a person with freckles, what should the company do? How does the company address this ethical dilemma? (see Why Are Technology Companies Quitting Facial Recognition?)
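Underrepresentation of the kind described above is detectable before deployment. A minimal sketch, assuming hypothetical training records with an invented `skin_feature` attribute: it flags any attribute value whose share of the training set falls below a chosen threshold, which is the sort of audit that would surface the freckles gap early.

```python
from collections import Counter

def audit_representation(records, attribute, min_share=0.05):
    """Flag attribute values below a minimum share of the training set."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items() if n / total < min_share}

# Hypothetical training records for a face-matching check-in system.
training = ([{"skin_feature": "no_freckles"}] * 97
            + [{"skin_feature": "freckles"}] * 3)

print(audit_representation(training, "skin_feature"))  # → {'freckles': 0.03}
```

The threshold itself is a judgment call; the point is that someone has to decide what "sufficient data" means and check for it, rather than discovering the gap at the front desk.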

If the developers who identify the data to be used for training an AI system aren't looking for bias, how can they prevent an ethical dilemma? For example, suppose a company has historically hired more men than women. In that case, a bias is likely to exist in the resume data. Men tend to use different words than women in their resumes. If the data is sourced from men's resumes, then women's resumes may be viewed less favorably, just based on word choice.
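The word-choice skew described above can also be measured directly. A rough sketch with invented resume snippets: it compares the relative frequency of each term across two groups and lists the terms that diverge sharply, which is one simple signal a developer could use to notice that a model trained on this text would key on gendered vocabulary rather than qualifications.

```python
import re
from collections import Counter

def term_shares(docs):
    """Relative frequency of each lowercase word across a set of documents."""
    counts = Counter(w for d in docs for w in re.findall(r"[a-z']+", d.lower()))
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def skewed_terms(group_a, group_b, threshold=0.05):
    """Terms whose relative frequency differs sharply between the two groups."""
    a, b = term_shares(group_a), term_shares(group_b)
    return sorted(w for w in set(a) | set(b)
                  if abs(a.get(w, 0) - b.get(w, 0)) > threshold)

# Hypothetical snippets illustrating word-choice differences, not real data.
men = ["executed aggressive growth strategy", "drove competitive results"]
women = ["collaborated on growth strategy", "supported team results"]

print(skewed_terms(men, women))
```

Terms both groups use at similar rates ("growth", "strategy", "results") drop out; the group-specific vocabulary is exactly what a naive screening model would latch onto.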

Google, Facebook, and Microsoft are addressing these ethical issues. Many have pointed to the missteps Google and Facebook have made in attempting to address AI ethical issues. Let's look at some of the positive elements of what they and Microsoft are doing to address AI ethics.

While each company is addressing these principles differently, we can learn a lot by examining their commonalities. Here are some fundamental principles they address.

While these tech giants are imperfect, they are leading the way in addressing ethical AI challenges. What are your board and senior management team doing to address these issues?

Below are some suggestions you can implement now.

By addressing these issues now, your company will reduce the risks of having AI make or recommend decisions that imperil the company. (see AI Can Be Dangerous - How to Reduce Risk When Using AI) Are you aware of the reputational, regulatory, and legal risks associated with the ethics of your AI?
