Artificial intelligence can be used to help a company narrow down its pool of thousands of job applicants. AI can be applied to help a doctor make a recommendation for care or perform a procedure. AI might show up in daily life by helping a driver find a faster route home.
But what if the recommendation in the doctor's office is wrong, or the algorithm used to make hiring decisions systematically excludes certain types of candidates? This advanced field of computer science, intended to improve lives, might end up doing more harm than good in some instances. Companies can also suffer reputational or legal damage if AI is used irresponsibly.
AI ethics is the set of principles governing the responsible and moral use of artificial intelligence.
To ensure AI is used in an accurate, unbiased and moral manner, it is important for companies to put ethical AI into practice. That's why companies like Microsoft and IBM have created comprehensive AI ethics guidelines, and smaller tech companies are creating standards for responsible AI use, too.
"The precision needs to be much higher in healthcare than in other situations where we're just going about our lives, and you're getting a recommendation on Google Maps for a restaurant you might like," said Sachin Patel, CEO at Apixio, a healthcare AI platform. "Worst case, you're like, 'Oh, I actually don't want to eat that today,' and you're fine. But in this case, we want to make sure that you're able to very specifically make a recommendation and feel like you're 90 percent plus on that precision metric."
Built In spoke with AI and ethics experts about best practices for tech companies to ensure they are exercising strong AI ethics.
First, a company should articulate why it plans to use AI and how it will benefit people.
"Is it being used for bad things like autonomous weapons systems, versus drug discovery for helping people in medicine?" said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University.
Even in less extreme cases, AI can cause harm to individuals by making people feel more isolated or addicted to their devices.
"There's so many algorithms and apps out there that use machine learning or other kinds of tactics to try to keep you addicted to them, which kind of violates human freedom in some ways," Green said. "You're being manipulated, basically, by these things."
For example, Flappy Bird was a mobile game in which users navigated a digital bird around obstacles. After it jumped to one of the top 10 most downloaded apps in the U.S. in 2014, the designer realized the game was addictive and decided to pull it from app stores.
"That probably made him lose money overall, but at the same time, at least he knew himself that he wasn't going to be hurting the world by what he was doing, because he had a bigger picture," Green said. "There's more important things than money."
Companies should consider how the use of AI will affect the people who use the product or engage with the technology, and aim to use AI only in ways that benefit people's lives. For example, AI can take a toll on the environment because of the significant amount of energy that machine learning models require for training, but AI can also be used to help solve climate and efficiency issues, Green said.
"I think the first thing to do for a company is the leadership has to fundamentally make a choice," Green said. "They have to say we want to be making technology that benefits the world, that's not making the world a worse place, because we all have to live here together."
When a company decides to proceed with using AI in its business model, the next step should be to articulate the organization's values and rules around how AI will be used.
"Just as long as they have a set of principles, that's a good start, but then you have to figure out how to operationalize them and actually make them happen in the company," Green said. "You need to get it into the product ultimately. That means you need to somehow engage the engineers, to engage the product managers. You need to engage people who are in the leadership in that part of the company. Get them on board. They need to become champions of ethics."
The Markkula Center for Applied Ethics has a toolkit to help engineers and designers think about AI ethics in their work, such as conducting ethical risk sweeping or ethical pre- and post-mortems to respond and adjust to any ethical failures. At Apixio, the company's head of data science created an internal AI ethics oath for the whole company, but especially for the data scientists, that outlines best practices around topics like secure data transfer and data privacy, Patel said.
HireVue, a hiring platform that uses AI for pre-hire assessments and customer engagement, has created an AI explainability statement that it shares publicly. The document outlines for its customers why and how the company uses AI.
"In my time with HireVue, I have seen us move more and more toward just being more transparent, because what we've seen is that if we don't tell people what we're doing, they often assume the worst," said Lindsey Zuloaga, chief data scientist at HireVue.
Startups using AI often find themselves testing rapidly. While that's necessary, it can lead to forgetting how algorithms were initially created and why certain decisions were made at a given time, Patel said. Transparency around the creation of algorithms can help with tracing and understanding the reasoning behind decisions.
"We'll train up a bunch of signals. They learn on their own. It's the nature of machine learning, and then you're like, 'Do I know how to trace to make sure [I understand] what it learned on its own?'" Patel said. "Oftentimes, you go back a year later, and you're like, 'Oh, I've got to actually relearn that now.'"
Sometimes machine learning techniques become so complex that humans can't possibly understand them. Black box models in AI are created from data by an algorithm, with no explanation to humans of why the decisions were made. "If we can't understand the algorithm, that's a problem. We want to try to protect the people who are being analyzed by the algorithm," Green said.
Transparency around algorithms is a way to help reduce potential biases in AI decision making, said Sameer Maskey, adjunct associate professor at Columbia University and CEO of Fusemachines, a machine learning company.
"These days, with deep learning systems with a hundred million parameters, it spits out a decision. It's a black box for most people," Maskey said. "It's a black box for the engineers who actually built it as well. A lot of engineers don't have the needed transparency in figuring out why it made that decision, and that makes it even harder to figure out when the biases crept in and how to fix the model."
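One common way engineers probe a black box like the one Maskey describes is permutation importance: shuffle one input feature and see how often predictions change. The sketch below is illustrative only; the `predict` function, feature names and data are hypothetical stand-ins for a real trained model.

```python
import random

def predict(row):
    # Stand-in for an opaque model; in practice this would be a trained
    # deep network whose internals engineers cannot inspect directly.
    return 1 if row["income"] > 50 and row["experience"] > 3 else 0

def permutation_importance(rows, feature, trials=100, seed=0):
    """Estimate how often predictions flip when one feature is shuffled.

    A large value suggests the black-box model leans heavily on that
    feature -- a starting point for asking whether it should.
    """
    rng = random.Random(seed)
    baseline = [predict(r) for r in rows]
    flips = 0
    for _ in range(trials):
        shuffled = [r[feature] for r in rows]
        rng.shuffle(shuffled)
        for r, v, b in zip(rows, shuffled, baseline):
            probe = dict(r)          # copy, so the original row is untouched
            probe[feature] = v       # swap in the shuffled value
            if predict(probe) != b:
                flips += 1
    return flips / (trials * len(rows))

rows = [
    {"income": 80, "experience": 5, "zip_code": 10001},
    {"income": 30, "experience": 1, "zip_code": 60601},
    {"income": 60, "experience": 4, "zip_code": 94105},
    {"income": 45, "experience": 6, "zip_code": 10001},
]

for feature in ("income", "experience", "zip_code"):
    print(feature, round(permutation_importance(rows, feature), 3))
```

Here, shuffling `zip_code` never changes a prediction, so its importance is zero; if it weren't, that would be a cue to ask why a hiring or healthcare model is reacting to it.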
Bias can creep into algorithms when the data used in AI models over- or under-represents certain groups, is inaccurate or is otherwise skewed by humans. "Bias is a big issue. I would say it's the elephant in the room, and there's no easy way to address it," said Maskey, who added that lack of transparency in AI models is one of the major culprits.
One way to potentially decrease bias is to give engineers a checklist to think through regarding the data they receive before building a model, he said. Those questions might include: How was the data collected? What's the history behind it? Who was involved in collecting it? What questions were asked?
"I think one of the main things that a lot of enterprises should do is having very clear guidelines on how to do a data analysis to figure out if there is bias already seeped into the model, and providing very clear guidelines on that when the engineers are designing the systems," Maskey said.
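A checklist like the one Maskey describes can be made concrete as a provenance record that engineers must fill in before training. This is a minimal sketch with hypothetical field names, not any company's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class DatasetProvenance:
    """Answers engineers should record before training on a dataset.

    Fields mirror the checklist questions: how was the data collected,
    what's its history, who collected it, and what questions were asked.
    """
    how_collected: str = ""
    collection_history: str = ""
    who_collected: str = ""
    questions_asked: str = ""

def provenance_gaps(p: DatasetProvenance) -> list:
    """Return the checklist items that are still unanswered."""
    return [name for name, value in vars(p).items() if not value.strip()]

p = DatasetProvenance(how_collected="opt-in survey, 2021",
                      who_collected="internal research team")
print(provenance_gaps(p))  # fields still needing answers before training
```

A build pipeline could refuse to train a model while `provenance_gaps` returns a non-empty list, turning the guideline into an enforced gate rather than a suggestion.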
Apixio uses data with wide geographic spread that represents a variety of lifestyle factors, Patel said. The company can then make healthcare recommendations based on what state a provider is in and the type of environment, like a rural versus urban setting.
"We feel much more confident now, eight years into our journey of training these algorithms and having seen the 40 million charts, that we're actually making predictions at a level that feels good. It takes time," Patel said.
HireVue has trained evaluators who analyze thousands of data samples for bias to ensure job candidates are assessed consistently and fairly. "Is the training data biased? Does it have groups that are not represented in the data? Does it represent the group of people that you want to apply the algorithm to?" Zuloaga said.
Data scientists at HireVue will reoptimize algorithms if they find that decisions are not aligning with the way that a client wants to use the AI. "Often, if we do see any problems, it could be we have a customer that's using an algorithm on a population that's different from the population we trained on," Zuloaga said.
Maintaining good data hygiene ensures accuracy and relevancy, and companies using AI should also make sure people's personal information is kept safe and private, Patel said.
HireVue adheres to the European Union's General Data Protection Regulation, which is one of the toughest privacy laws in the world and regulates how companies must protect the personal data of EU citizens.
"We do business globally. We have to adhere to the strictest standards, so we're seeing that Europe is really paving the way, and I think states are starting to follow," Zuloaga said.
In an ideal world, Maskey said, opt-in, rather than opt-out, would be the standard for users deciding whether to share their personal data, and people would be able to easily access and review all the data that's collected about them.
"It's counterproductive for a lot of companies because they are using the same data to make money," Maskey said. "That's where I think the government and organizations need to come together to come up with the right framework and write policy that is more balanced, taking user privacy into account but allowing businesses at the same time to collect data, with a lot of controls for the users."
AI ethics evaluations can become part of a company's regular risk assessment practice. Apixio has a team of four that regularly assesses whether the company is abiding by its AI ethics oath, Patel said.
"All businesses do some sort of quarterly risk assessments, usually in the IT security realm, but what we added to it a few years ago is actually this AI piece, so it's more of a risk and ethics meeting," Patel said.
At HireVue, the company has conducted third-party audits to evaluate its AI practices, in addition to consulting with an expert advisory board that includes people with diverse backgrounds in law, industrial and organizational psychology and artificial intelligence. "There's no standard of what an AI audit is at this point. Every audit we did was really different," Zuloaga said.
HireVue conducted one AI audit with O'Neil Risk Consulting & Algorithmic Auditing, which is led by Cathy O'Neil, a data science consultant and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
"She very much took a holistic approach of saying, who are all the different people that interact with this, and how do we represent all those groups?" Zuloaga said. "What are their concerns? Whether they're legitimately true or not, the concern is real. If you're a minority candidate, you may be concerned that you're gonna be treated differently, so how do we kind of address all of those concerns?"
AI audits and consulting with third parties can point out potential risks in how a company's AI is being used and ways to address those concerns.
Original post: AI Ethics: A Guide to Ethical AI - Built In