These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered.
Each of these AI systems also keeps humans in the decision-making loop. That's because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers).
In addition, giving too much authority to AI systems can unintentionally reduce human motivation. Drivers might become lazy about checking their rearview mirrors; investors might be less inclined to research alternatives; and human resource managers might put less effort into finding outstanding candidates. Essentially, relying on an AI system risks the possibility that people will, metaphorically speaking, fall asleep at the wheel.
How should businesses and AI designers think about these tradeoffs? In a recent paper, economics professor Susan Athey of Stanford Graduate School of Business and colleagues at the University of Toronto laid out a theoretical framework for organizations to consider when designing and delegating decision-making authority to AI systems. "This paper responds to the realization that organizations need to change the way they motivate people in environments where parts of their jobs are done by AI," says Athey, who is also an associate director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.
Athey's model suggests that an organization's decision of whether to use AI at all, or how thoroughly to design or train an AI system, may depend not only on what's technically available, but also on how the AI impacts its human coworkers.
The idea that decision-making authority incentivizes employees to work hard is not new. Previous research has shown that employees who have been given decision-making authority are more motivated to do a better job of gathering the information needed to make a good decision. Bringing that idea back to the AI-human tradeoff, Athey says, "there may be times when, even if the AI can make a better decision than the human, you might still want to let humans be in charge because that motivates them to pay attention." Indeed, the paper shows that, in some cases, improving the quality of an AI can be bad for a firm if it leads to less effort by humans.
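The paper's formal model is more general, but a minimal toy sketch (with invented functional forms, not taken from Athey's paper) can illustrate the mechanism: when a human reviews an imperfect AI, a more accurate AI lowers the payoff to human vigilance, so equilibrium effort falls, and over some range the lost effort outweighs the accuracy gain.

```python
# Toy model with assumed payoffs (not from Athey et al.): a human reviews
# an AI that is correct with probability q; otherwise the human catches
# the error with probability equal to her chosen effort e.

def human_effort(q, w):
    """The human picks effort e to maximize w * P(correct) - e**2 / 2,
    where P(correct) = q + (1 - q) * e and w is the (assumed) stake she
    attaches to a correct decision. The first-order condition gives
    e* = w * (1 - q), capped at 1: a better AI means less effort."""
    return min(1.0, w * (1 - q))

def decision_quality(q, w):
    """Overall share of decisions made correctly, given the human's
    equilibrium effort."""
    e = human_effort(q, w)
    return q + (1 - q) * e

w = 1.0  # assumed stakes
for q in (0.3, 0.5, 0.7):
    print(f"AI accuracy {q:.1f}: effort {human_effort(q, w):.2f}, "
          f"overall quality {decision_quality(q, w):.2f}")
```

With these assumed numbers, raising AI accuracy from 0.3 to 0.5 actually lowers overall quality (0.79 to 0.75), because the human's effort drops faster than the AI improves; which force dominates depends on the stakes and effort costs, which is exactly the kind of tradeoff the model is meant to expose.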
Athey's theoretical framework aims to provide a logical structure for thinking about implementing AI within organizations. The paper classifies AI into four types: two with the AI in charge (replacement AI and unreliable AI) and two with humans in charge (augmentation AI and antagonistic AI). Athey hopes that by understanding these classifications and their tradeoffs, organizations will be better able to design their AIs to obtain optimal outcomes.
Replacement AI is in some ways the easiest to understand: If an AI system works perfectly every time, it can replace the human. But there are downsides. In addition to taking a person's job, replacement AI has to be extremely well trained, which may involve a prohibitively costly investment in training data. Unreliable AI, by contrast, is imperfect, so humans play a key role in catching and correcting its errors, partially compensating for the AI's shortcomings with greater effort. This scenario is most likely to produce optimal outcomes when the AI hits the sweet spot where it makes bad decisions just often enough to keep human coworkers on their toes.
With augmentation AI, employees retain decision-making power while a high-quality AI augments their effort without decimating their motivation. Examples of augmentation AI might include systems that, in an unbiased way, review and rank loan or job applications but don't make the lending or hiring decisions. The tradeoff is that human biases will have a bigger influence on decisions in this scenario.
Antagonistic AI is perhaps the least intuitive classification. It arises when there's an imperfect yet valuable AI, human effort is essential but poorly incentivized, and the human retains decision rights when the human and AI conflict. In such cases, Athey's model proposes, the best AI design might be one that produces results conflicting with the preferences of the human agents, thereby antagonistically motivating them to put in the effort needed to influence decisions. "People are going to be, at the margin, more motivated if they are not that happy with the outcome when they don't pay attention," Athey says.
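That incentive logic can be stripped down to a one-line decision rule. The sketch below uses invented payoff numbers, not the paper's model: the agent pays the cost of investigating and overriding the AI's default recommendation only when the default conflicts with her preferences.

```python
# Toy sketch with invented payoffs (not from Athey's paper): when does an
# agent bother to pay attention and override the AI's default?

def agent_exerts_effort(default_utility, preferred_utility, effort_cost):
    """Return True if overriding the AI's default is worth the effort:
    the utility gain from securing the preferred outcome must exceed
    the cost of paying attention."""
    return (preferred_utility - default_utility) > effort_cost

# Aligned AI: the default already gives the agent what she wants,
# so there is no gain from paying attention.
aligned = agent_exerts_effort(
    default_utility=1.0, preferred_utility=1.0, effort_cost=0.2)

# Antagonistic AI: the default conflicts with her preference, so the
# only way to reach a palatable outcome is to put in the effort.
antagonistic = agent_exerts_effort(
    default_utility=0.0, preferred_utility=1.0, effort_cost=0.2)

print(aligned, antagonistic)  # False True
```

From the firm's standpoint, that attention is the point: an agent who investigates in order to override also gathers the information the firm needs, so the designer is trading the agent's disutility for better-informed decisions.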
To illustrate the value of her model, Athey describes the design issues, as well as the tradeoffs for worker effort, that arise when companies use AI to address bias in hiring. The scenario runs like this: If hiring managers, consciously or not, prefer to hire people who look like them, an AI trained on hiring data from such managers will likely learn to mimic that bias (and keep those managers happy).
If the organization wants to reduce bias, it may have to expand the AI's training data or even run experiments (for example, adding candidates from historically Black colleges and universities who might not have been considered before) to gather the data needed to train an unbiased AI system. Then, if biased managers are still in charge of decision-making, the new, unbiased AI could actually antagonistically motivate them to read all of the applications so they can still make a case for hiring the person who looks like them.
But since this doesn't help the owner achieve the goal of eliminating bias in hiring, another option is to design the organization so that the AI can overrule the manager, which has its own unintended consequence: an unmotivated manager.
"These are the tradeoffs that we're trying to illuminate," Athey says. "AI in principle can solve some of these biases, but if you want it to work well, you have to be careful about how you train the AI and how you maintain motivation for the human."
As AI is adopted in more and more contexts, it will change the way organizations function. "Firms and other organizations will need to think differently about organizational design, worker incentives, how well the decisions by workers and AI are aligned with the goals of the firm, and whether an investment in training data to improve AI quality will have desirable consequences," Athey says. "Theoretical models can help organizations think through the interactions among all of these choices."
This piece was originally published by the Stanford University Graduate School of Business.
See the article here:
Why organizations might want to design and train less-than-perfect AI - Fast Company