{"id":208073,"date":"2017-07-26T16:18:24","date_gmt":"2017-07-26T20:18:24","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/how-ai-will-change-the-way-we-make-decisions-harvard-business-review\/"},"modified":"2017-07-26T16:18:24","modified_gmt":"2017-07-26T20:18:24","slug":"how-ai-will-change-the-way-we-make-decisions-harvard-business-review","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/how-ai-will-change-the-way-we-make-decisions-harvard-business-review\/","title":{"rendered":"How AI Will Change the Way We Make Decisions &#8211; Harvard Business Review"},"content":{"rendered":"<p><p>Executive Summary<\/p>\n<p>Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions. But couldn't AI calculate costs and benefits itself? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.<\/p>\n<p>With the recent explosion in AI, there has been understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)
<\/p>\n<p>Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.<\/p>\n<p>Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don't just mean the future; prediction is about using data that you have to generate data that you don't have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.<\/p>\n<p>Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. It wants to allow legitimate transactions and decline fraud. It uses AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network's decision process would be easy: decline if and only if fraud exists.<\/p>\n<p>However, even the best AIs make mistakes, and that is unlikely to change anytime soon. The people who run credit card networks know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.
<\/p>\n<p>This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent one?<\/p>\n<p>Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied, and to trade that off against the effects of allowing a fraudulent transaction. That trade-off may be different for high-net-worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.<\/p>\n<p>Judgment is the process of determining what the reward to a particular action is in a particular environment. Judgment is how we work out the benefits and costs of different decisions in different situations.<\/p>\n<p>Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing: allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs of different outcomes by experience, making choices and observing their mistakes.<\/p>\n<p>Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.<\/p>\n<p>In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.
<\/p>\n<p>But couldn't AI calculate costs and benefits itself? In the credit card example, couldn't AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.<\/p>\n<p>Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind's AlphaGo was trained this way to maximize its chances of winning the game of Go. This method of learning is often easy to apply to games because the reward can be easily described and programmed, shutting a human out of the loop.<\/p>\n<p>But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat-racing game CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as intended. One might consider this a form of ingenuity, but when it comes to applications beyond games this sort of ingenuity can lead to perverse outcomes.<\/p>\n<p>The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.<\/p>\n<p>In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?
<\/p>\n<p>Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how best to use those predictions. Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI. Being great at it requires an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)<\/p>\n<p>Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn't trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that's inconsistent with the organization's broader goals.<\/p>\n<p>At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards.
But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be explicitly separated.<\/p>\n<p>As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.<\/p>\n<p>Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making? It is too early to tell. On the one hand, machine prediction will substitute for human prediction in decision-making. On the other hand, machine prediction is a complement to human judgment, and cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon witness a great flourishing of demand for human judgment in the form of Reward Function Engineering.<\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/hbr.org\/2017\/07\/how-ai-will-change-the-way-we-make-decisions\" title=\"How AI Will Change the Way We Make Decisions - Harvard Business Review\">How AI Will Change the Way We Make Decisions - Harvard Business Review<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Executive Summary Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/how-ai-will-change-the-way-we-make-decisions-harvard-business-review\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-208073","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208073"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=208073"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208073\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=208073"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=208073"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=208073"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}