{"id":225267,"date":"2017-07-03T02:26:56","date_gmt":"2017-07-03T06:26:56","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/using-ai-in-an-uncertain-world-kmworld-magazine.php"},"modified":"2022-03-15T23:32:38","modified_gmt":"2022-03-16T03:32:38","slug":"using-ai-in-an-uncertain-world-kmworld-magazine","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/using-ai-in-an-uncertain-world-kmworld-magazine.php","title":{"rendered":"Using AI in an uncertain world &#8211; KMWorld Magazine"},"content":{"rendered":"<p>Jul 3, 2017<\/p>\n<p>Like anything else in life except death and taxes (and even the particulars of those are uncertain), uncertainty is something that humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we all have learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.<\/p>\n<p>Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy and that algorithms are crafted by humans who have certain biases about how systems and the world work, and those biases may not match yours. Furthermore, no data set is unbiased, and we use data sets with built-in biases or with holes in the data to train AI systems. Those systems are, by their very nature, biased or lacking in information. If we depend on those systems to be perfect, we are letting ourselves in for errors, mistakes and even disasters.
<\/p>\n<p>However, relying on biased systems is no different from asking a friend who shares your worldview for information that may serve to bolster that view rather than balance it. And we do that all the time. Finding balanced, reliable, reputable information is hard and sometimes impossible. Any person trying to navigate an uncertain world tries to make decisions based on balanced information. The import of the decision governs (or should govern) the effort we make in hunting for reliable but differing sources. The speed with which a decision must be made often interferes with that effort. And we need to accept that our decisions will be imperfect or even outright wrong, because no one can amass and correctly interpret everything there is to know.<\/p>\n<p>Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision-making. Adding the input of a well-crafted, well-tested system that is based on a large volume of reputable data to human decision-making can speed and improve the outcome. There are good reasons for that. Human thinking balances AI systems; they can plug each other's blind spots. Humans make judgments based on their worldview. They are capable of understanding priorities, ethics, values, justice and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. But humans can decide whether to use that information. That makes a perfect partnership in which one of the partners won't be insulted if their input is ignored.<\/p>\n<p>Adding AI into the physical world, in which snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example of that.
In the abstract and at a high level, it's been shown that most accidents and fatalities are due to human error, so self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose that we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver.<\/p>\n<p>On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to assist the elderly or the disabled in living their lives independently. Like Alexa or Siri on a smaller scale, that could change lives. We come back to the matter of context, use and value. There is no single answer to human questions of \"should.\"<\/p>\n<p>That brings us to the question of trust. Given that we understand that several layers of bias will inevitably be built into the cognitive systems we interact with, and given that the behaviors coming out of those systems are occasionally unpredictable, what kind of trust should we place in AI-based systems, and under what circumstances? That depends on:<\/p>\n<p>Underlying those four important conditioning considerations is a fundamental challenge: In many of the outcomes from systems involving machine learning, particularly in current applications of deep learning, it can be exceedingly difficult for the human decision-maker to analyze how the computer came up with its report, output or recommendation(s).<\/p>\n<p>At this point in the development of the technology, we can't simply ask the system, as we could a human collaborator, how it reached the answer or recommendation it is proposing.
Most systems today are not designed to be self-explanatory. Your algorithms may not be forthcoming about what insightful routes through the data they have taken to develop their answer for you. In many applications where solutions are well-contextualized, the system's answers won't raise questions or eyebrows, and you, the user, will be able to say, for example, \"Yes, this is an image of a nascent tumor.\" But in other applications, the context may be murkier, and the human user will be left to wonder whether to trust the recommendation coming back from the machine. \"Should I really be shorting Amazon shares at this level? What makes you think so?\"<\/p>\n<p>Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Will we be able to collaborate with an AI about whether an umbrella or a full foul-weather outfit is the better choice for today's weather circumstances? Insightful application design will remain the key, always taking direction from the context of the use and the intention of the user.<\/p>\n<p>Read the rest here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"http:\/\/www.kmworld.com\/Articles\/News\/News-Analysis\/Using-AI-in-an-uncertain-world-118960.aspx\" title=\"Using AI in an uncertain world - KMWorld Magazine\">Using AI in an uncertain world - KMWorld Magazine<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Jul 3, 2017 Like anything else in life except death and taxes (and even the particulars of those are uncertain), uncertainty is something that humans deal with every day.
From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we all have learned how to navigate an unpredictable world <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/using-ai-in-an-uncertain-world-kmworld-magazine.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-225267","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/225267"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=225267"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/225267\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=225267"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=225267"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-new
s-blog\/wp-json\/wp\/v2\/tags?post=225267"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}