{"id":208595,"date":"2017-02-16T18:34:39","date_gmt":"2017-02-16T23:34:39","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/frankenstein-fears-hang-over-ai-financial-times.php"},"modified":"2022-11-20T18:16:55","modified_gmt":"2022-11-20T23:16:55","slug":"frankenstein-fears-hang-over-ai-financial-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/frankenstein-fears-hang-over-ai-financial-times.php","title":{"rendered":"Frankenstein fears hang over AI &#8211; Financial Times"},"content":{"rendered":"<p><p>    The technology industry is facing up to the world-shaking    ramifications of artificial intelligence. There is now a    recognition that AI will disrupt how societies operate, from    education and employment to how data will be collected about    people.  <\/p>\n<p>    Machine learning, a form of advanced pattern recognition that    enables machines to make judgments by analysing large volumes    of data, could greatly supplement human thought. But such    soaring capabilities have stirred almost Frankenstein-like    fears about whether developers can control their creations.  <\/p>\n<p>    Failures of autonomous systems  like the death last yearof a US motorist in a partially    self-driving car from Tesla Motors  have led to a focus on    safety, says Stuart Russell, a professor of computer science    and AI expert at the University of California, Berkeley. That    kind of event can set back the industry a long way, so there is    a very straightforward economic self-interest here, he says.  <\/p>\n<p>    Alongside immigration and globalisation, fears of AI-driven    automation are fuelling public anxiety about inequality and job    security. The election of Donald Trump as US president and the    UKs vote to leave the EU were partly driven by such concerns.    
While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.<\/p>\n<p>Global elites - those with high income and educational levels, who live in capital cities - are considerably more enthusiastic about innovation than the general population, the FT\/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.<\/p>\n<p>Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: \"Tech companies must accept responsibility for what they're creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products.\"<\/p>\n<p>The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees' work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.<\/p>\n<p>Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: \"We've seen papers...that address the technical problem of safety.\"<\/p>\n<p>There are echoes of past efforts to deal with the complications of a new technology. 
Satya Nadella, chief executive of Microsoft, compares it to 15 years ago when Bill Gates rallied his company's developers to combat computer malware. His \"trustworthy computing\" initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.<\/p>\n<p>AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. \"Many of our data sets have been collected...with assumptions we may not deeply understand, and we don't want our machine-learned applications...to be amplifying cultural biases,\" he said.<\/p>\n<p>Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.<\/p>\n<p>Greater transparency is one way forward, for example making it clear what information AI systems have used. But the thought processes of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. \"We need to understand how to justify [their] decisions and how the thinking is done.\"<\/p>\n<p>As AI comes to influence more government and business decisions, the ramifications will be widespread. \"How do we make sure the machines we train don't perpetuate and amplify the same human biases that plague society?\" asks Joi Ito, director of MIT's Media Lab.<\/p>\n<p>Executives like Mr Nadella believe a mixture of government oversight - including, by implication, the regulation of algorithms - and industry action will be the answer. 
He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.<\/p>\n<p>He says: \"I want...an ethics board that says, 'If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact...that it doesn't come with some bias that's built in.'\"<\/p>\n<p>Making sure AI systems benefit humans without unintended consequences is difficult. \"Human society is incapable of defining what it wants,\" says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.<\/p>\n<p>This is AI's so-called control problem: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. \"The machine has to allow for uncertainty about what it is the human really wants,\" says Prof Russell.<\/p>\n<p>Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year's World Economic Forum in Davos as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling, though other jobs could be replaced.<\/p>\n<p>The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. \"Whenever someone cuts cost, that means, hopefully, a surplus is being created,\" says Mr Nadella. \"You can always tax surplus - you can always make sure that surplus gets distributed differently.\" 
<\/p>\n<p>Additional reporting by Adam Jezard<\/p>\n<p>More here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.ft.com\/content\/8e228692-f251-11e6-8758-6876151821a6\" title=\"Frankenstein fears hang over AI - Financial Times\">Frankenstein fears hang over AI - Financial Times<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The technology industry is facing up to the world-shaking ramifications of artificial intelligence.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/frankenstein-fears-hang-over-ai-financial-times.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-208595","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/208595"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=208595"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/208595\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-ne
ws-blog\/wp-json\/wp\/v2\/media?parent=208595"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=208595"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=208595"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}