{"id":208447,"date":"2017-07-28T19:15:49","date_gmt":"2017-07-28T23:15:49","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/elon-musk-and-mark-zuckerberg-are-arguing-about-ai-but-theyre-both-missing-the-point-entrepreneur\/"},"modified":"2017-07-28T19:15:49","modified_gmt":"2017-07-28T23:15:49","slug":"elon-musk-and-mark-zuckerberg-are-arguing-about-ai-but-theyre-both-missing-the-point-entrepreneur","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/elon-musk-and-mark-zuckerberg-are-arguing-about-ai-but-theyre-both-missing-the-point-entrepreneur\/","title":{"rendered":"Elon Musk and Mark Zuckerberg Are Arguing About AI &#8212; But They&#8217;re Both Missing the Point &#8211; Entrepreneur"},"content":{"rendered":"<p><p>In Silicon Valley this week, a debate about the potential dangers (or lack thereof) of artificial intelligence has flared up between two tech billionaires.<\/p>\n<p>Facebook CEO Mark Zuckerberg thinks that AI is going to make our lives better in the future, while SpaceX CEO Elon Musk believes that AI is a fundamental risk to the existence of human civilization.<\/p>\n<p>Who's right?<\/p>\n<p>Related: Elon Musk Says Mark Zuckerberg's Understanding of AI Is 'Limited' After the Facebook CEO Called His Warnings 'Irresponsible'<\/p>\n<p>They're both right, but they're also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. As with advances in nuclear fusion, almost any technological development can be weaponized and used to cause damage in the wrong hands. 
The regulation of machine intelligence advancements will play a central role in whether Musk's doomsday prediction becomes a reality.<\/p>\n<p>It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value-adds for its cars. Musk himself even believes that one day it will be safer to populate roads with AI drivers than with human ones, though publicly he hopes that society will not ban human drivers in the future in an effort to save us from human error.<\/p>\n<p>What Musk is really pushing for by being wary of AI technology is a framework that we as a society can use to maintain awareness of the threats that AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far from how things work today. The AGI that we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that we use and iterate on within the industry now. In Zuckerberg's view, the doomsday conversation that Musk has sparked is an exaggerated projection of what the future of our technological advancements will look like.<\/p>\n<p>Related: The Future of Productivity: AI and Machine Learning<\/p>\n<p>While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing the potentially harmful impacts of artificial intelligence on society. The White House recently released a couple of reports on the future of artificial intelligence and on the economic effects it causes. 
The focus of these reports is on the future of work, job markets and research on the increasing inequality that machine intelligence may bring.<\/p>\n<p>There is also an attempt to tackle the very important issue of explainability: understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (the Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That's why here, perhaps more than in any other field, it is important to be mindful of the results AI presents.<\/p>\n<p>Explainable AI (XAI), an initiative funded by DARPA, aims to create a suite of machine learning techniques that produce results more explainable to human operators while still maintaining a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.<\/p>\n<p>Related: Would You Fly on an AI-Backed Plane Without a Pilot?<\/p>\n<p>The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes developers of software have conscious or unconscious biases that eventually are built into an algorithm -- the way Nikon cameras became internet famous for detecting someone blinking when pointed at the face of an Asian person, or HP computers were proclaimed racist for not detecting black faces on the camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, AI needs good data. 
If the data is incomplete or biased, AI can exacerbate problems of bias.<\/p>\n<p>Even with positive use cases, data bias can cause serious harm to society. Take China's recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but a lot of bad scenarios can unfold if there is an existing bias in the training data for those algorithms.<\/p>\n<p>It is important to note that most of these risks already exist in our lives in some form or another, as when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise a lot of uncomfortable ethical questions, like: who is responsible for a wrong prescription by an automated diagnosing AI? The doctor? The developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.<\/p>\n<p>Artur Kiulian, M.S.A.I., is a partner at Colab, a Los Angeles-based venture studio that helps startups build technology products using the benefits of machine learning. An expert in artificial intelligence, Kiulian is the author of Robot is...
<\/p>\n<p>See the article here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.entrepreneur.com\/article\/297861\" title=\"Elon Musk and Mark Zuckerberg Are Arguing About AI -- But They're Both Missing the Point - Entrepreneur\">Elon Musk and Mark Zuckerberg Are Arguing About AI -- But They're Both Missing the Point - Entrepreneur<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In Silicon Valley this week, a debate about the potential dangers (or lack thereof) of artificial intelligence has flared up between two tech billionaires. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/elon-musk-and-mark-zuckerberg-are-arguing-about-ai-but-theyre-both-missing-the-point-entrepreneur\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-208447","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208447"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=208447"}],"v
ersion-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208447\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=208447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=208447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=208447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}