{"id":191983,"date":"2017-05-09T15:31:37","date_gmt":"2017-05-09T19:31:37","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-is-the-future-of-cybersecurity-for-better-and-for-worse-harvard-business-review\/"},"modified":"2017-05-09T15:31:37","modified_gmt":"2017-05-09T19:31:37","slug":"ai-is-the-future-of-cybersecurity-for-better-and-for-worse-harvard-business-review","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-is-the-future-of-cybersecurity-for-better-and-for-worse-harvard-business-review\/","title":{"rendered":"AI Is the Future of Cybersecurity, for Better and for Worse &#8211; Harvard Business Review"},"content":{"rendered":"<p>Executive Summary<\/p>\n<p>In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is by using AI. But this is also very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join in the cyberwars. Business leaders would be well advised to familiarize themselves with the state of the art in AI safety and security research. Armed with more knowledge, they can then rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and other possible dangers.
<\/p>\n<p>In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is by using AI. But this is very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join the cyber wars.<\/p>\n<p>My research is at the intersection of AI and cybersecurity. In particular, I am researching how we can protect AI systems from bad actors, as well as how we can protect people from failed or malevolent AI. This work falls into a larger framework of AI safety: attempts to create AI that is exceedingly capable but also safe and beneficial.<\/p>\n<p>A lot has been written about problems that might arise with the arrival of true AI, either as a direct impact of such inventions or because of a programmer's error. However, intentional malice in design and AI hacking have not been addressed to a sufficient degree in the scientific literature. It's fair to say that when it comes to dangers from a purposefully unethical intelligence, anything is possible. According to Bostrom's orthogonality thesis, an AI system can potentially have any combination of intelligence and goals. Such goals can be introduced either through the initial design or through hacking, or introduced later, in the case of off-the-shelf software: just add your own goals.
Consequently, depending on whose bidding the system is doing (governments, corporations, sociopaths, dictators, military-industrial complexes, terrorists, etc.), it may attempt to inflict damage that's unprecedented in the history of humankind or that's perhaps inspired by previous events.<\/p>\n<p>Even today, AI can be used to defend and to attack cyber infrastructure, as well as to increase the attack surface that hackers can target, that is, the number of ways for hackers to get into a system. In the future, as AIs increase in capability, I anticipate that they will first reach and then overtake humans in all domains of performance, as we have already seen with games like chess and Go and are now seeing with important human tasks such as investing and driving. It's important for business leaders to understand how that future situation will differ from our current concerns and what to do about it.<\/p>\n<p>If one of today's cybersecurity systems fails, the damage can be unpleasant, but is tolerable in most cases: Someone loses money or privacy. But for human-level AI (or above), the consequences could be catastrophic. A single failure of a superintelligent AI (SAI) system could cause an existential risk event, an event that has the potential to damage human well-being on a global scale. The risks are real, as evidenced by the fact that some of the world's greatest minds in technology and physics, including Stephen Hawking, Bill Gates, and Elon Musk, have expressed concerns about the potential for AI to evolve to a point where humans could no longer control it.<\/p>\n<p>When one of today's cybersecurity systems fails, you typically get another chance to get it right, or at least to do better next time. But with an SAI safety system, failure or success is a binary situation: Either you have a safe, controlled SAI or you don't.
The goal of cybersecurity in general is to reduce the number of successful attacks on a system; the goal of SAI safety, in contrast, is to make sure no attacks succeed in bypassing the safety mechanisms in place. The rise of brain-computer interfaces, in particular, will create a dream target for human and AI-enabled hackers. And brain-computer interfaces are not so futuristic; they're already being used in medical devices and gaming, for example. If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets. There is the potential to create unprecedented new dangers for personal privacy, free speech, equal opportunity, and any number of human rights.<\/p>\n<p>Business leaders are advised to familiarize themselves with the cutting edge of AI safety and security research, which at the moment is sadly similar to the state of cybersecurity in the 1990s, and our current situation with the lack of security for the internet of things. Armed with more knowledge, leaders can rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and possible dangers. Hiring a dedicated AI safety expert may be an important next step, as most cybersecurity experts are not trained in anticipating or preventing attacks against intelligent systems. I am hopeful that ongoing research will bring additional solutions for safely incorporating AI into the marketplace.
<\/p>\n<p>More:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/hbr.org\/2017\/05\/ai-is-the-future-of-cybersecurity-for-better-and-for-worse\" title=\"AI Is the Future of Cybersecurity, for Better and for Worse - Harvard Business Review\">AI Is the Future of Cybersecurity, for Better and for Worse - Harvard Business Review<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Executive Summary In the near future, as Artificial Intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyber-attacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-is-the-future-of-cybersecurity-for-better-and-for-worse-harvard-business-review\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-191983","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/191983"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=191983"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/191983\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=191983"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=191983"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=191983"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}