{"id":196890,"date":"2017-06-06T06:15:56","date_gmt":"2017-06-06T10:15:56","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/watch-googles-igor-markov-explain-how-to-avoid-the-ai-apocalypse-venturebeat\/"},"modified":"2017-06-06T06:15:56","modified_gmt":"2017-06-06T10:15:56","slug":"watch-googles-igor-markov-explain-how-to-avoid-the-ai-apocalypse-venturebeat","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/watch-googles-igor-markov-explain-how-to-avoid-the-ai-apocalypse-venturebeat\/","title":{"rendered":"Watch Google&#8217;s Igor Markov explain how to avoid the AI apocalypse &#8211; VentureBeat"},"content":{"rendered":"<p><p>    An attack by artificial intelligence on humans, said Google    software engineerand University of Michigan professor    Igor Markov, would be sort of like when the Black Plague hit    Europe in the 14th century, killing up to 50percent    of the population.  <\/p>\n<p>    Virus particles were very small and there were no microscopes    or notion of infectious diseases, there was no explanation, so    the disease spread for many years, killed a lot of people, and    at the end no one understood what happened, he said. This    would be illustrative of what you might expect if a    superintelligent AI would attack. You would not know precisely    whats going on, there would be huge problems, and you would be    almost helpless.  <\/p>\n<p>    Rather than devising technological solutions, in a recent talk    about how to keepsuperintelligent AI from harming humans,    Markov looked to lessons from ancient history.  <\/p>\n<p>    Markov joined sci-fi author    David Brin and other influential names in the artificial    intelligence community Friday at The AI Conference in San    Francisco.  <\/p>\n<p>    One lesson fromearly humans that could help in the fight    againstAI: make friends. 
Domesticate AI the same way Homo sapiens turned wolves into their protectors and friends. <\/p>\n<p> \"If you are worried about potential threats, then try to use some of them for protection or try to adapt or domesticate those threats. So you might develop a friendly AI that would protect you from malicious AI or track unauthorized accesses,\" he said. <\/p>\n<p> Markov began and ended his presentation by calling himself an amateur and saying he doesn't have all the answers, but he also said he has been thinking about ways to prevent an AI takeover for more than a year. He now believes the most important way for humans to prevent the rise of malicious AI is to put in place a series of physical-world restraints. <\/p>\n<p> \"The bottom line here is that intelligence, either hostile or friendly, would be limited by physical resources, and we need to think about physical resources if we want to limit such attacks,\" he said. \"We absolutely need to control access to energy sources of all kinds, and we need to be very careful about physical and network security of critical infrastructure, because if that is not taken care of, then disasters can obviously happen.\" <\/p>\n<p> Calling upon a background in hardware design, Markov suggested steps be taken to separate powerful systems and have deficiencies built in to act as a kill switch, because if superintelligent AI ever arises, it will likely be by accident. <\/p>\n<p> He also urged that limits be placed on the self-repair, replication, or improvement of AI, and that specific scenarios be considered, such as a nuclear weapons attack or the use of biological weapons. <\/p>\n<p> \"Generally, each agent, each part of your AI ecosystem needs to be designed with some weakness. You don't want agents to be able to take over everything, right? So you would control agents through these weaknesses and separation of powers,\" he said. 
\"In the discipline of electronic hardware design, we use abstraction hierarchies. We go from transistors to CPUs to data centers, and each level typically has a well-defined function, so if you're looking at this from the perspective of security, if you are defending against something, you would want to limit or regulate every level, and you would want the same type of limitations for AI.\" <\/p>\n<p> Markov's presentation relies on predictions made by Ray Kurzweil, who believes that in a decade, virtual reality will be indistinguishable from real life, after which computers will surpass humans. Then, through augmentation, humans will become more machine-like until we reach the Singularity. <\/p>\n<p> Markov also pointed out that there is a range of opinions on malicious AI. Stephen Hawking believes AI will eventually supersede humankind, telling the BBC, \"The development of full artificial intelligence could spell the end of the human race.\" <\/p>\n<p> In contrast, former Baidu AI head Andrew Ng said last year that people should be as concerned about malicious AI as they are about overpopulation on Mars. <\/p>\n<p>View original post here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/venturebeat.com\/2017\/06\/05\/watch-googles-igor-markov-explain-how-to-avoid-the-ai-apocalypse\/\" title=\"Watch Google's Igor Markov explain how to avoid the AI apocalypse - VentureBeat\">Watch Google's Igor Markov explain how to avoid the AI apocalypse - VentureBeat<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> An attack by artificial intelligence on humans, said Google software engineer and University of Michigan professor Igor Markov, would be sort of like when the Black Plague hit Europe in the 14th century, killing up to 50 percent of the population. 
\"Virus particles were very small and there were no microscopes or notion of infectious diseases, there was no explanation, so the disease spread for many years, killed a lot of people, and at the end no one understood what happened,\" he said <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/watch-googles-igor-markov-explain-how-to-avoid-the-ai-apocalypse-venturebeat\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-196890","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196890"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=196890"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196890\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=196890"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=196890"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhuma
nism-posthumanism\/wp-json\/wp\/v2\/tags?post=196890"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}