{"id":188487,"date":"2017-04-19T10:07:32","date_gmt":"2017-04-19T14:07:32","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/googles-new-chip-may-be-the-future-of-ai-systems-the-motley-fool-motley-fool\/"},"modified":"2017-04-19T10:07:32","modified_gmt":"2017-04-19T14:07:32","slug":"googles-new-chip-may-be-the-future-of-ai-systems-the-motley-fool-motley-fool","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/googles-new-chip-may-be-the-future-of-ai-systems-the-motley-fool-motley-fool\/","title":{"rendered":"Google&#8217;s New Chip May Be the Future of AI Systems &#8211; The Motley Fool &#8211; Motley Fool"},"content":{"rendered":"<p><p>    Alphabet's    (NASDAQ:GOOGL)    (NASDAQ:GOOG)    Google announced at its I\/O Developers Conference in May 2016    that it had designed a new chip, called the tensor processing    unit (TPU), specifically designed for the demands of training    artificial intelligence (AI) systems.The company didn't    divulge much at the time, butin a blog post that same week, hardware engineer    Norm Jouppi revealed that Google had been running the TPU in    the company's data centers for more than a year and...  <\/p>\n<p>      ... found them to deliver an order of magnitude      better-optimized performance per watt for machine learning.      This is roughly equivalent to fast-forwarding technology      about seven years into the future (three generations of      Moore's Law).    <\/p>\n<p>    The chip was an application-specific integrated circuit (ASIC),    a microchip designed for a specific application. Little else    was known about the enigmatic TPU, and the mystery continued    until last week, when Google pulled back the curtain to reveal    the inner workings of this new groundbreaking advancement for    AI.  <\/p>\n<p>      Google's tensor processing unit could revolutionize AI      processing. Image source: Google.    
<\/p>\n<p>    The TPU underlies TensorFlow, Google's open-source machine    learning framework, a collection of algorithms that power the    company's deep neural networks. These AI systems are capable of    teaching themselves by processing large amounts of data. Google    tailored the TPU to meet the unique demands of training its AI    systems, which had previously been run primarily on graphics    processing units (GPUs) manufactured by NVIDIA    Corporation (NASDAQ:NVDA).    While the company currently runs the TPU and GPU side by side    (for now), this could have drastic implications for how AI    systems are trained going forward.  <\/p>\n<p>    Google released a study -- authored by more than 70    contributors --that provided a detailed analysis of the    TPU. In a     blog post earlier this month, Jouppi laid out the    capabilities of the chip. He described how it processed AI    production workloads 15 to 30 times faster than CPUs and GPUs    performing the same task, and achieved a 30 to 80 times    improvement in energy efficiency.  <\/p>\n<p>    Google realized several years ago that if customers were to use    Google voice search for just three minutes each day, that would    require the company to double its existing number of data    centers. The company also credits the TPU with providing faster    response times for search, acting as the linchpin for    improvements in Google Translate, and was a key factor in its    AI system's defeat of a world champion in the ancient Chinese    game of Go.  <\/p>\n<p>    Companies are taking a variety of approaches to bring    improvements to AI systems. Intel    Corporation's (NASDAQ:INTC)    recently acquired start-up Nervana has developed its own ASIC,    the Nervana Engine, that eliminates components from the GPU not    essential to the functions necessary for AI. 
The company also re-engineered the memory and believes it can realize 10 times the processing currently performed by GPUs. Intel is working to integrate this capability into its existing processor platforms to better compete with NVIDIA's offering.<\/p>\n<p>A field-programmable gate array (FPGA) is a processor that can be reprogrammed after installation, and it is another chip being leveraged for gains in AI. FPGAs have increasingly been used in data centers to accelerate machine learning. Apple Inc. (NASDAQ:AAPL) is widely believed to have installed this chip in its iPhone 7 to enable sophisticated AI processing locally on each phone. The company has emphasized not sacrificing user privacy to make advances in AI, so this would be a logical move for its smartphones.<\/p>\n<p>NVIDIA Tesla P100 powers Facebook's AI server. Image source: NVIDIA.<\/p>\n<p>Facebook, Inc. (NASDAQ:FB) has taken a different approach in optimizing its recently released data center server, named Big Basin. The company created a platform that utilizes eight NVIDIA Tesla P100 GPU accelerators attached with NVLink connectors designed to reduce bottlenecks, in what it described as \"the most advanced data center GPU ever built.\" The company revealed that this latest server is capable of training 30% larger machine learning data sets in about half the time. Facebook also indicated that the architecture was based on NVIDIA's DGX-1 \"AI supercomputer in a box.\"<\/p>\n<p>Though we have been hearing about almost daily breakthroughs in AI, it is important to remember that the science is still in its infancy and new developments will likely continue at a rapid pace. These advances provide for more efficient systems and lay the foundation for future progress in the field. 
These advances will propel future innovation, but they are difficult to quantify in dollars and cents, as are their potential effects on future revenue and profitability.<\/p>\n<p>Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares), Apple, and Facebook. Danny Vena has the following options: long January 2018 $85 calls on Apple, short January 2018 $90 calls on Apple, long January 2018 $640 calls on Alphabet (C shares), short January 2018 $650 calls on Alphabet (C shares), and long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Apple, Facebook, and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.<\/p>\n<p>Read the original here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.fool.com\/investing\/2017\/04\/18\/googles-new-chip-may-be-the-future-of-ai-systems.aspx\" title=\"Google's New Chip May Be the Future of AI Systems - The Motley Fool - Motley Fool\">Google's New Chip May Be the Future of AI Systems - The Motley Fool - Motley Fool<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Alphabet's (NASDAQ:GOOGL) (NASDAQ:GOOG) Google announced at its I\/O Developers Conference in May 2016 that it had designed a new chip, called the tensor processing unit (TPU), built specifically for the demands of training artificial intelligence (AI) systems. The company didn't divulge much at the time, but in a blog post that same week, hardware engineer Norm Jouppi revealed that Google had been running the TPU in the company's data centers for more than a year and... 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/googles-new-chip-may-be-the-future-of-ai-systems-the-motley-fool-motley-fool\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-188487","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/188487"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=188487"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/188487\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=188487"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=188487"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=188487"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}