{"id":238715,"date":"2017-08-25T01:27:17","date_gmt":"2017-08-25T05:27:17","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/intel-spills-details-on-knights-mill-processor-top500-top500-news.php"},"modified":"2017-08-25T01:27:17","modified_gmt":"2017-08-25T05:27:17","slug":"intel-spills-details-on-knights-mill-processor-top500-top500-news","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/intel-spills-details-on-knights-mill-processor-top500-top500-news.php","title":{"rendered":"Intel Spills Details on Knights Mill Processor | TOP500 &#8230; &#8211; TOP500 News"},"content":{"rendered":"<p>At the Hot Chips conference this week, Intel lifted the curtain a little higher on Knights Mill, a Xeon Phi processor tweaked for machine learning applications.<\/p>\n<p>As part of Intel's multi-pronged approach to AI, Knights Mill represents the chipmaker's first Xeon Phi offering aimed exclusively at the machine learning market, specifically for the training of deep neural networks. For the inferencing side of deep learning, Intel points to its Altera-based FPGA products, which are being used extensively by Microsoft in its Azure cloud (for both AI and network acceleration). Intel is also developing other machine learning products for training work, which will be derived from the Nervana technology the company acquired last year. These will include a Lake Crest coprocessor and, further down the road, a standalone Knights Crest processor.<\/p>\n<p>In the meantime, it will be up to Knights Mill to fill the gap between the current Knights Landing processor, a Xeon Phi chip designed for HPC work, and the future Nervana-based products. In this case, Knights Mill will inherit most of its design from Knights Landing, the most obvious modification being the amount of silicon devoted to lower precision math, the kind best suited for crunching on neural networks.<\/p>\n<p>Essentially, Knights Mill replaces the two large double precision\/single precision floating point (64-bit\/32-bit) ports on Knights Landing's vector processing unit (VPU) with one smaller double precision port and four Vector Neural Network Instruction (VNNI) ports. The latter support single precision floating point and mixed precision integers (16-bit input\/32-bit output). As such, it looks to be Intel's version of a tensor processing unit, which has its counterpart in the Tensor Cores on NVIDIA's new V100 GPU. That one, though, sticks with the more traditional 16\/32-bit floating point math.<\/p>\n<p>The end result is that, compared to Knights Landing, Knights Mill will provide half the double precision floating point performance and twice the single precision floating point performance. With the added VNNI integer support in the VPU (256 ops\/cycle), Intel is claiming Knights Mill will deliver up to four times the performance for deep learning applications.<\/p>\n<p>The use of integer units to beef up deep learning performance is somewhat unconventional, since most of these applications are used to employing floating point math. Intel, however, maintains that floating point offers little advantage in regard to accuracy, and is significantly more computationally expensive. Whether this tradeoff pans out or not remains to be seen.<\/p>\n<p>Knights Mill will also support 16 GB of MCDRAM, Intel's version of on-package high bandwidth memory assembled in a 3D stack, as well as 6 channels of DDR4 memory.
From the graphic they presented at Hot Chips (above), the design appears to support 72 cores, at least for this particular configuration. Given the 256 ops\/cycle value for the VPU, that would mean Knights Mill could deliver more than 27 teraops of deep learning performance for, say, a 1.5 GHz processor (72 cores × 256 ops\/cycle × 1.5 GHz ≈ 27.6 teraops).<\/p>\n<p>We'll find out what actual performance can be delivered once Intel starts cranking out the chips. Knights Mill is scheduled for launch in Q4 of this year.<\/p>\n<p>Follow this link:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.top500.org\/news\/intel-spills-details-on-knights-mill-processor\/\" title=\"Intel Spills Details on Knights Mill Processor | TOP500 ... - TOP500 News\">Intel Spills Details on Knights Mill Processor | TOP500 ... - TOP500 News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> At the Hot Chips conference this week, Intel lifted the curtain a little higher on Knights Mill, a Xeon Phi processor tweaked for machine learning applications. As part of Intel's multi-pronged approach to AI, Knights Mill represents the chipmaker's first Xeon Phi offering aimed exclusively at the machine learning market, specifically for the training of deep neural networks. For the inferencing side of deep learning, Intel points to its Altera-based FPGA products, which are being used extensively by Microsoft in its Azure cloud (for both AI and network acceleration). 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/intel-spills-details-on-knights-mill-processor-top500-top500-news.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[41],"tags":[],"class_list":["post-238715","post","type-post","status-publish","format-standard","hentry","category-super-computer"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/238715"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=238715"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/238715\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=238715"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=238715"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=238715"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}