{"id":227050,"date":"2017-07-11T11:03:38","date_gmt":"2017-07-11T15:03:38","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/openpower-efficiency-tweaks-define-europes-davide-supercomputer-the-next-platform.php"},"modified":"2017-07-11T11:03:38","modified_gmt":"2017-07-11T15:03:38","slug":"openpower-efficiency-tweaks-define-europes-davide-supercomputer-the-next-platform","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/openpower-efficiency-tweaks-define-europes-davide-supercomputer-the-next-platform.php","title":{"rendered":"OpenPower, Efficiency Tweaks Define Europe&#8217;s DAVIDE Supercomputer &#8211; The Next Platform"},"content":{"rendered":"<p><p>    July 11, 2017 Jeffrey Burt  <\/p>\n<p>    When talking about the future of supercomputers and    high-performance computing, the focus tends to fall on the    ongoing and high-profile competition between the United States    with its slowly eroding place as the kingpin in the industry    and China and the tens of billions of dollars that the    government has invested in recent years to rapidly expand the    reach of the countrys tech community and the use of home-grown    technologies in massive new systems.  <\/p>\n<p>    Both trends were on display at the recent International    Supercomputing Conference in Frankfurt, Germany, where China    not only continued to hold the top two spots on the Top500 list    of the worlds fastest supercomputers with the Sunway    TaihuLight and Tianhe-2 systems, but a performance boost in the    Cray-based Piz Daint supercomputer in Switzerland pushed it    into the number-three spot, marking only the second time in 24    years  and the first time since November 1996  that a    U.S.-based supercomputer has not held one of the top three    spots.  
<\/p>\n<p>    That said, the United States had five of the top 10 fastest    systems on the list, and still has the most  169  of the top    500, with China trailing in second at 160. But as weve        noted here at The Next Platform, the United    States dominance of the HPC space is no longer assured, and    the combination of Chinas aggressive efforts in the field and    worries about what a Trump administration may mean to funding    of U.S. initiatives has fueled speculation of what the future    holds and has garnered much of the attention.  <\/p>\n<p>    However, not to be overlooked, Europe  as illustrated by the    rise of the Piz Daint system, which saw its performance double    with the addition of more Nvidia Tesla 100 GPU accelerators     is making its own case as a significant player in the HPC    arena. For example, Germany houses 28 of the supercomputers in    the latest    Top500 list released last month, with France and the United    Kingdom both with 17. In total, Europe houses 105 of the Top500    systems, good for 21 percent of the market and third behind the    United States and China.  <\/p>\n<p>    Something that didnt generate a lot of attention at the show    was the introduction of the DAVIDE (Development for an Added    Value Infrastructure Designed in Europe) system onto the list,    coming in at 299 with a performance of 654.2 teraflops and a    peak performance of more than 991 teraflops. Built by Italian    vendor E4 Computer Engineering, DAVIDE came out of a multi-year    Pre-Commercial Procurement (PCP) project of the Partnership for    Advanced Computing in Europe (PRACE), a nonprofit association    headquartered in Brussels. PRACE in 2014 kicked off the first    phase of its project to fund the development of a highly    power-efficient HPC system design. PCP is a process in Europe    in which different vendors compete through multiple phases of    development in a project. 
A procurer like PRACE gets multiple vendors involved in the initial stage of the program, and the vendors then compete through phases: solution design, prototyping, original development, and validation and testing of first products. Through each evaluation phase, the number of competing vendors is reduced. The idea is to have the procurers share the risks and benefits of innovation with the vendors. Over the course of almost three years, the number of system vendors for this program was whittled down from four (E4, Bull, Megaware, and Maxeler Technologies), with E4 last year being awarded the contract to build its system.<\/p>\n<p>The goal was to build an HPC system that can run highly parallelized, memory-intensive workloads such as weather forecasting, machine learning, genomic sequencing, and computational fluid dynamics, the types of applications that are becoming more commonplace in HPC environments. At the same time, power efficiency also was key: with a peak performance of more than 991 teraflops, the 45-node cluster consumes less than 2 kilowatts per node, according to E4.<\/p>\n<p>E4 already offers systems powered by Intel's x86 processors, ARM-based chips, and GPUs, but for DAVIDE the company opted for IBM's OpenPower Foundation, an open hardware development community that IBM officials launched with partners like Nvidia and Google in 2014 to extend the reach of the vendor's Power architecture beyond the core data center and into new growth areas like hyperscale environments and emerging workloads, including machine learning, artificial intelligence, and virtual reality, while cutting into Intel's dominant share of the server chip market. E4 already is a member of the foundation and builds other systems running on Power.
<\/p>\n<p>    Each 2U node of the cluster is based on the OpenPower systems    design codenamed Minsky and runs on two 3.62GHz Power8+    eight-core processors and four     Nvidia Tesla P100 SXM2 GPUs based on the companys Pascal    architecture and aimed at such modern workloads as artificial    intelligence and climate predictions. The GPUs are hooked into    the CPUs via     Nvidias NVLink high-speed interconnect, and the nodes use    Mellanox Technologies EDR 100 Gb\/s Infiniband interconnects as    well as 1 Gigabit Ethernet networking. Each node offers a    maximum performance of 22 TFlops. In total, the cluster runs on    10,800 Power8+ cores, with 11,520GB of memory and SSD SATA        and NVMe drives for storage. It runs the CentOS Linux    operating system  <\/p>\n<p>    The Minsky nodes are designed to be air-cooled, but to increase    the power efficiency of the system, E4 is leveraging technology    from CoolIT Systems that uses direct hot-water cooling  at    between 35 and 40 degrees Celsius  for the CPUs and GPUs, with    cooling capacity of 40 kW. Each rack includes an independent    liquid-liquid or liquid\/air heat exchanger unit with redundant    pumps, and the compute nodes are connected to the heat exchange    via pipes and a side bar for water distribution. E4 officials    estimate that the CoolIT technology can extract about 80    percent of heat generated by the nodes.  <\/p>\n<p>    The chassis is custom-built and based on the OpenRack form    factor.  <\/p>\n<p>    Power efficiency is further enhanced by software developed in    conjunction with the University of Bologna that enables    fine-grain measuring and monitoring of power consumption by the    nodes and the system as a whole through data collected from    components like the CPUs, GPUs, memory components and fans.    
There is also the ability to cap the amount of power used, to schedule tasks based on the amount of power being consumed, and to profile the power an application uses, according to the vendor. A dedicated power monitor interface, based on the BeagleBone Black board, an open-source development platform, enables frequent direct sampling from the power backplane and integrates with the system-level power management software.<\/p>\n<p>DAVIDE takes advantage of other IBM technologies, including the IBM XL compilers, the ESSL math library, and Spectrum MPI, and APIs enable developers to tune the cluster's performance and power consumption.<\/p>\n<p>E4 currently is making DAVIDE available to some users for such jobs as porting applications and profiling energy consumption.<\/p>\n<p>Categories: HPC, ISC17<\/p>\n<p>Tags: e4<\/p>\n<p>Continue reading here: <\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nextplatform.com\/2017\/07\/11\/openpower-efficiency-tweaks-define-europes-davide-supercomputer\/\" title=\"OpenPower, Efficiency Tweaks Define Europe's DAVIDE Supercomputer - The Next Platform\">OpenPower, Efficiency Tweaks Define Europe's DAVIDE Supercomputer - The Next Platform<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> July 11, 2017 Jeffrey Burt When talking about the future of supercomputers and high-performance computing, the focus tends to fall on the ongoing and high-profile competition between the United States with its slowly eroding place as the kingpin in the industry and China and the tens of billions of dollars that the government has invested in recent years to rapidly expand the reach of the country's tech community and the use of home-grown technologies in massive new systems.
Both trends were on display at the recent International Supercomputing Conference in Frankfurt, Germany, where China not only continued to hold the top two spots on the Top500 list of the world's fastest supercomputers with the Sunway TaihuLight and Tianhe-2 systems, but a performance boost in the Cray-based Piz Daint supercomputer in Switzerland pushed it into the number-three spot, marking only the second time in 24 years and the first time since November 1996 that a U.S.-based supercomputer has not held one of the top three spots <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/openpower-efficiency-tweaks-define-europes-davide-supercomputer-the-next-platform.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[41],"tags":[],"class_list":["post-227050","post","type-post","status-publish","format-standard","hentry","category-super-computer"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/227050"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=227050"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/227050\/revisions"}],"wp:attachment":[{"href":"https:\/\/
www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=227050"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=227050"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=227050"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}