Supercomputer arms race hots up as US commits $258m to research – Siliconrepublic.com

To try and stay ahead in the supercomputer arms race with China, the US is pumping $258m into six firms to speed up development.

While various countries including Ireland are investing time and money to develop the next generation of supercomputers capable of performing enormous computations in seconds, the US and China are firmly in the lead.

Within the US, the most notable example would be IBM's Watson programme, which plans to overhaul various sectors from healthcare to insurance with artificial intelligence and big data.

However, aiming to be the world leader in supercomputers by 2020 is China, which last year revealed the most powerful machine in the world, the Sunway TaihuLight, with 93 petaflops of processing power.

Now, the supercomputer arms race is heating up with news that the US Department of Energy (DoE) is pumping $258m into research in this field across six American tech companies: IBM, Intel, HP Enterprise, Nvidia, Cray and AMD.

The purpose of the PathForward programme, the department said, is to maximise the energy efficiency and overall performance of future large-scale supercomputers.

As part of the deal, the companies will be providing additional funding, amounting to at least 40pc of their total project cost, bringing the total investment to at least $430m.

"The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand," said Paul Messina, director of the DoE's Exascale Computing Project.

"It is essential that private industry plays a role in this work going forward. Advances in computer hardware and architecture will contribute to meeting all four challenges."

The head of the DoE, Rick Perry, went so far as to say that supercomputers will be essential to the US's security, prosperity, and economic competitiveness as a nation.

However, the US is now facing the prospect of a three-way race to supercomputing domination, with news that Japan is currently building a supercomputer that can outperform China's Sunway TaihuLight.

According to CNN, Japan's AI Bridging Cloud Infrastructure supercomputer will be capable of performing at an incredible 130 petaflops, or 130 quadrillion calculations per second.

Read more:

Supercomputer arms race hots up as US commits $258m to research - Siliconrepublic.com

Kyushu University Orders Fujitsu Supercomputer – insideHPC

Today Fujitsu announced an order from the Research Institute for Information Technology at Kyushu University for a new supercomputer system for deployment in October 2017.

This system will consist of over 2,000 servers, including the Fujitsu Server PRIMERGY CX400, the next-generation model of Fujitsu's x86 server. It is expected to offer top-class performance in Japan, providing a theoretical peak performance of about 10 petaflops. This will also be Japan's first supercomputer system featuring a large-scale private cloud environment constructed on a front-end subsystem, linked with a computational server of a back-end subsystem through a high-speed file system.

The Research Institute for Information Technology will use this system as computational resources for JHPCN and HPCI, as well as a variety of user programs. By making this system available to users both inside and outside of the university, it will enhance the platform for academic research in Japan and contribute to the development of new academic research including AI. Fujitsu will use the technology and experience it has developed as Japan's top supercomputer maker to strongly support the activities of the Research Institute for Information Technology.

As a center for education and research, Kyushu University is the largest public university in the Kyushu region, and the Research Institute for Information Technology is a nationwide joint-use facility visited by professors, graduate students, and researchers across Japan for academic research.

Currently, the Research Institute for Information Technology operates three systems: a supercomputer system (consisting of the Fujitsu Supercomputer PRIMEHPC FX10 system), a high-performance computational server system (made up of Fujitsu PRIMERGY CX400 x86 servers), and a high-performance applications server system. The three systems will be integrated as part of the new supercomputer system, creating an environment that can meet an even wider variety of needs, extending beyond the current large-scale computation and scientific simulations, to include usage and research that require extremely large-scale computation, such as AI, big data, and data science.

System Features

Follow this link:

Kyushu University Orders Fujitsu Supercomputer - insideHPC

Atos to Build 9 Petaflop Supercomputer for GENCI – insideHPC

Today Atos announced that the company has won a contract with GENCI to deliver one of the most powerful supercomputers in the world, planned for the end of 2017. A successor of the Curie system installed at the TGCC, the Bull Sequana supercomputer will have an overall power of 9 petaflops for research purposes in France and Europe.

"GENCI welcomes the acquisition of a new Atos supercomputer, which will actively help us to maintain the scientific competitiveness of our French researchers. This highly anticipated acquisition indeed marks an acceleration in French investment in research organizations. This will also be of great advantage to industrial companies, especially SMEs and start-ups. This high-performance equipment spearheads our commitment to the European research infrastructure PRACE (PartneRship for Advanced Computing in Europe), in which GENCI is highly involved in representing France," says Philippe Lavocat, CEO of GENCI.

The new supercomputer will be made available to French and European researchers for use in academic and industrial fields that require extremely high computing and data processing power.

The applications for intensive computing are many and varied, such as climatology, where the supercomputer will help to model past, present and future meteorological conditions with incredible accuracy within the framework of international activities carried out by the Intergovernmental Panel on Climate Change (IPCC)[1]. When applied to life sciences, the high-performance computer will make it possible to work on the scale of basic chemical processes in molecular systems, thus paving the way for major advances in the personalization of medicine. In the energy industry, the supercomputer will not only optimize the process of combustion in motors and turbines, but also develop alternatives based on electricity with new-generation batteries in the wind, tidal and, in future, fusion power sectors. In astrophysics, only a supercomputer with this kind of power is capable of simulating the entire universe, thus putting us in a position to better understand its origin and evolution.

Based on the platform of the latest generation of the Bull Sequana X1000, which will eventually be capable of achieving an exaflop (a billion billion operations per second), the first installment of this supercomputer will have a peak computing power of 8.9 petaflops and a distributed memory capacity of almost 400 terabytes. An extension of its configuration is planned for 2019, when its computing capacity is set to increase to more than 20 petaflops.

Consisting of more than 124,000 computing cores, the supercomputer will benefit from the patented direct liquid cooling (DLC) technology used to cool the system down to room temperature, creating an energy saving of up to 40% compared to air cooling.
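
As a rough cross-check of those two headline figures, the quoted core count and peak performance imply a per-core throughput that is plausible for a wide-vector x86 core; interpreting the division this way is my own reading, not a claim from Atos.

```python
# Simple consistency check using only the figures quoted above.
peak_flops = 8.9e15   # first instalment: 8.9 petaflops peak
cores = 124_000       # "more than 124,000 computing cores"

per_core_gflops = peak_flops / cores / 1e9
print(f"~{per_core_gflops:.0f} GFLOPS per core at peak")  # ~72 GFLOPS per core
```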

"The design of Curie's successor illustrates the expertise of Atos engineers, confirming Atos' European leadership in the supercomputer domain. Following on from the Curie supercomputer, delivered in 2011 and already amongst the most powerful in the world, we are proud of the renewed trust that GENCI has placed in us to accompany them each day in their development of academic and industrial research," states Philippe Vannier, Group Advisor for Technology at Atos.

The Atos group, the European leader in supercomputers, has 22 supercomputers in the TOP500 list of the most powerful machines in the world.

Specifications of the new supercomputer:

The entire solution runs under the new Bull SCS 5 environment, based on the Red Hat 7.x Linux operating system. The solution also includes a multi-level Lustre storage cluster delivering a minimum throughput of 300 GB/s (500 GB/s over time) for a minimum required data capacity of 5 PiB.

Atos will exhibit its full breadth of HPC solutions at ISC 2017 in Frankfurt, booth #D-1126.

Follow this link:

Atos to Build 9 Petaflop Supercomputer for GENCI - insideHPC

US gov’t taps The Machine to beat China to exascale supercomputing – Ars Technica

Here's a gallery from a 40-node version of The Machine (aka HPE's Memory-Driven Computing initiative). These appear to be fibre-optic cables connected to some kind of chip, but it's hard to divine much more than that.

HPE chose some seriously bright neon green lights for its prototype machine.

It almost looks radioactive.

This is apparently one of HPE's X1 silicon photonics interconnect chips (in the middle of the metal clamp thing).

To create an effective exascale supercomputer from scratch, you must first invent the universe, or at least solve three problems: the inordinate power usage (gigawatts) and cooling requirements; developing the architecture and interconnects to efficiently weave together hundreds of thousands of processors and memory chips; and devising an operating system and client software that actually scales to one quintillion calculations per second.

You can still physically build an exascale supercomputer without solving all three problems (just strap together a bunch of CPUs until you hit the magic number), but it won't perform a billion-billion calculations per second, or it'll be untenably expensive to operate. That seems to be China's approach: plunk down most of the hardware in 2017, and then spend the next few years trying to make it work.
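
To see why power heads the list of problems, a back-of-the-envelope sketch helps; the efficiency figures below are illustrative assumptions, not numbers from the article.

```python
# Illustrative arithmetic only: the FLOPS-per-watt figures are assumed.
EXAFLOP = 1e18  # one quintillion floating-point operations per second

efficiencies = [
    ("~1 GFLOPS/W (older large system)", 1e9),
    ("~2 GFLOPS/W", 2e9),
    ("~10 GFLOPS/W (roughly the 2017 Green500 leaders)", 10e9),
]
for label, flops_per_watt in efficiencies:
    megawatts = EXAFLOP / flops_per_watt / 1e6
    print(f"{label}: {megawatts:,.0f} MW for one exaflop")
# At ~1 GFLOPS/W an exaflop machine draws about a gigawatt; improving efficiency
# is what brings that down to something a facility can realistically power and cool.
```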

The DoE, on the other hand, is wending its way down a more sedate path by funding HPE (and other supercomputer makers) to develop an exascale reference design. The funding is coming from a DoE programme called PathForward, which is part of its larger Exascale Computing Project (ECP). The ECP, which was set up under the Obama administration, has already awarded tens of millions of dollars to various exascale research efforts around the US. It isn't clear how much funding has been received by HPE.

So, what's HPE's plan? And is there any hope that HPE can pass through three rounds of the DoE funding programme and build an exascale supercomputer before China?

In addition, and perhaps most importantly, HPE says it has developed software tools that can actually use this huge pool of memory, to derive intelligence or scientific insight from huge data sets: every post on Facebook; the entirety of the Web; the health data of every human on Earth; that kind of thing. Check out this quote from CTO Mark Potter, who apparently thinks HPE's tech can save humankind: "We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society. The architecture we have unveiled can be applied to every computing category, from intelligent edge devices to supercomputers."

In practice I think we're some way from realising Potter's dream, but HPE's tech is certainly a good first step towards exascale. If we compare HPE's efforts to the three main issues I outlined above, you'd probably award a score of about 1.5: they've made inroads on software, power consumption, and scaling, but there's a long way to go, especially when it comes to computational grunt.

After the US government banned the export of Intel, Nvidia, and AMD chips to China, China's national chip design centre created a 256-core RISC chip specifically for supercomputing. All that HPE can offer is the Gen-Z protocol for chip-to-chip communications, and hope that a logic chip maker steps forward. Still, this is just the first stage of funding, where HPE is expected to research and develop core technologies that will help the US reach exascale; only if it gets to phase two and three will HPE have to design and then build an exascale machine.

Most of the DoE's exascale funding has so far been on software. Just before this story published, we learnt that the DoE is also announcing funding for AMD, Cray, IBM, Intel, and Nvidia under the same PathForward programme. In total, the DoE is handing out $258 million over three years, with the funding recipients also committing to spend at least $172 million of their own funds over the same period. What we don't yet know is what those companies are doing with that funding; hopefully we'll find out more soon.

Now read about how cheap RAM changes computing...

This post originated on Ars Technica UK

Listing image by HPE

More:

US gov't taps The Machine to beat China to exascale supercomputing - Ars Technica

NEC LX Supercomputer coming to Czech Hydrometeorological Institute – insideHPC

Today NEC Corporation announced that the Czech Hydrometeorological Institute (CHMI) will soon deploy an NEC LX series supercomputer for weather forecasting.

"We are very happy that CHMI has selected NEC to deliver an HPC solution for their weather and climate simulations, as NEC has a very special expertise in meteorological applications. For years, we have been successfully collaborating with meteorological institutes, and we look forward to cultivating these partnerships further," said Andreas Göttlicher, Senior Sales Manager, NEC Deutschland.

NEC's scale-out LX series HPC cluster will enable the Czech Hydrometeorological Service to increase the accuracy of numerical weather forecasting and related applications, namely warning systems. Weather prediction models are increasingly complex, including rainfall, temperature, wind and related variables that have to be calculated as precisely as possible several days in advance. At the same time, regional peculiarities such as orography and terrain physiography need to be considered. In addition, the public must be made aware quickly of predictions of high-impact weather events affecting daily life, including environmental risks linked to air pollution. High-performance computing is therefore needed for running and completing weather and climate simulation jobs in time.

System Details

The new system is scheduled to be put into operation in early 2018.

More:

NEC LX Supercomputer coming to Czech Hydrometeorological Institute - insideHPC

China’s New Supercomputer Puts the US Even Further Behind


Originally posted here:

China's New Supercomputer Puts the US Even Further Behind

Department Of Energy Invests $258 Million To Build An Exascale … – Forbes


Read the original here:

Department Of Energy Invests $258 Million To Build An Exascale ... - Forbes

Fujitsu to provide Kyushu University with new supercomputer for AI – ZDNet

The Research Institute for Information Technology at Kyushu University in Fukuoka, Japan, has announced it will be receiving a new supercomputer system in October, which will be used by universities around the country to advance research in areas such as artificial intelligence (AI).

The university will use the new system from Fujitsu as a computational resource for the JHPCN Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures, which is a network of joint use locations made up of supercomputer facilities at Hokkaido University, Tohoku University, the University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, and Kyushu University, with the Information Technology Center at the University of Tokyo serving as the core location.

It will also be used by the high performance computing infrastructure (HPCI), a computing environment that connects the K computer and major supercomputers located in universities and laboratories across Japan.

Making the new supercomputer available to users both inside and outside of the university is expected to enhance the platform for academic research in Japan and contribute to the development of new academic research in areas such as AI.

The server system of the new supercomputer system from Fujitsu will consist primarily of a back-end subsystem, a front-end subsystem, and a storage subsystem.

In the back-end, the computational nodes will be made up of 2,128 Primergy CX400 systems, equipped with Intel Skylake Xeon processors and boasting 433 terabytes of total memory capacity. Of these x86 servers, 128 will each be equipped with four Nvidia Tesla P100 GPU computing cards.

The front-end subsystem will comprise 160 basic front-end nodes featuring Intel Skylake Xeon processors and Nvidia Quadro P4000 graphics cards, as well as four high-capacity front-end nodes featuring 12 terabytes of memory each, in addition to other servers.

With a theoretical peak performance of about 10 petaflops, the system will also comprise a 24 petabyte storage system, 100Gbps InfiniBand EDR interconnect, and Fujitsu's scalable cluster file system, FEFS.
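
A few crude averages drawn from those figures help put the configuration in perspective. They ignore the split between CPU-only and GPU-equipped nodes, so treat them as rough sanity checks rather than per-node specifications.

```python
# Rough averages computed only from the numbers quoted above.
nodes = 2128          # back-end Primergy CX400 computational nodes
total_mem_tb = 433    # total memory capacity, terabytes
peak_pflops = 10      # theoretical peak of the whole system, petaflops
gpu_nodes = 128       # nodes carrying four Tesla P100 cards each

print(f"average memory per node : {total_mem_tb * 1000 / nodes:.0f} GB")    # ~203 GB
print(f"average peak per node   : {peak_pflops * 1000 / nodes:.1f} TFLOPS") # ~4.7 TFLOPS
print(f"total GPU cards         : {gpu_nodes * 4}")                         # 512
```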

It will also be Japan's first supercomputer system featuring a large-scale private cloud environment constructed on a front-end subsystem, linked with a computational server of a back-end subsystem through a high-speed file system.

The Riken Center for Advanced Intelligence Project in Japan received its own deep learning supercomputer in April, which will be used to accelerate research and development into the "real-world" application of AI technology.

The system is comprised of two server architectures, with 24 Nvidia DGX-1 servers -- each including eight of the latest Nvidia Tesla P100 accelerators and integrated deep learning software -- and 32 Fujitsu Server Primergy RX2530 M2 servers, along with a high-performance storage system.

Its file system is also FEFS on six Fujitsu Server Primergy RX2540 M2 PC servers; eight Fujitsu Storage Eternus DX200 S3 storage systems; and one Fujitsu Storage Eternus DX100 S3 storage system to provide the IO processing demanded by deep learning analysis.

The Czech Hydrometeorological Institute also announced it would be receiving a supercomputer on Thursday, with NEC to provide the government agency with scale-out LX series compute servers for weather forecasting.

It is expected the NEC cluster will enable the Czech Hydrometeorological Service to increase the accuracy of numerical weather forecasting and related applications, such as warning systems.

The NEC system will deliver the computational power of more than 300 nodes, connected through a high-speed Mellanox EDR InfiniBand network, and containing Intel Xeon E5-2600 v4 product family dual socket compute nodes, with a total of over 3,500 computational cores.

The new system is more than 80 times faster than the currently used system, and will be operational come early 2018, the company said.

This HPC solution also consists of a high-performance storage solution based on the NEC LXFS-z parallel file-system appliance, with over 1 petabyte of storage capacity and bandwidth of more than 30GB/s, which are required to meet the production needs of the weather institute.

Original post:

Fujitsu to provide Kyushu University with new supercomputer for AI - ZDNet

The US government is asking tech giants to build a new supercomputer that’s 50 times more powerful – Fast Company

Facebook is unquestionably the largest social network the world has ever seen. Every month, 1.94 billion people use the service. Every day, 1.28 billion people, about one in seven on the entire planet, use it. With that scale comes all kinds of responsibilities.

That's why Facebook has decided to formally address what it calls the "hard questions," the things that it feels will most govern what it does, and how it should be governed, going forward.

In a blog post, Elliot Schrage, Facebook's vice president for public policy and communications, wrote that the company wants to talk "openly" about these "complex subjects":

* How should platforms approach keeping terrorists from spreading propaganda online?

* After a person dies, what should happen to their online identity?

* How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what's controversial, especially in a global community with a multitude of cultural norms?

* Who gets to define what's false news and what's simply controversial political speech?

* Is social media good for democracy?

* How can we use data for everyone's benefit, without undermining people's trust?

* How should young internet users be introduced to new ways to express themselves in a safe environment?

Facebook recognizes that not everyone will be in lock-step with it on how it addresses those questions, and it knows people will think there are other hard questions that need to be looked at as well. So the company is inviting users to suggest additional questions at hardquestions@fb.com.

Meanwhile, the folks at TechCrunch have annotated Facebook's list with their thoughts on the context behind each of the seven initial questions. DT

Visit link:

The US government is asking tech giants to build a new supercomputer that's 50 times more powerful - Fast Company

Super computer predicts Premier League table after five games and it’s great news for promoted duo Newcastle and … – The Sun

Tottenham well outside the European places, with Stoke, Swansea and Crystal Palace sitting in the relegation spots

A SUPER computer has predicted the Premier League table after five games and it's great news for promoted duo Newcastle and Huddersfield.

While the jury remains out on those two sides and Brighton after they broke into the top flight for 2017-18, the powerful machine has suggested they'll get off to a strong start.

Brighton, despite winning only one of their opening five, lose just twice to reach 14th in the table in their debut Premier League season.

It's even better news for Huddersfield, also tasting the Prem for the first time, who crack the top ten.

David Wagner's side win two and lose two of their opening five games to squeeze above Southampton, Everton and West Ham in ninth place.

Meanwhile, Rafa Benitez's Newcastle are flying high in sixth spot, with two wins and two draws from their first five fixtures.

Check out the full list of every Premier League side's fixtures, here.

In fact, the Toon are above last season's title challengers Tottenham - who sit only in seventh, also with eight points from their first handful of fixtures.

Arsenal are good for fifth with three wins and just the one loss from their first few games - level on points with fourth-placed Chelsea.

Manchester United are unbeaten according to the super computer with three wins and two draws for 11 points.

According to the machine, Liverpool and Manchester City are the teams to beat in 2017-18, both of whom win four of their opening five games for 13 points.

The game the duo draw in the simulation is the game against each other on September 9 - with Pep Guardiola and Jurgen Klopp set for a titanic battle in the upcoming season.

At the bottom of the table, Stoke are the only side yet to register a win, with FOUR losses in their opening five games.

Swansea and Crystal Palace are also in the bottom three, with Bournemouth, Watford and Burnley just a point above the drop zone.

See the original post:

Super computer predicts Premier League table after five games and it's great news for promoted duo Newcastle and ... - The Sun

Fujitsu Receives Order from Kyushu University for Top-Class … – HPCwire (blog)

TOKYO, June 15, 2017 Fujitsu today announced that it has received an order from the Research Institute for Information Technology at Kyushu University for a new supercomputer system, which will steadily ramp up operations starting in October 2017.

This system will consist of over 2,000 servers, including the Fujitsu Server PRIMERGY CX400, the next-generation model of Fujitsu's x86 server. It is expected to offer top-class performance in Japan, providing a theoretical peak performance of about 10 petaflops(1). This will also be Japan's first supercomputer system featuring a large-scale private cloud environment constructed on a front-end subsystem, linked with a computational server of a back-end subsystem through a high-speed file system.

The Research Institute for Information Technology will use this system as computational resources for JHPCN(2) and HPCI(3), as well as a variety of user programs. By making this system available to users both inside and outside of the university, it will enhance the platform for academic research in Japan and contribute to the development of new academic research including AI. Fujitsu will use the technology and experience it has developed as Japan's top supercomputer maker to strongly support the activities of the Research Institute for Information Technology.

Background of the New System Implementation

As a center for education and research, Kyushu University is the largest public university in the Kyushu region, and the Research Institute for Information Technology is a nationwide joint-use facility visited by professors, graduate students, and researchers across Japan for academic research.

Currently, the Research Institute for Information Technology operates three systems: a supercomputer system (consisting of the Fujitsu Supercomputer PRIMEHPC FX10 system), a high-performance computational server system (made up of Fujitsu PRIMERGY CX400 x86 servers), and a high-performance applications server system. The three systems will be integrated as part of the new supercomputer system, creating an environment that can meet an even wider variety of needs, extending beyond the current large-scale computation and scientific simulations, to include usage and research that require extremely large-scale computation, such as AI, big data, and data science.

Features of the New System

1. Server System

The server system of the new supercomputer system will consist primarily of a back-end subsystem, a front-end subsystem, and a storage subsystem.

Back-end subsystem: The computational nodes will be made up of 2,128 PRIMERGY CX400 systems, the next-generation model of Fujitsu's x86 server, equipped with Intel Xeon Processor Scalable family CPUs (codename: Skylake). Of these servers, 128 will each be equipped with four NVIDIA Tesla P100 GPU computing cards (a total of 512 cards, connected with NVIDIA NVLink(4)).

Front-end subsystem: This subsystem will consist of 160 basic front-end nodes featuring Intel Xeon Processor Scalable family CPUs (codename: Skylake) and NVIDIA Quadro P4000 graphics cards, as well as four high-capacity front-end nodes featuring 12 terabytes of memory each, and other servers.

Storage subsystem: Fujitsu will deploy a disk array system with an effective capacity of over 24 petabytes.

2. Interconnect

Using the latest high-speed EDR InfiniBand interconnect to connect the servers, this system offers high parallel computation performance and availability.

3. File System

This supercomputer system will be built using FEFS(5), Fujitsu's high-capacity, high-performance, high-reliability distributed file system with a proven track record both inside and outside Japan.

4. Power Saving Feature

This will incorporate a system to monitor electricity usage. By using the Fujitsu Software Technical Computing Suite(6), the HPC middleware, this system will flexibly control electricity usage through functions such as limiting the maximum electricity consumption for each system user.

Overview of the New System

(1) Petaflops: Short for peta floating-point operations per second. Peta is an SI prefix indicating one quadrillion, or ten to the power of 15, so this indicates performance of one quadrillion floating-point operations per second.
(2) JHPCN: Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures. A network of joint use and research locations made up of supercomputer facilities at Hokkaido University, Tohoku University, the University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, and Kyushu University, with the Information Technology Center at the University of Tokyo serving as the core location.
(3) HPCI: High performance computing infrastructure. A computing environment that connects the K computer and major supercomputers located in universities and laboratories across Japan in a network to meet the diverse needs of users.
(4) NVIDIA NVLink: A high-bandwidth interconnect with high energy efficiency. The next-generation model of PRIMERGY CX400 is equipped with a maximum of four GPU computing cards per server and, by connecting them with NVIDIA NVLink, provides ultra-high-speed communications.
(5) FEFS: A high-performance distributed file system that can be shared across on the order of a hundred thousand nodes.
(6) Fujitsu Software Technical Computing Suite: HPC middleware that offers high execution performance for massively parallel applications through system management and job operation functionality, compilers, libraries and so forth.

About Fujitsu Ltd

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 159,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702; ADR:FJTSY) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see http://www.fujitsu.com.

Source: Fujitsu

Read more:

Fujitsu Receives Order from Kyushu University for Top-Class ... - HPCwire (blog)

Atos Wins Contract for GENCI Supercomputer – HPCwire (blog)

PARIS, June 15, 2017 Atos, through its technology brand Bull, has won a contract with GENCI (Grand Équipement National de Calcul Intensif) to deliver one of the most powerful supercomputers in the world, planned for the end of 2017. A successor of the Curie system installed at the TGCC (Très Grand Centre de Calcul of the CEA in Bruyères-le-Châtel), the Bull Sequana supercomputer has an overall power of 9 petaflops and can carry out 9 million billion operations per second. It will be used for research purposes in France and Europe. The announcement was formalised yesterday at the Ministry of Higher Education, Research and Innovation.

A supercomputer to speed up academic and industrial research

The new supercomputer will be made available to French and European researchers for use in academic and industrial fields that require extremely high computing and data processing power.

The applications for intensive computing are many and varied, such as climatology, where the supercomputer will help to model past, present and future meteorological conditions with incredible accuracy within the framework of international activities carried out by the Intergovernmental Panel on Climate Change (IPCC)[1]. When applied to life sciences, the high-performance computer will make it possible to work on the scale of basic chemical processes in molecular systems, thus paving the way for major advances in the personalisation of medicine. In the energy industry, the supercomputer will not only optimise the process of combustion in motors and turbines, but also develop alternatives based on electricity with new-generation batteries in the wind, tidal and, in future, fusion power sectors. In astrophysics, only a supercomputer with this kind of power is capable of simulating the entire universe, thus putting us in a position to better understand its origin and evolution.

The equivalent of 75,000 PCs connected together

Based on the platform of the latest generation of the Bull Sequana X1000, which will eventually be capable of achieving an exaflop (a billion billion operations per second), the first instalment of this supercomputer will have a peak computing power of 8.9 petaflops and a distributed memory capacity of almost 400 terabytes. An extension of its configuration is planned for 2019, when its computing capacity is set to increase to more than 20 petaflops.

Consisting of more than 124,000 computing cores, the supercomputer will benefit from the patented direct liquid cooling (DLC) technology used to cool the system down to room temperature, creating an energy saving of up to 40% compared to air cooling.

"The design of Curie's successor illustrates the expertise of Atos engineers, confirming Atos' European leadership in the supercomputer domain. Following on from the Curie supercomputer, delivered in 2011 and already amongst the most powerful in the world, we are proud of the renewed trust that GENCI has placed in us to accompany them each day in their development of academic and industrial research," states Philippe Vannier, Group Advisor for Technology at Atos.

"GENCI welcomes the acquisition of a new Atos supercomputer, which will actively help us to maintain the scientific competitiveness of our French researchers. This highly anticipated acquisition indeed marks an acceleration in French investment in research organisations. This will also be of great advantage to industrial companies, especially SMEs and start-ups. This high-performance equipment spearheads our commitment to the European research infrastructure PRACE (PartneRship for Advanced Computing in Europe), in which GENCI is highly involved in representing France," says Philippe Lavocat, CEO of GENCI.

The Atos group, the European leader in supercomputers, has 22 supercomputers in the TOP500 list of the most powerful machines in the world.

Specifications of the new supercomputer:

Atos at ISC

Atos is a Platinum Sponsor at the ISC High Performance Annual Conference that takes place in Frankfurt from 19th-22nd June 2017, and will exhibit the full breadth of its High Performance Computing offer on booth #D-1126. Follow the developments online at @Atos, @Bull_com

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around €12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications, Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index. www.atos.net

Bull is the Atos brand for its technology products and software, which are today distributed in over 50 countries worldwide. With a rich heritage of over 80 years of technological innovation, 2,000 patents and a 700-strong R&D team supported by the Atos Scientific Community, it offers products and value-added software to assist clients in their digital transformation, specifically in the areas of Big Data and Cybersecurity and Defense. www.bull.com | Follow @Bull_com

Atos' expertise in HPC:

Atos has an ambitious exascale program that aims to develop a new generation of supercomputers capable of achieving exaflops performance by 2020, meaning more than one billion billion operations per second, while considerably reducing electricity consumption. A total of 22 Atos supercomputers worldwide are ranked in the TOP500. In November 2016, 18 Atos supercomputers were listed on the Green500 list (of which 6 were in the top 50).

Spokespersons present during ISC:

Atos presentations:

Source: Atos

See the article here:

Atos Wins Contract for GENCI Supercomputer - HPCwire (blog)

The Largest Virtual Universe Ever Created Was Just Simulated in a Supercomputer – Futurism

In Brief Researchers at the University of Zurich have created the largest virtual universe inside a supercomputer. The simulation will help them explore the nature of dark matter and dark energy to prepare the Euclid satellite for its exploratory travels.

University of Zurich (UZH) researchers have simulated the way our Universe formed by creating the largest virtual universe with a large supercomputer. The team turned 2 trillion digital particles into around 25 billion virtual galaxies that together make up this virtual universe and galaxy catalogue. The catalogue will be used on the Euclid satellite, to be launched in 2020 to explore the nature of dark energy and dark matter, to calibrate the experiments.

"The nature of dark energy remains one of the main unsolved puzzles in modern science," UZH professor of computational astrophysics Romain Teyssier said in a press release. Euclid will not be able to view dark matter or dark energy directly; the satellite will instead measure the tiny distortions of light of distant galaxies by invisible mass distributed in the foreground: dark matter. "That is comparable to the distortion of light by a somewhat uneven glass pane," UZH Institute for Computational Science researcher Joachim Stadel said in the release.

The precise calculations that have allowed the team to create the virtual universe have also allowed them to simulate small concentrations of matter, dark matter halos, which may be the loci in which galaxies like ours form. Euclid's mission is to explore the dark side of the Universe, so part of the challenge of the virtual universe project was to accurately model galaxies only one-tenth the size of the Milky Way, within a massive volume the size of the observable Universe. The behavior observed in the virtual model will help Euclid know what to look for on its journey.

"Euclid will perform a tomographic map of our Universe, tracing back in time more than 10 billion years of evolution in the cosmos," Stadel said in the release. Researchers hope to learn more about dark energy from the Euclid data, but they are also eager to discover new areas of physics beyond the standard model, such as a new type of particle or a modified version of general relativity. Each bit of evidence along this journey may well take us one step closer to understanding the origins of our galaxy, and perhaps the entire Universe.

More:

The Largest Virtual Universe Ever Created Was Just Simulated in a Supercomputer - Futurism

A Supercomputer Has Created the Largest Virtual Universe Ever … – Atlas Obscura

This is just a portion of the unbelievable simulation. University of Zurich/Fair Use

Anyone that's played a modern open-universe video game can attest to how large simulated worlds can get, but they've got nothing on an incomprehensibly large universe simulation that was recently created by a supercomputer at the University of Zurich.

Calculated using 2 trillion digital particles of information meant to represent dark matter, the simulated universe contains some 25 billion virtual galaxies, making it the largest universal simulation ever. Visually, it looks like an almost impossibly complex and chaotic web.

According to Science Alert, the simulation, which took three years of research to develop and implement, was mapped by the Piz Daint supercomputer at the Swiss National Supercomputing Centre. It took the powerhouse computer 80 hours to complete the calculations, which is actually considered pretty fast. The resulting simulation of our universe is both the largest and most accurate view of our universe (and its history) ever created.
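
For a sense of what 2 trillion particles means in practice, here is a back-of-the-envelope estimate; the storage-per-particle figure is an assumption for illustration, not something reported about the simulation.

```python
# Back-of-the-envelope only: bytes per particle is an assumed figure.
particles = 2e12            # 2 trillion dark-matter particles
bytes_per_particle = 6 * 4  # assume 3D position + 3D velocity in single precision
galaxies = 25e9             # 25 billion virtual galaxies in the catalogue

snapshot_tb = particles * bytes_per_particle / 1e12
print(f"one snapshot of particle state: ~{snapshot_tb:.0f} TB")               # ~48 TB
print(f"average particles per virtual galaxy: ~{particles / galaxies:.0f}")   # ~80
```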

The simulation was created for use with the Euclid satellite, which is set to launch in 2020. Euclid's six-year mission will be to look for and study evidence of dark matter and dark energy, those maddeningly hard-to-find forces that are thought to make up the majority of the universe. Using the simulation as a basis of comparison, the satellite will look for variations in the observed light of the universe, hoping to detect evidence of any influence by invisible dark forces.

Before the satellite launches, researchers will also study the simulation to see what they can learn just from their calculations. The mind boggles.

Read this article:

A Supercomputer Has Created the Largest Virtual Universe Ever ... - Atlas Obscura

Japan is building the fastest supercomputer ever made – CNN

The supercomputer is expected to run at a speed of 130 petaflops, meaning it is able to perform a mind-boggling 130 quadrillion calculations per second (that's 130 million billion).

Once complete (the target date is April 2018), the AI Bridging Cloud Infrastructure (ABCI) will be the most powerful supercomputer in the world, surpassing the current champion, China's Sunway TaihuLight, currently operating at 93 petaflops.

While the ABCI will not have a mouse or screen, it's not vastly different from a personal computer -- just souped-up, a whole lot faster, and much, much bigger.

"The current supercomputer system is one million times faster than your personal computers," explains Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology.

Sekiguchi calculates that it would take 3,000 years for a personal computer to achieve what a supercomputer can do in just one day.
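
That figure follows directly from the quoted speed-up. The desktop performance assumed below is illustrative, chosen so the supercomputer is about a million times faster, as Sekiguchi describes.

```python
# Reproducing the "3,000 years" comparison; the PC speed is an illustrative assumption.
abci_flops = 130e15   # ABCI target: 130 petaflops
pc_flops = 130e9      # assume a desktop sustaining ~130 gigaflops

speedup = abci_flops / pc_flops      # one million
pc_years = speedup / 365             # one supercomputer-day expressed in PC-years
print(f"speed-up factor: {speedup:,.0f}x")
print(f"one day on ABCI is roughly {pc_years:,.0f} years on the PC")  # ~2,740, i.e. about 3,000
```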

In terms of size, Japan's supercomputer will be comparable to a parking lot with space for 30 to 40 cars.

"The supercomputer that is currently under development would take up about 1,000 square meters of floor space," says Sekiguchi.

The ABCI could help Japanese companies develop and improve driverless cars, robotics and medical diagnostics, explains Sekiguchi.

"A supercomputer is an extremely important tool for accelerating the advancement in such fields," he says.

Its supersonic speed will also help Japan develop advances in artificial intelligence technologies, such as "deep learning."

But supercomputers are to thank for smaller everyday inventions too.

"The initial design of paper diapers was actually done using a supercomputer," explains Sekiguchi. "However, mothers continue to use them without knowing that fact."

Japan's Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173m) to build the ABCI and two research centers.

"They [the government] recognize that artificial intelligence will be a key to the future, or the key to the competitiveness of the industry," says Sekiguchi.

Japanese firms often turn to the likes of Amazon, Microsoft and Google when looking to crunch big numbers. But once it's running, Japanese researchers and companies will be able to pay to use the ABCI, rather than renting cycles on public clouds like Amazon Web Services or Microsoft Azure.

Japan's K computer, which runs at just over 10 petaflops, claimed the title of world's fastest supercomputer for six months in 2011, before it was outperformed by the United States and China.

But for Sekiguchi, it is not about the race to build the fastest supercomputer.

"Before, there was a competition in the computer industry itself, however, from now on, it is going to be more about what you can do with the computers," he said.

"It is no longer about which computer becomes the best in the world, but rather, creating an environment in which these new applications can be used properly."

Excerpt from:

Japan is building the fastest supercomputer ever made - CNN

NEC Supplies LX Supercomputer to Czech Hydrometeorological Institute – HPCwire (blog)

DUSSELDORF and TOKYO, June 14, 2017 NEC Corporation (NEC; TSE: 6701) today announced that the Czech Hydrometeorological Institute (CHMI) in the Czech Republic selected NEC Deutschland GmbH to provide the next-generation supercomputer system utilizing NEC's scale-out LX series compute servers for their weather forecasts.

NEC's scale-out LX series HPC cluster will enable the Czech Hydrometeorological Service to increase the accuracy of numerical weather forecasting and related applications, namely warning systems. Weather prediction models are increasingly complex, including rainfall, temperature, wind and related variables that have to be calculated as precisely as possible several days in advance. At the same time, regional peculiarities such as orography and terrain physiography need to be considered. In addition, the public must be made aware quickly of predictions of high-impact weather events affecting daily life, including environmental risks linked to air pollution. High-performance computing (HPC) is therefore needed for running and completing weather and climate simulation jobs in time.

NEC will deliver the computational power of more than 300 nodes, connected through a high-speed Mellanox EDR InfiniBand network and containing the new Intel Xeon E5-2600 v4 product family dual socket compute nodes, with a total of over 3,500 computational cores.

The supercomputer is configured for high availability, including redundant storage and power supplies, as operation is required 24/7.

Moreover, the computational peak performance of this HPC cluster will be more than 80 times faster than the currently used system.

This HPC solution also consists of a high-performance storage solution based on the NEC LXFS-z parallel file-system appliance, with more than 1 Petabyte of storage capacity and a bandwidth performance of more than 30 Gigabytes per second (GB/s), which are required to meet the production needs of the CHMI. This scalable ZFS-based Lustre solution also provides advanced data integrity features paired with a high density and high reliability design.
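
Those storage figures imply a handy rule of thumb; the calculation below is plain arithmetic on the quoted minimums, with no assumptions beyond them.

```python
# Time to stream the entire store once at the quoted minimum bandwidth.
capacity_bytes = 1e15   # more than 1 petabyte of storage capacity
bandwidth_bps = 30e9    # more than 30 GB/s aggregate bandwidth

seconds = capacity_bytes / bandwidth_bps
print(f"full sweep of the file system: ~{seconds / 3600:.1f} hours")  # ~9.3 hours at the minimums
```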

The new system is scheduled to be put into operation in early 2018.

"Reliable HPC technology by NEC shall be important both for forecast production and innovation; after Météo-France, CHMI is the second largest contributor to the development of the ALADIN Numerical Weather Prediction System, currently used by 26 countries. Moreover, in this project, we have a specific goal to improve air quality trend forecasts in relation to meteorological conditions and the performance of air quality warning systems," said Dr. Radmila Brožková, head of the Numerical Weather Prediction Department, CHMI.

"We are very happy that CHMI has selected NEC to deliver an HPC solution for their weather and climate simulations, as NEC has a very special expertise in meteorological applications. For years, we have been successfully collaborating with meteorological institutes, and we look forward to cultivating these partnerships further," said Andreas Göttlicher, Senior Sales Manager, NEC Deutschland.

NEC has a long-term track record in climate research and weather forecasting and holds a leading position in the supercomputer market.

About the Czech Hydrometeorological Institute

The Czech Hydrometeorological Institute is the Czech Republic's central government institution for the fields of air quality, hydrology, water quality, climatology and meteorology, performing this function as an objective specialised service. It was established in 1919 as the National Meteorological Institute. The present-day organization of the Institute covers hydrology and air quality as well. The Institute is run under the authority of the Ministry of the Environment of the Czech Republic and its main task is to establish and operate national monitoring and observation networks; create and maintain databases on the condition and quality of the air and on sources of air pollution, on the condition and development of the atmosphere, and on the quantity and quality of water; and provide both climate and operating information about the condition of the atmosphere and hydrosphere, along with forecasts and warnings alerting to dangerous hydrometeorological phenomena.

About NEC Corporation

NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross-utilize the company's experience and global resources, NEC's advanced technologies meet the complex and ever-changing needs of its customers. NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society. For more information, visit NEC at http://www.nec.com.

Source: NEC

View post:

NEC Supplies LX Supercomputer to Czech Hydrometeorological Institute - HPCwire (blog)

Premier League 2017/18: Super Computer predicts table after five games of new season – talkSPORT.com

The Premier League fixtures for 2017/18 have been announced and fans cannot wait to get the season started.

Kick-off may still be around two months away, but it does not stop supporters from dreaming.

READ MORE: Premier League 2017-18 fixtures in full: Every team, every game

The first game you look for is usually the season opener, followed by the final match, as well as the derby clashes home and away, plus meetings with the newly-promoted sides.

Another thing is the first month or so of fixtures - how your side's start could determine the way the whole season pans out, whether it could see them pushing for the title, a battle for places in the middle or scraping for points and playing catch up near the bottom.

Well, no fear - talkSPORT has done the hard work for you.

We have fed the data into the super computer, assessing the opening five rounds of Premier League fixtures, with predicted rankings.

Bear in mind plenty can change between now and the start of the season, as the transfer window opens and managers sort out squads.

The standings above have been collated just for fun. It's interesting to speculate but, as we all know, football has a funny way of turning expectations on their head.


See original here:

Premier League 2017/18: Super Computer predicts table after five games of new season - talkSPORT.com

Los Alamos lets users customize the supercomputer software stack – GCN.com

Los Alamos lets users customize the supercomputer software stack

For all their power, supercomputers require specialized software and applications, which makes it difficult for users running big data analyses (which come with their own frameworks and dependencies) to take advantage of the hardware.

To make it easier for researchers working with big data to use supercomputers, developers at Los Alamos National Laboratory have created a program called Charliecloud that uses a container approach to let users package their own software stacks. Those tailored stacks then run in isolation from the host operating system, according to Reid Priedhorsky, lead developer with the High Performance Computing Division at Los Alamos.

Charliecloud lets users easily run crazy new things on our supercomputers, he said.

Researchers install the open-source Docker product on their own system and customize the software stack as they wish. They then import the image to the designated supercomputer and execute their application with Charliecloud, which is independent of Docker. This maintains a convenience bubble of administrative freedom while protecting the security of the larger system, Los Alamos officials said.
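The workflow described above is simple enough to capture in a short script. The sketch below is a minimal, hypothetical Python illustration of the three steps: build an image with Docker on a workstation where the researcher has administrative rights, flatten it to a tarball and copy it to the cluster, then unpack and run it there without root privileges. The ch-docker2tar, ch-tar2dir and ch-run helper names follow the pattern Charliecloud used around this time, but the exact command names, flags, hostnames and paths shown here are assumptions and should be checked against the installed version's documentation.

import subprocess

def sh(*cmd):
    # Echo and execute a command, raising an exception if it exits non-zero.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_on_workstation(tag="mystack", outdir="/tmp"):
    # Build a custom software stack from a Dockerfile in the current directory,
    # then flatten the image into a tarball that can be copied to the cluster.
    sh("docker", "build", "-t", tag, ".")
    sh("ch-docker2tar", tag, outdir)                                # helper name assumed
    sh("scp", f"{outdir}/{tag}.tar.gz", "user@cluster:/scratch/")   # host name illustrative

def run_on_cluster(tag="mystack"):
    # On the supercomputer, no root access and no Docker daemon are needed:
    # unpack the image into a plain directory tree, then run the application
    # inside it, isolated from the host operating system's software stack.
    sh("ch-tar2dir", f"/scratch/{tag}.tar.gz", "/scratch/images")
    sh("ch-run", f"/scratch/images/{tag}", "--", "python3", "analysis.py")

The design point the Los Alamos team emphasises is visible in this split: the only privileged tooling (Docker) runs on the researcher's own machine, while the cluster side works with an ordinary directory tree, which keeps the administrative burden low and the host system's security intact.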

This is the easiest container solution for both system administrators and users to deal with, said Tim Randles, co-developer of Charliecloud, also of the High Performance Computing Division. It's not rocket science; it's a matter of putting the pieces together in the right way. Once we did that, a simple and straightforward solution fell right out.

Charliecloud is very small, only 800 lines of code, and is currently being used on two Los Alamos supercomputers, Woodchuck and Darwin.

Not only is Charliecloud efficient in compute time, it's efficient in human time, Priedhorsky said. What costs the most money is people thinking and doing. So we developed simple yet functional software that's easy to understand and costs less to maintain.

More information on Charliecloud is available here.

Go here to see the original:

Los Alamos lets users customize the supercomputer software stack - GCN.com

RIKEN Posts Extensive Study of Global Supercomputer Plans in Time for ISC 2017 – HPCwire (blog)

On the eve of ISC 2017 and the next release of the Top500 list, Japan's RIKEN Advanced Institute for Computational Science has posted an extensive study by IDC that compares and contrasts international efforts on pre-exascale and early exascale plans. The study, Analysis of the Characteristics and Development Trends of the Next-Generation of Supercomputers in Foreign Countries, was contracted by RIKEN, completed last December, and posted last week on the RIKEN website.

Much of the material is familiar to exascale race watchers, but gathering it all in one place is fascinating and useful. RIKEN has made the report freely available and downloadable as a PDF from its website. It's worth noting that the report's authors were formerly the core team of IDC's HPC research group and are now members of Hyperion Research, which was spun out of IDC this year. Despite the study's length (70-plus pages) it is a quick read, and the tables are well worth scanning. Much of the focus is on the next round of leadership-class supercomputers (pre-exascale machines), about which more is known, but there is also considerable discussion of exascale technology.

For supercomputer junkies, there's a table for nearly every aspect. Below are two samples: 1) the systems covered in this report and their current/planned performance, and 2) the memory systems either in use or planned.

There are many more tables covering topics such as architecture and node design, MTBF, interconnects, compilers and middleware, and more. A complete list of the 30 tables is at the end of the article and is a good surrogate for the report's scope.

Here's the top-line summary in an excerpt from the report: Looking at the strengths and weaknesses in exascale plans and capabilities of different countries:

As noted earlier, while the report breaks little new ground, its comprehensive view of these 15 leading supercomputer systems and its side-by-side comparisons make it a useful resource. The list of tables is below, along with a link to the report.

Link to report: http://www.aics.riken.jp/aicssite/wp-content/uploads/2017/05/Analysis-of-the-Characteristics-and-Development-Trends.pdf

List of Tables

Table 1 The Supercomputers Evaluated in This Study

Table 2 System Attributes: Planned Performance

Table 3 System Attributes: Architecture and Node Design

Table 4 System Attributes: Power

Table 5 System Attributes: MTBF Rates

Table 6 System Attributes: KPIs (key performance indicators)

Table 7 Comparison of System Prices

Table 8 Comparison of System Prices: Who's Paying for It?

Table 9 Ease-of-Use: Planned New Features

Table 10 Ease-of-Use: Porting/Running of New Codes on a New Computer

Table 11 Ease-of-Use: Missing Items that Reduce Ease-of-Use

Table 12 Ease-of-Use: Overall Ability to Run Leadership Class Problems

Table 13 Hardware Attributes: Processors

Table 14 Hardware Attributes: Memory Systems

Table 15 Hardware Attributes: Interconnects

Table 16 Hardware Attributes: Storage

Table 17 Hardware Attributes: Cooling

Table 18 Hardware Attributes: Special Hardware

Table 19 Hardware Attributes: Estimated System Utilization

Table 20 Software Attributes: OS and Special Software

Table 21 Software Attributes: File Systems

Table 22 Software Attributes: Compilers and Middleware

Table 23 Software Attributes: Other Software

Table 24 R&D Plans

Table 25 R&D Plans: Partnerships

Table 27 Additional Comments & Observations

Table 28 IDC Assessment of the Major Exascale Providers: USA

Table 29 IDC Assessment of the Major Exascale Providers: Europe/EMEA

Table 30 Assessment of Exascale Providers: China

* No table 26

See more here:

RIKEN Posts Extensive Study of Global Supercomputer Plans in Time for ISC 2017 - HPCwire (blog)

Make a Simulated Universe with Only a Supercomputer and 2 … – Edgy Labs (blog)

Using a supercomputer and three years of work, Swiss astrophysicists from the University of Zurich were able to virtually model the formation of the Universe.

The idea that we already live inside a simulation or a holographic projection is the subject of a serious scientific debate. Figures as noteworthy as Elon Musk put stock in the idea of us living inside a computer simulation.

A panel of astrophysicists, theoretical physicists, and philosophers reflected on the issue at last year's Isaac Asimov Memorial Debate, held at the American Museum of Natural History in New York.

Neil deGrasse Tyson, the moderator of the debate, estimates that the chances of human existence being a program on a hard disk are one in two.

Until we come to a definitive conclusion on whether we live in a simulation built for whatever purpose or not, scientists from Switzerland have contributed to the other end of the argument: they created a simulated universe with a supercomputer.

A group of astrophysicists from the University of Zurich spent three years designing the largest computer-simulated universe to date. The simulation gathers a huge catalog of 25 billion galaxies made of 2 trillion digital particles.

To generate this whopping number of digital particles and arrange them into galaxies, the team developed a special code, named PKDGRAV3, to describe very precisely the dynamics of dark matter as well as the formation of large-scale structures in the Universe.

To run the code, which was designed to make optimal use of memory and modern supercomputer architectures, the researchers used Piz Daint, a supercomputer at the Swiss National Supercomputing Center; the run required over 4,000 Nvidia Tesla P100 GPUs for 80 hours.
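PKDGRAV3 itself is a highly optimised production code, but the underlying task it performs, advancing a very large set of gravitating particles through time, can be sketched with a deliberately tiny example. The snippet below is a toy direct-summation N-body step in Python with NumPy, not the authors' method: it uses hypothetical parameter values, an O(N^2) pairwise force calculation and a simple kick-drift update, whereas production codes rely on tree and fast-multipole approximations distributed across thousands of GPUs to handle trillions of particles.

import numpy as np

def gravity_step(pos, vel, mass, dt, softening=1e-2, G=1.0):
    # Advance an N-body system by one kick-drift step using direct summation.
    # pos, vel: (N, 3) arrays; mass: (N,) array. Arbitrary (toy) units.
    diff = pos[None, :, :] - pos[:, None, :]            # r_ij = pos_j - pos_i
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2   # softened |r_ij|^2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                       # no self-interaction
    # a_i = G * sum_j m_j * r_ij / |r_ij|^3
    acc = G * (diff * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt                                # kick
    pos = pos + vel * dt                                # drift
    return pos, vel

# Tiny demo: 1,000 random particles; real runs use trillions and tree methods.
rng = np.random.default_rng(seed=0)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))
vel = np.zeros_like(pos)
mass = np.full(1000, 1.0 / 1000)
for _ in range(10):
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
print("centre-of-mass drift:", (mass[:, None] * pos).sum(axis=0))

Scaling this kind of calculation to the 2 trillion particles and roughly 80 hours on over 4,000 GPUs reported for the Zurich run is exactly what the tree algorithms, careful memory layout and GPU acceleration in codes like PKDGRAV3 are designed to make possible.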

Researchers published their results in the journal Computational Astrophysics and Cosmology.

The simulation will be used during the Euclid mission to calibrate the experiments onboard the Euclid Telescope, scheduled to be launched in 2020 by the European Space Agency (ESA).

The Euclid mission will tackle the dark side of the Universe, i.e., dark matter and dark energy, one of the most enduring enigmas of modern science.

In the space of only a few decades, humanity has gone from an elementary knowledge of the Universe to simulating a large part of it.

With supercomputers getting exponentially more powerful, how long will it be before we can recreate the entire Universe, and the life within it, in a simulation? If we are indeed close to that achievement, what does it mean for our own reality?

We take reality for granted.

We take as self-evident that what we perceive as real is the reflection of an objective, physical reality. But perhaps our universe, from the atoms and molecules that make up our bodies to the largest galaxies, is nothing but a sophisticated simulation running on a supercomputer, one in which we, the Sims, are building our own simulations.

See the original post here:

Make a Simulated Universe with Only a Supercomputer and 2 ... - Edgy Labs (blog)