Scientists Have Created The Largest Ever Virtual Universe Inside a Supercomputer – ScienceAlert

As well as studying what we can observe today, scientists rely on simulations to understand more about the past and future of the Universe, and now we have a new record for the largest Universe simulation ever computed.

A giant supercomputer has been used to model some 25 billion virtual galaxies, put together from calculations involving around 2 trillion digital particles. And it's holding a brain-popping amount of data.

The simulation, developed by astrophysicists at the University of Zurich in Switzerland, is going to be used to calibrate experiments on board the Euclid satellite due to launch in 2020 and tasked with investigating the mysteries of dark matter and dark energy.

"The nature of dark energy remains one of the main unsolved puzzles in modern science," explains one of the team, Romain Teyssier.

Part of the simulation, with dark matter halos shown as yellow clumps. Image: Joachim Stadel, UZH

Euclid won't be able to see dark matter directly (it's notoriously elusive), but it will be able to observe dark matter's effects on the rest of the Universe, and the calculations from our virtual model will help the satellite to know what it should be looking for.

"The more accurate these theoretical predictions are, the more efficient the future large scale surveys will be in solving the mysteries of the dark Universe," write the researchers.

To achieve the best possible accuracy for the simulation, scientists spent three years developing a new type of code called PKDGRAV3, specifically designed to run smoothly on the architecture of the latest supercomputers.

By carefully optimising the algorithms used to simulate the Universe, the team was able to cut down on the time needed to simulate some 13.8 billion years of deep space history since the Big Bang.

When tested on the Piz Daint supercomputer at the Swiss National Supercomputing Centre, the code came up with its record-breaking simulation in just 80 hours.

The calculations it runs analyse the way the dark matter fluid evolves under its own gravity, forming what are known as dark matter halos: small concentrations of matter that astronomers think surround galaxies like our own Milky Way.
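
To make the idea of a dark matter "fluid" evolving under its own gravity concrete, here is a toy direct-summation N-body step in Python. It is only a conceptual sketch with made-up units and a handful of particles; PKDGRAV3 itself relies on far more sophisticated, highly optimised algorithms to handle its 2 trillion particles.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, soft=1e-2):
    """One step of a toy direct-summation N-body system.

    pos, vel: (N, 3) arrays; mass: (N,) array. The softening term avoids
    singular forces when particles get close. This O(N^2) loop is purely
    illustrative; production codes use tree or multipole methods.
    """
    diff = pos[None, :, :] - pos[:, None, :]          # pairwise separations (N, N, 3)
    dist2 = (diff ** 2).sum(-1) + soft ** 2           # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                     # no self-force
    acc = G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Tiny demo: 1,000 random particles (the real simulation used 2 trillion).
rng = np.random.default_rng(0)
p, v, m = rng.random((1000, 3)), np.zeros((1000, 3)), np.ones(1000)
p, v = gravity_step(p, v, m, dt=1e-3)
```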

As Euclid makes its way through space in the next decade, it will capture the light from billions of galaxies, but what it will really be looking for is slight changes to that light caused by the invisible mass that is dark matter.

It's like looking for distortions of light through an uneven glass pane, say the researchers.

Based on observations of the way the expansion of the Universe is accelerating, scientists think dark matter and dark energy make up around 95 percent of our Universe, with the other 5 percent what we can actually see.

When it gets into orbit, Euclid will be taking a closer look, and the new simulation code from Switzerland will help it make sense of what it's measuring.

Thanks to the advances we're seeing in how far into space we can see and how quickly we can process the data we get back, the net is closing in on dark matter.

"Euclid will perform a tomographic map of our Universe, tracing back in time more than 10 billion year[s] of evolution in the cosmos," says one of the team, Joachim Stadel.

The research has been published in Computational Astrophysics and Cosmology.

See original here:

Scientists Have Created The Largest Ever Virtual Universe Inside a Supercomputer - ScienceAlert

Scientists use a supercomputer to build a simulation of the known … – Digital Trends


Read more:

Scientists use a supercomputer to build a simulation of the known ... - Digital Trends

Early Benchmarks on Argonne’s New Knights Landing Supercomputer – The Next Platform

June 12, 2017 Nicole Hemsoth

We are heading into International Supercomputing Conference week (ISC) and as such, there are several new items of interest from the HPC side of the house.

As far as supercomputer architectures go for mid-2017, we can expect to see a lot of new machines with Intel's Knights Landing architecture, perhaps a scattered few finally adding Nvidia K80 GPUs as an upgrade from older generation accelerators (for those who are not holding out for Volta with NVLink, a la the Summit supercomputer), and of course, it all remains to be seen what happens with the Tianhe-2 and Sunway machines in China in terms of new development.

While we are not expecting any major new architectural surprise shakeups on the Top 500 list when it is announced next Monday, there is progress for some of the pre-exascale machines being installed and put into early production, including the Cori supercomputer at NERSC (more on that later today) and the Theta system at Argonne National Lab. Both of these machines sport an Intel Knights Landing (and Haswell, in Cori's case) base with the Cray Aries interconnect via the XC40 supercomputer architecture, and both are reporting early results with key applications and how they might help centers adapt to the much larger systems.

As we pointed out last week, there are some questions about the future of the Aurora supercomputer at Argonne, an Intel and Cray based system that has been expected to arrive at the lab in 2018 sporting Intel's Knights Hill and Cray architecture. However, work has been progressing on one of the systems designed to prepare users and codes for such a scale shift: the stepping-stone Theta supercomputer, which will introduce the lab to the Cray XC40 architecture and help users make the shift from an IBM systems focus to a completely new approach altogether, something we talked about with one of the lab's leads when Aurora was announced.

Even though part of its purpose is to provide an on-ramp to the next-generation Intel architecture (Knights Hill) and Cray architecture after so many years as a BlueGene-centric lab, Theta is still very powerful. In terms of capability, it is very similar to Argonne's current leadership-class supercomputer, the 10 peak petaflop Mira supercomputer, an IBM BlueGene machine that still holds steady at #9 on the Top 500 list alongside a few other IBM BlueGene systems that will be retired in the next couple of years, bringing an end to that architectural era. The Knights Landing and XC40 (Cray Aries network-based) combination will deliver (along peak Linpack benchmark performance lines) in 3,624 nodes what takes Mira 49,152 nodes (although the architecture differences don't allow for a true apples-to-apples comparison).

With Theta up and running now (presumably in time for the upcoming Top 500 ranking, although some labs eschew this benchmark because of its lack of relevance to real-world HPC applications), researchers are running microbenchmarks to evaluate performance. On the list for a recent report were DGEMM for peak floating point performance metrics and, for more component-centric evaluations, LAMMPS, MILC, and Nekbone. Whether the team ran the Top 500 Linpack benchmark to obtain peak theoretical performance, we won't know until next week.

For DGEMM and the evaluation of the peak floating point performance of the KNL core and nodes, the team found that they were able to achieve 86% of the peak, an impressive number considering each node was expected to reach 2.25 teraflops (35.2 Gflops per core).
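
Those per-core and per-node figures hang together arithmetically. The short check below reproduces them; note that the 64-core count and the roughly 1.1 GHz AVX-512 clock are assumptions consistent with the quoted numbers, not values stated in the Argonne report.

```python
# Back-of-the-envelope check of the DGEMM numbers quoted above.
# Assumptions (not from the article): 64 cores per KNL node, ~1.1 GHz
# AVX-512 clock, and 2 vector FMA units per core on 8 double-precision
# lanes, i.e. 32 flops per cycle per core.
flops_per_cycle = 2 * 8 * 2          # 2 VPUs x 8 DP lanes x 2 flops per FMA
clock_ghz = 1.1
cores = 64

per_core_gflops = flops_per_cycle * clock_ghz        # 35.2 Gflops/core
node_peak_tflops = per_core_gflops * cores / 1000.0  # ~2.25 Tflops/node
dgemm_tflops = 0.86 * node_peak_tflops               # ~1.94 Tflops achieved

print(f"{per_core_gflops:.1f} Gflops/core, "
      f"{node_peak_tflops:.2f} Tflops peak, "
      f"{dgemm_tflops:.2f} Tflops at 86% efficiency")
```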

The research team adds that while the KNL core has a theoretical peak throughput of 2 instructions per clock cycle, actual throughput can be limited by factors such as instruction width and power constraints. They explain that power measurements show better computational efficiency when using fewer hyperthreads. OS noise and shared L2 cache contention have been identified as the sources of core-to-core variability on the node, but the team notes that Cray's core specialization can target the noise issues that have an impact on the timing of microkernels.

Theta results on the DGEMM matrix multiplication kernel: this benchmark achieves over 1.9 teraflops on a Theta node, or 86% of peak, for a relatively small matrix size. The team points out that on this compute-intensive benchmark, running more than one thread per core does not improve the performance. Further, using more than one hyperthread may be needed to issue the core limit of two instructions per cycle. While this is not the case with the DGEMM kernel, using more than one hyperthread can in some cases reduce performance due to threads sharing resources such as L1 and L2 caches and instruction re-order buffers.

In terms of other trouble spots, it is actually OpenMP that introduced some of the latencies. The overhead of the Barrier and Reduce constructs was found to be related to the latency of main memory access, due to the lack of a shared last-level cache. A simple performance model was developed to quantify the overhead of OpenMP pragmas, which scales as the square root of the thread count, Argonne researchers note.
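
Written out, that model is simply overhead(T) ≈ a + b·√T for T threads. The tiny sketch below only illustrates its shape; the coefficients are placeholders, not the fitted values from the Argonne study.

```python
import math

def omp_overhead_us(threads, a=1.0, b=0.5):
    """Toy version of the quoted model: OpenMP pragma overhead grows with
    the square root of the thread count. a and b are hypothetical fit
    coefficients in microseconds, not the report's measured values."""
    return a + b * math.sqrt(threads)

for t in (1, 16, 64, 256):
    print(f"{t:3d} threads -> ~{omp_overhead_us(t):.2f} us per barrier")
```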

The team also ran the STREAM Triad benchmark to evaluate memory bandwidth, and found considerable variation in memory bandwidth between the flat and cache memory mode configurations.

Included is the power consumption and efficiency for STREAM on one node in flat-quadrant mode. The in-package memory (IPM) and DDR4 are evaluated separately; for both tests, 15 GB of memory was used across 100 iterations. The thing to note here is that the IPM gets a 4.3X gain in memory bandwidth power efficiency and a 1.2X increase in overall power consumption.
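
For readers who have not run it, STREAM Triad is a single fused multiply-add swept over arrays too large for cache, and the reported bandwidth is just bytes moved divided by time. Below is a minimal NumPy rendition of the idea, not the official C benchmark used on Theta, and NumPy's temporaries mean it understates the achievable figure.

```python
import time
import numpy as np

n = 50_000_000                       # ~1.2 GB across three arrays, well beyond cache
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c                # the Triad kernel: a = b + q*c
dt = time.perf_counter() - t0

# STREAM's accounting: Triad touches three 8-byte arrays (two reads, one write).
gbytes = 3 * n * 8 / 1e9
print(f"Triad bandwidth: {gbytes / dt:.1f} GB/s (rough, NumPy adds extra traffic)")
```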

LAMMPS, MILC, and Nekbone all showed positive scaling characteristics (for strong and weak scaling) on Theta and were comparable to what teams were able to achieve on Mira, which is known for scalability via the BlueGene architecture. In short, so far, KNL is delivering on its promises in the wild; it will be interesting to see scaling, performance, and efficiency on real-world applications as these roll out by SC17 for Gordon Bell, for instance.

We can expect a number of stories leading into ISC around the early benchmark results and production tales from other supercomputers with similar architectures (Trinity, Cori, Stampede2, etc) and will write these up as we get them. The full benchmark results and details from Theta can be found here.

Categories: HPC, ISC17

Tags: Argonne, Aurora, ISC17, Knights Landing, Theta


Read more here:

Early Benchmarks on Argonne's New Knights Landing Supercomputer - The Next Platform

Even An AI Supercomputer Found This College Entrance Exam Tough – IFLScience

If you are getting stressed about upcoming exams then you're not alone; so is this artificially intelligent (AI) machine.

Last week, a top AI system was pitted against nearly 10 million students to face the maths paper for a much-feared Chinese university entrance exam, known as gaokao. Unfortunately for robotkind, its results were pretty mediocre.

The computer, a humming tower of eleven servers with no Internet connection called AI-MATHS, scored 105 points out of 150. On another version of the test, it scored 100. Although that beats the passing score of 90, humanities students scored an average of 109 on last year's exam.

That said, the machine finished the exam in 10 minutes, while humans are given two hours to complete it.

Scientists recently said artificial intelligence will be able to beat humans at everything by 2060, whether that's quizzes, exams, chess, or the game Go. In response to the study, Elon Musk then tweeted that he believes AI superiority will actually arrive earlier, around 2030 or 2040.

That doesn't mean this AI is slow off the mark, however. The computer itself would be able to deal with raw numbers with no problem. Instead, the purpose of this task was to understand the examination in terms of language, something that computers are not so sharp with at the moment.

"This is not a make-or-break test for a robot. The aim is to train artificial intelligence to learn the way humans reason and deal with numbers," said Lin Hui, CEO of Chengdu Zhunxingyunxue Technology, who developed the AI, according to Chinese news agencyXinhua.

For example, the robot had a hard time understanding the words 'students' and 'teachers' on the test and failed to understand the question, so it scored zero for that question.

Gaokao is infamously rigorous and renowned for being overwhelmingly stressful for the young people who take it. Made up of four three-hour papers in Chinese, English, mathematics, and a choice of either sciences or humanities, the series of tests relies on an extensive range of knowledge, problem-solving skills, and creative thinking. The mathematics exam itself is said to be about as tough as a college exam at the same level in the West.

Nevertheless, the researchers continue to work with China's Ministry of Science and Technology and remain optimistic their AI will improve in the exams in no time at all.

"I hope next year the machine can improve its performance on logical reasoning and computer algorithms and score over 130," Lin added.

Go here to read the rest:

Even An AI Supercomputer Found This College Entrance Exam Tough - IFLScience

Tracking the World’s Top Storage Systems – TOP500 News

When assessing the fastest supercomputers in the world, system performance is king, while the I/O componentry that feeds these computational beasts often escapes notice. But a small group of storage devotees working on a project at the Virtual Institute for I/O (VI4IO) want to change that.

VI4IO is a non-profit organization whose mission is to build a global community of I/O enthusiasts devoted to lifting the visibility of high-performance storage and to provide resources for both users and buyers. It does this through outreach and information exchanges at conferences like ISC and SC, and maintains a website to help spread the word.

An important element of VI4IO's mission now involves the creation of a High Performance Storage List (HPSL), also known as the IO-500. Like its TOP500 counterpart, the list aims to track the top systems in the world, but in this case from the perspective of storage. The TOP500, you'll note, collects no information on this critical subsystem.

Essentially the IO-500 provides a collection of I/O metrics and other data associated with some of the largest and fastest storage systems on the planet. The effort is being spearheaded by Julian Kunkel, a researcher at DKRZ (the German Climate Computing Center), along with Jay Lofstead, at Sandia National Labs, and John Bent, from Seagate Government Solutions.

Both performance and capacity data are captured, as well as other relevant information. Since the work to compile this data began just over a year ago, the list today contains a mere 33 entries. The eventual goal is to provide a knowledge base for 500 top storage systems and track them over time to provide a historical reference, as has been done in the TOP500 list.

Kunkel says the motivation to compile the list came from the desire to provide a central data repository for these big storage systems: information that is now spread across hundreds of websites in different formats, languages, and levels of detail. Another incentive for the list was to create some standard I/O benchmarks that would be widely accepted by storage makers and users. According to Kunkel, a lot of people are doing great work in measuring and analyzing storage systems, but they tend to be isolated and work off their own metrics.

Although it's loosely based on the TOP500 concept, the IO-500 data is compiled quite differently. For starters, there is no formal submission process. Individuals familiar with the storage at their own sites can input and edit the metrics and other data via a wiki website. So rather than a new list getting released every six months, the list is being continuously updated.

Such data, by definition, is difficult to verify, but the list makers encourage submitters to include references to web pages or public presentations to back up the credibility of their submission. "The integrity of the people is the key," admits Kunkel.

The current list is very much a work in progress. Much of it has been compiled by Kunkel himself, along with some graduate student help, based on online material or correspondence with system owners. Even among the 33 current entries, none has a complete profile. Part of this is because many of these storage systems aren't documented in much detail. But most of the missing data can be attributed to the fact that the list allows for just about any attribute you can think of, from metadata rate and cache size to procurement costs and annual TCO.

Mandatory data is limited to things like the name of the institution, the name of the supercomputer, the storage capacity, and the storage system name (actually the file system name, since, unlike the supercomputers themselves, storage systems are usually unnamed).

One of the unique strengths of the IO-500 list is that it's interactive. You can click on the site, the system, or the file system to reveal more detailed aspects of those areas. These secondary pages are not just a collection of metrics, but can also provide explanations of how those metrics were derived. You can also select non-mandatory data fields to be included in the list, like sustained read performance and cache size.

What is especially useful is the ability to re-sort the list by clicking on any one of the metrics-based fields: storage capacity (the default) or any of the non-mandatory metrics selected. Even if you're not interested in storage per se, you can re-sort the list based on metrics like system peak performance or memory capacity.

There's also a derived metrics page where you generate correlations between different storage aspects or other elements of the system. So, for example, you could compute things like the ratio of storage to memory capacity or the I/O performance per drive, and then sort on that metric. There's a wealth of possibilities for different types of analysis.
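
Such a derived metric is nothing more than arithmetic over two submitted fields followed by a re-sort. The sketch below shows the idea in Python; the field names and numbers are hypothetical, not the HPSL's actual schema or entries.

```python
# Sketch of a "derived metric" in the spirit of the HPSL/IO-500 pages.
# The dictionary keys and values are made up for illustration only.
systems = [
    {"site": "Example Lab A", "storage_pb": 50.0, "memory_tb": 800.0, "drives": 4000},
    {"site": "Example Lab B", "storage_pb": 20.0, "memory_tb": 600.0, "drives": 1500},
]

for s in systems:
    # Ratio of storage capacity to memory capacity (both expressed in TB).
    s["storage_to_memory"] = (s["storage_pb"] * 1000) / s["memory_tb"]
    # Capacity per drive, another simple derived quantity.
    s["tb_per_drive"] = s["storage_pb"] * 1000 / s["drives"]

# Re-sort the "list" on the derived metric, as the website allows.
for s in sorted(systems, key=lambda x: x["storage_to_memory"], reverse=True):
    print(s["site"], round(s["storage_to_memory"], 1), round(s["tb_per_drive"], 1))
```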

The current weakness of the list, besides the paucity of entries, is the lack of standard metrics. Unlike the TOP500 with its High Performance Linpack (HPL), there is no standard set of storage performance benchmarks mandated by the list. Therefore, the various submitted metrics, like peak I/O or sustained reads and writes, may not be directly comparable.

To rectify that, the IO-500 team have come up with three performance benchmarks: a metadata or small object benchmark, an optimistic I/O benchmark, and a pessimistic I/O benchmark. The pessimistic benchmark is still under development.

At some point, they would like to distill all this work into a single metric, which would likely combine the three benchmarks, weighted in some manner that made sense. That would provide a standard performance measurement on which to rank storage systems, analogous to HPL in the TOP500 rankings.

The immediate challenge, though, is to get more people involved in submitting storage entries, since the principal focus right now is to collect enough data to give the list the critical mass to make it a worthwhile resource. That's one reason why Kunkel and his two IO-500 cohorts are hosting a BoF session at the ISC High Performance conference next week. Also to be discussed will be how the standard benchmark efforts are progressing, although according to Kunkel, there's no rush to force something on the community.

"There have been previous efforts at developing an I/O benchmark, and they all failed," he says. "And that's why we are going a bit slower. We don't want this one to fail."

Go here to read the rest:

Tracking the World's Top Storage Systems - TOP500 News

Revolutionary Supercomputer Code Simulates Entire Cosmos –"25 … – The Daily Galaxy (blog)

Over a period of three years, a group of astrophysicists from the University of Zurich has developed and optimised a revolutionary code to describe with unprecedented accuracy the dynamics of dark matter and the formation of large-scale structures in the Universe. From the Euclid data, researchers will obtain new information on the nature of this mysterious dark energy, but also hope to discover new physics beyond the standard model, such as a modified version of general relativity or a new type of particle.

The researchers have simulated the formation of our entire Universe with a large supercomputer. A gigantic catalogue of about 25 billion virtual galaxies has been generated from 2 trillion digital particles. This catalogue is being used to calibrate the experiments on board the Euclid satellite, which will be launched in 2020 with the objective of investigating the nature of dark matter and dark energy.

The image below shows a section of the virtual universe, a billion light years across, showing how dark matter is distributed in space, with dark matter halos as the yellow clumps, interconnected by dark filaments. Cosmic voids, shown as the white areas, are the lowest-density regions in the Universe. (Joachim Stadel, UZH)


As Joachim Stadel, Douglas Potter and Romain Teyssier report in their recently published paper, the code (called PKDGRAV3) has been designed to make optimal use of the available memory and processing power of modern supercomputing architectures, such as the "Piz Daint" supercomputer of the Swiss National Computing Center (CSCS). The code was executed on this world-leading machine for only 80 hours, and generated a virtual universe of two trillion (i.e., two thousand billion or 2 × 10^12) macro-particles representing the dark matter fluid, from which a catalogue of 25 billion virtual galaxies was extracted.

Thanks to the high precision of their calculation, featuring a dark matter fluid evolving under its own gravity, the researchers have simulated the formation of small concentrations of matter, called dark matter halos, in which we believe galaxies like the Milky Way form. The challenge of this simulation was to model galaxies as small as one tenth of the Milky Way, in a volume as large as our entire observable Universe. This was the requirement set by the European Euclid mission, whose main objective is to explore the dark side of the Universe.

Indeed, about 95 percent of the Universe is dark: the cosmos consists of 23 percent dark matter and 72 percent dark energy. "The nature of dark energy remains one of the main unsolved puzzles in modern science," says Romain Teyssier, UZH professor for computational astrophysics. It is a puzzle that can be cracked only through indirect observation: when the Euclid satellite captures the light of billions of galaxies in large areas of the sky, astronomers will measure very subtle distortions that arise from the deflection of light of these background galaxies by a foreground, invisible distribution of mass: dark matter. "That is comparable to the distortion of light by a somewhat uneven glass pane," says Joachim Stadel from the Institute for Computational Science of the UZH.

This new virtual galaxy catalogue will help optimize the observational strategy of the Euclid experiment and minimize various sources of error, before the satellite embarks on its six-year data collecting mission in 2020. "Euclid will perform a tomographic map of our Universe, tracing back in time more than 10 billion years of evolution in the cosmos," Stadel says.

The Daily Galaxy via University of Zurich

Excerpt from:

Revolutionary Supercomputer Code Simulates Entire Cosmos --"25 ... - The Daily Galaxy (blog)

The Week in Photos: From Flooding in Tunisia to a Supercomputer in Germany – Pacific Standard


A Pakistani Muslim rests at a mosque during the holy month of Ramadan in Karachi on June 9th, 2017. (Photo: Asif Hassan/AFP/Getty Images). Indian Hindu devotees participate in a ceremony to mark Vat Savitri Purnima on the outskirts of Ahmedabad on ...

Continued here:

The Week in Photos: From Flooding in Tunisia to a Supercomputer in Germany - Pacific Standard

Supercomputer Organized by Network Mining (SONM) announces ICO Platform – The Merkle

New York, June 9, 2017 SONM (Supercomputer Organized by Network Mining), the universal fog supercomputer powered by blockchain technology, has announced its Initial Coin Offering, commencing June 15, 2017 and concluding July 15, 2017. By modifying the algorithms behind conventional cloud and grid networks, SONM will be the first distributed supercomputer for general purposes, from site hosting to DNA analysis to scientific calculations. SONM will issue tokens of the same name on the Ethereum blockchain.

SONM CEO Sergey Ponomarev said: "Market demand for computing power is rising exponentially across a range of industries. SONM's unique offering directly responds to this demand by making use of idle standing computing power to solve non-deterministic tasks like hosting websites, the backend for mobile apps, and massively multiplayer online gaming (MMO instances). Any miner with a smartphone or supercomputer cluster can become a part of SONM's fog network and generate computing power to be used by others."

"Fog computing, with its greater innate potential, is widening the scope of what cloud computing can achieve by bringing solo miners, private datacenters, public clouds, and IoT into the network. The SONM platform, based on BTSync's data transmission software and Yandex Technologies' open source code for decentralized computation, is an integration of the most advanced solutions in the field of fog and cloud computation," he added.

By hybridizing fog computing with an open-source PaaS technology, the SONM platform will offer a full spectrum of services, including app development, scientific calculations, website hosting, video game server hosting, machine learning for neural networks, video and CGI rendering, augmented reality location-based games, and video streaming services.

SONM also offers an opportunity for miners to earn tokens efficiently by serving calculations for everyone in the network. Any smart device located anywhere in the world can take advantage by joining the fog network and selling computing power peer-to-peer via the SONM Application Pool.

Sergey Ponomarev said: "In recent years, being a part of a pool has been the only way to profit from mining, but this method often doesn't even cover electricity costs associated with Proof-of-Work (PoW) mining. SONM will reduce miners' costs by eliminating the need for PoW mining and by suggesting the most profitable applications and tasks for each miner's hardware."

SONM tokens will be used by buyers of computing power to pay for the calculations executed via the smart contracts-based SONM platform. Tokens will be created exclusively during the crowdfunding period. No token creation, minting or mining will be available after the crowdfunding period. The funding cap will be confirmed according to the ETH/USD conversion rate before the ICO begins. A progressive bonus structure will exist for the first 70% of tokens sold in the ICO.

The funds raised in the crowdsale will be distributed as follows: 33% is reserved for marketing promotion, market growth, community, and expansion; 30% for research and development including team expansion, and advisers; 20% for the original SONM team; 7% for complementary technologies; 6% for technology infrastructure; and the remaining 4% for other indirect costs such as legal and office expenses.

Sergey Ponomarev said: "The SONM platform can also be used for providing decentralized services in a heterogeneous computing environment, which is expected to be a cornerstone of the future of computing. Given SONM's reputation system and intelligent agents, we expect SONM to be the smartest, cheapest, and largest decentralized computing system with strong rules regarding morality and loyalty."

Investors can participate in the SONM ICO using BTC, ETH and other major cryptocurrencies. The SNM token basic price is 1 ETH = 606 SNM. The number of SNM tokens issued for deposits in other cryptocurrencies will be calculated according to that cryptocurrency's current exchange rate against Ethereum.
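
In other words, a deposit is first valued in ETH and then multiplied by the 606 SNM rate. A minimal sketch of that conversion follows; the BTC/ETH rate in the example is a placeholder, not a rate quoted by SONM.

```python
SNM_PER_ETH = 606                    # basic price stated in the announcement

def snm_for(amount, eth_rate=1.0):
    """Tokens issued for a deposit, priced via its exchange rate to ETH.
    eth_rate is how many ETH one unit of the deposit currency buys; the
    BTC example below uses a made-up rate for illustration only."""
    return amount * eth_rate * SNM_PER_ETH

print(snm_for(10))                   # 10 ETH -> 6,060 SNM
print(snm_for(1, eth_rate=7.5))      # 1 BTC at a hypothetical 7.5 ETH/BTC -> 4,545 SNM
```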

More information, the SONM whitepaper and business overview are available at sonm.io/.

SONM CEO Sergey Ponomarev is available for interview.

SONM (Supercomputer Organized by Network Mining) is a decentralized worldwide fog supercomputer for general purpose computing, from site hosting to scientific calculations. The SONM company offers an effective way to solve a worldwide problem by creating a multi-purpose decentralized computational power market. Unlike widespread centralized cloud services, the SONM project implements a fog computing structure: a decentralized pool of devices, all of which are connected to the Internet.

Disclaimer: This is a paid press release; the product / service mentioned is not endorsed by The Merkle. Always do your own independent research. If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.

Originally posted here:

Supercomputer Organized by Network Mining (SONM) announces ICO Platform - The Merkle

Supercomputer Performs Largest-Ever Virtual Universe Simulation – R & D Magazine

Researchers from the University of Zurich have simulated the formation of our entire Universe with a large supercomputer. A gigantic catalogue of about 25 billion virtual galaxies has been generated from 2 trillion digital particles. This catalogue is being used to calibrate the experiments on board the Euclid satellite, which will be launched in 2020 with the objective of investigating the nature of dark matter and dark energy.

Over a period of three years, a group of astrophysicists from the University of Zurich has developed and optimised a revolutionary code to describe with unprecedented accuracy the dynamics of dark matter and the formation of large-scale structures in the Universe. As Joachim Stadel, Douglas Potter and Romain Teyssier report in their recently published paper, the code (called PKDGRAV3) has been designed to make optimal use of the available memory and processing power of modern supercomputing architectures, such as the "Piz Daint" supercomputer of the Swiss National Computing Center (CSCS). The code was executed on this world-leading machine for only 80 hours, and generated a virtual universe of two trillion (i.e., two thousand billion or 2 × 10^12) macro-particles representing the dark matter fluid, from which a catalogue of 25 billion virtual galaxies was extracted.

Studying the composition of the dark universe

Thanks to the high precision of their calculation, featuring a dark matter fluid evolving under its own gravity, the researchers have simulated the formation of small concentrations of matter, called dark matter halos, in which we believe galaxies like the Milky Way form. The challenge of this simulation was to model galaxies as small as one tenth of the Milky Way, in a volume as large as our entire observable Universe. This was the requirement set by the European Euclid mission, whose main objective is to explore the dark side of the Universe.

Measuring subtle distortions

Indeed, about 95 percent of the Universe is dark: the cosmos consists of 23 percent dark matter and 72 percent dark energy. "The nature of dark energy remains one of the main unsolved puzzles in modern science," says Romain Teyssier, UZH professor for computational astrophysics. It is a puzzle that can be cracked only through indirect observation: when the Euclid satellite captures the light of billions of galaxies in large areas of the sky, astronomers will measure very subtle distortions that arise from the deflection of light of these background galaxies by a foreground, invisible distribution of mass: dark matter. "That is comparable to the distortion of light by a somewhat uneven glass pane," says Joachim Stadel from the Institute for Computational Science of the UZH.

Optimizing observation strategies of the satellite

This new virtual galaxy catalogue will help optimize the observational strategy of the Euclid experiment and minimize various sources of error, before the satellite embarks on its six-year data collecting mission in 2020. "Euclid will perform a tomographic map of our Universe, tracing back in time more than 10 billion years of evolution in the cosmos," Stadel says. From the Euclid data, researchers will obtain new information on the nature of this mysterious dark energy, but also hope to discover new physics beyond the standard model, such as a modified version of general relativity or a new type of particle.

See the rest here:

Supercomputer Performs Largest-Ever Virtual Universe Simulation - R & D Magazine

Is HP Labs’ supercomputer the new hope for supersized data? – SiliconANGLE (blog)

With practically limitless data and applications demanding microseconds-fast insight, it's poor timing that Moore's law of perpetually increasing processor power is now AWOL.

"How do we get back exponential scaling on supply to meet this unending, exponential demand?" asked Kirk Bresniker (pictured, right), fellow, vice president and chief architect at HP Labs, at Hewlett Packard Enterprise Co.

"We will not regain it through the familiar technologies of the past three decades, nor a single point solution," Bresniker stated in an interview during HPE Discover in Las Vegas, Nevada.

This is borne out each day in HP Labs generally and in the company's ongoing work on The Machine, its memory-driven compute program, according to Andrew Wheeler (pictured, left), fellow, vice president and deputy director of HP Labs.

Bresniker and Wheeler spoke with John Furrier (@furrier) and Dave Vellante (@dvellante), co-hosts of theCUBE, SiliconANGLE Media's mobile live streaming studio, during HPE Discover. (* Disclosure below.)

After some mixed press for The Machine last December, HPE has been doggedly pushing it closer to prime time production, Wheeler explained.

"There are a lot of moving parts around it, whether it's around the open-source community and kind of getting their head wrapped around, what does this new architecture look like?" Wheeler said.

The Machine will require a chain of partners and ancillary parts to yield real use-cases, Wheeler added.

"We had the announcement around DZNE as kind of an early example," he said, referring to the German Center for Neurodegenerative Diseases' use of The Machine in analyzing massive medical data.

The Machine has also materialized what HPE calls the Computer Built for the Era of Big Data, a massive system running on a single memory.

Internet of Things data and, specifically, the intelligent edge are calling out for data training abilities like those in this supercomputer, according to Bresniker. Presently, almost all data ingested at the edge is thrown away before it's analyzed, let alone monetized, he added.

"The first person who understands, 'OK, I'm going to get one percent more of that data and turn it into real-time intelligence, real-time action,' that will unmake industries, and it will remake new industries," Bresniker concluded.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's independent editorial coverage of HPE Discover US 2017. (* Disclosure: TheCUBE is a paid media partner for HPE Discover US 2017. Neither Hewlett Packard Enterprise Co. nor other sponsors have editorial control on theCUBE or SiliconANGLE.)

Continued here:

Is HP Labs' supercomputer the new hope for supersized data? - SiliconANGLE (blog)

P6T7 WS SuperComputer | Motherboards | ASUS Global

CUDA parallel computing power supported

The motherboard will achieve outstanding and dependable performance in the role of a Personal Supercomputer when working in tandem with discrete CUDA technology, providing unprecedented return on investment. Users can count on up to 4 CUDA cards (one of them should be a Quadro graphics card) plugged into the P6T7 WS SuperComputer for intensive parallel computing on tons of data, delivering nearly 4 teraflops of performance. It is the best choice to work as a personal supercomputer on your desk instead of a computer cluster in a room.

No matter what your preference is, seven PCI-E Gen2 x16 slots give you sufficient I/O interfaces to fulfill your demand for a graphics or computing solution. You'll be able to run either kind of multi-GPU setup: the board features SLI on demand technology, supporting not only up to four graphics cards in 4-way SLI but also up to four double-deck GPU graphics cards. Whichever path you take, you can be assured of jaw-dropping graphics at a level previously unseen.

Bright, vivid LEDs shine around the ASUS brand name on the motherboard after a successful boot. With breath-like deep blue lighting pulsing in a regular tempo, ASUS Heartbeat makes the motherboard as vivid as life.

The P6T7 WS SuperComputer provides users with onboard SAS ports for hard drive upgrade flexibility. SAS hard drives offer safer, faster and more reliable data transfer and storage.

This motherboard is fully compatible with the ASUS SAS card (the SASsaby card series, optional). Faster, safer and more stable, SAS provides users with a better choice for storage expansion and upgrade needs.

The Diag. LED checks key components (CPU, DRAM, VGA card, and HDD) in sequence during the motherboard boot process. If an error is found, the LED next to the faulty device keeps lighting until the problem is solved. This user-friendly design provides an intuitive way to locate the root problem within seconds.

The best graphics performance you've ever had: the four PCIe slots running at x16 speed give you the fastest and most reliable 4-way SLI graphics performance available, whether you are engaged in mechanical, architecture, interior, aircraft, audio or video design, or playing games in your leisure time.

Bundled with the P6T7 WS SuperComputer motherboard, the G.P. Diagnosis card assists users by effortlessly and quickly providing precise system checks right after they switch on their PCs.

Visit link:

P6T7 WS SuperComputer | Motherboards | ASUS Global

Apple’s New iMac Pro Is The Ultimate Supercomputer, Here’s Why – Forbes


For more than a decade, gaming and graphic designing have occupied spot as the most resource-intensive tasks performed by a personal computer. That, however, has changed quickly in the past one year with the sudden rise to prominence of virtual reality ...


Excerpt from:

Apple's New iMac Pro Is The Ultimate Supercomputer, Here's Why - Forbes

Ohio Supercomputer Center runs largest scale calculation ever – Phys.Org

June 6, 2017 The Owens Cluster is the most powerful supercomputer in OSC history. Credit: Ohio Supercomputer Center

The Ohio Supercomputer Center recently displayed the power of its new Owens Cluster by running the single-largest scale calculation in the Center's history.

Scientel IT Corp used 16,800 cores of the Owens Cluster on May 24 to test database software optimized to run on supercomputer systems. The seamless run created 1.25 Terabytes of synthetic data.

Big Data specialist Scientel developed Gensonix Super DB, software designed for big data environments that can use thousands of data-processing nodes, compared to other database software that uses considerably fewer nodes at a time. Scientel CEO Norman Kutemperor said Gensonix Super DB is the only product designed and optimized for supercomputers to take full advantage of high performance computing architecture that helps support big data processing.

"This is a wonderful testimonial of the capabilities of Genoxonix Super DB for Big Data," Kutemperor said. "The robust nature of the OSC Owens Cluster provided the reliability for this large parallel job."

To demonstrate the power of Gensonix Super DB, the Scientel team created a sample weather database application to run using OSC's Owens Cluster. For this rare large run, Scientel used 600 of the system's available 648 compute nodes. The Owens Cluster has additional nodes dedicated to GPU use and data analytics, for a total of 824 nodes on the Dell-built supercomputer. During the run, the Owens Cluster reached a processing speed of over 86 million data transactions per minute with no errors.
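
The headline figures imply a few easy-to-check per-node and per-core numbers; the quick sanity check below uses only the values quoted in the article.

```python
# Quick consistency check using only the figures quoted in the article.
cores, nodes = 16_800, 600
peak_tx_per_min = 86_000_000

print(cores / nodes, "cores per node used in the run")                        # 28.0
print(round(peak_tx_per_min / cores), "transactions per minute per core")     # ~5,119
print(round(peak_tx_per_min / 60 / 1e6, 2), "million transactions per second overall")  # ~1.43
```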

"As the largest run ever completed on OSC's systems, Scientel helped us demonstrate the power of the Owens Cluster," said David Hudak, Ph.D., OSC interim executive director. "Owens regularly delivers a high volume of smaller-scale runs, providing outstanding price performance for OSC's clients. The ability to scale calculations to this size demonstrates another unique capability of Owens not found elsewhere in the state and unmatched by our previous systems."

With satisfactory test results on the software, Scientel will take Gensonix Super DB to the forefront of technology to process large varieties of data and compute-intensive problems in areas such as cancer research, drug development, traffic analysis, and space exploration. A single application written for Gensonix Super DB can use more than 100,000 cores to handle multiple petabytes of data in real time.

"[The OSC staff] are extremely knowledgeable and very capable of understanding customer requirements, even when jobs are super scaled," Kutemperor said. "Their support and enthusiasm for projects of this nature are outstanding."


See the original post here:

Ohio Supercomputer Center runs largest scale calculation ever - Phys.Org

New Five Nanometer Transistor Unveiled by IBM and Cohorts – TOP500 News

IBM and its partners have developed a novel technology to build 5nm chips, based on silicon nanosheet transistors. Compared to 10nm chips using FinFET transistors, the new technology promises to deliver a 40 percent performance increase, a 75 percent power savings, or some combination of the two.

5nm silicon nanosheet transistors. Credit: IBM

Dr. Huiming Bu, Director of Silicon Integration and Device Research at IBM Research, says the approach involves placing the nanosheets in horizontal layers during chip fabrication. "The change from today's vertical fin architecture to horizontal layers of silicon opened a fourth gate on the transistor that enabled superior electrical signals to pass through and between other transistors on a chip," Dr. Bu told TOP500 News.

Scientists at IBM Research and its partner SUNY Polytechnic Institute (Colleges of Nanoscale Science and Engineering's NanoTech Complex) have been working on nanosheet semiconductors in the lab for more than 10 years. But this week's announcement appears to put the technology on a glide path to commercialization. According to researchers, this is the first time anyone has demonstrated the feasibility of building chips with these nanosheets that will outperform comparable devices built with FinFET technology.

The 5nm chips will employ the same Extreme Ultraviolet (EUV) lithography used for IBM's 7nm test node chips that the company unveiled in 2015. That technology would deliver 20 billion transistors on a chip, while this new nanosheet approach would increase that to 30 billion.

According to the announcement, the technology also provides an extra benefit. Researchers have found a way to use EUV technology to adjust the width of the nanosheets within a single manufacturing process or chip design. The practical effect of this technique is described as follows:

This adjustability permits the fine-tuning of performance and power for specific circuits, something not possible with today's FinFET transistor architecture production, which is limited by its current-carrying fin height. Therefore, while FinFET chips can scale to 5nm, simply reducing the amount of space between fins does not provide increased current flow for additional performance.

FinFET, short for Fin Field Effect Transistor, is the 3D semiconductor technology Intel began using in commercial chips in 2012, and adopted over the following couple of years by GlobalFoundries and TSMC, among others. Most chip manufacturers plan to use some version of FinFET through the 7nm node. But as the transistor pitch shrinks, taller and thinner fin structures are needed, which makes their manufacture increasingly difficult.

But if all goes as planned, Samsung and GlobalFoundries will be able to ditch FinFET and move to nanosheet technology for the 5nm node. As IBM alliance partners, both of these chipmakers will have full access to this technology, since they share patents associated with the nanosheet transistor structure and fabrication.

Chip wafer with 5nm silicon nanosheet transistors. Credit: Connie Zhou

Keep in mind that this initial work represents a feasibility demonstration only. Mass manufacturing of nanosheet transistors could take place within a few years, but would depend upon moving these techniques into a production environment.

This latest development is likely to reignite arguments about the viability of Moore's Law. But the more immediate goal of shrinking transistor sizes down to 5nm, whether it happens at a Moore's Law rate or not, at least now has what appears to be a viable path. Moreover, Dr. Bu says they foresee a way for nanosheet transistors to scale beyond 5nm, perhaps using different materials.

The economics of building nanosheet fabs still has to be worked out. Intel says it will spend $7 billion to build its 7nm fab in Arizona, while GlobalFoundries says it will shell out several billion for new tools to update its Malta, New York facility for 7nm production. It's hard to imagine a 5nm facility based on a novel technology would be less expensive than either of those.

Nevertheless, the escalation in computing demand is making some of the economic arguments against continued transistor shrinkage irrelevant. Even a $10 or $20 billion fab could be a viable investment, given the insatiable appetite for computing from areas like artificial intelligence, virtual reality, the internet of things (IoT) and mobile devices, not to mention supercomputing. At the same time, energy for computing is becoming more expensive, both in operating costs and from the perspective of environmental impact. Given that, anything demonstrating 40 percent better performance or 75 percent better energy efficiency is likely to find its way into the market.

Details of the technology will be presented at the 2017 Symposia on VLSI Technology and Circuits conference being held this week in Kyoto, Japan.

Read more:

New Five Nanometer Transistor Unveiled by IBM and Cohorts - TOP500 News

The New iMac Pro Is Apple’s Most Bonkers Computer Ever – WIRED


Continue reading here:

The New iMac Pro Is Apple's Most Bonkers Computer Ever - WIRED

University of Bristol Launches 600-Teraflop Supercomputer – TOP500 News

The University of Bristol's newest supercomputer, Blue Crystal 4 (BC4), is three times faster than its predecessor and promises to accelerate the work of more than 1,000 researchers and engineers.

Targeted for applications in paleobiology, biochemistry, physics, molecular modeling, life sciences, and aerospace engineering, BC4 will provide 602 peak teraflops of raw computing horsepower. Early testing indicates application performance for simulations and advanced analytics in these domains has trebled compared to its older sibling, Blue Crystal Phase 3.

According to Dr. Christopher Woods, EPSRC Research Software Engineer Fellow at the University of Bristol, research that used to take a month now takes a week, and what took a week now takes only a few hours.

"We have researchers looking at whole-planet modeling with the aim of trying to understand the earth's climate, climate change and how thats going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from, he added.

The new system was also used to support a £1.8 million study looking into the evolution of the Ebola virus, and how it's impacting diagnostics and treatment. Dr. David Matthews, Senior Lecturer in Virology at the University of Bristol, who led the Bristol component of the study, noted that Blue Crystal was a critical tool for that research.

We used it to analyze raw data on the Ebola virus in 179 patient blood samples to determine the precise genetic make-up of the virus in each case, he said. This allowed the team to examine how the virus evolved over the previous year, informing public health policy in key areas such as diagnostic testing, vaccine deployment and experimental treatment options."

BC4 is a Lenovo NeXtScale cluster powered principally by 14-core Intel Broadwell Xeon processors. Each node is equipped with two of these processors, along with 128 GB of memory. The system also has 32 GPU-accelerated nodes, each of which includes two NVIDIA P100 Tesla processors. A visualization node equipped with NVIDIA GRID vGPUs is provided as well. Inter-node connectivity is supplied by Intel's Omni-Path fabric, running at 100 Gbps.
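
To see how node-level specs roll up into a system-level peak like the 602 teraflops quoted above, here is a back-of-the-envelope decomposition. Only the 2 x 14-core Broadwell CPUs per node and the 32 nodes with two P100s each come from the article; the CPU node count, clock speed and per-GPU peak below are assumptions chosen so the total lands near the published figure, not official BC4 specifications.

```python
# How a system-level peak-flops figure is assembled from node specs.
def cpu_node_tflops(sockets=2, cores=14, clock_ghz=2.4, flops_per_cycle=16):
    # Broadwell: 16 double-precision flops per core per cycle with AVX2 FMA.
    return sockets * cores * clock_ghz * flops_per_cycle / 1000.0

cpu_nodes = 280                       # assumed, for illustration only
p100_dp_tflops = 4.7                  # typical double-precision peak for a P100 (PCIe)
gpu_nodes, gpus_per_node = 32, 2

total = cpu_nodes * cpu_node_tflops() + gpu_nodes * gpus_per_node * p100_dp_tflops
print(f"~{total:.0f} Tflops peak under these assumptions")   # lands near the quoted 602
```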

The plan is to replicate applications running on the older Blue Crystal machine on BC4 in order to allow researchers to scale up their codes, as well as develop new ones. Due to the similarity in architecture, applications are expected to migrate easily.

BC4 was installed in 2016 and is currently ranked 301 on the TOP500 list. It was officially launched at a special symposium in Bristol on May 24, 2017.

See the article here:

University of Bristol Launches 600-Teraflop Supercomputer - TOP500 News

Researchers measure the coherence length in glasses using the supercomputer JANUS – Phys.Org

May 31, 2017 Janus II FPGA modules. Credit: janus-computer.com/galery-janusII

The JANUS supercomputer has enabled researchers to reproduce the experimental protocol of equilibrium dynamics in spin glasses. The success of the simulation connects theoretical and experimental physical developments using this new generation of computers.

One common characteristic of systems such as polymers, supercooled liquids, colloids and spin glasses is that they take a long time to reach equilibrium. Their behaviour is governed by very slow dynamics at low temperatures, so slow that thermal equilibrium is never attained in macroscopic samples. These dynamics are characterised by a correlation, or coherence, length: particles separated by less than this distance are highly correlated.

Theoretical physicists can calculate this microscopic correlation length by simulating a large number of particles and following their individual behaviour on a supercomputer. These kinds of studies cannot be carried out experimentally, because it is impossible to track all the particles of a system, but it is possible to measure a macroscopic correlation length by applying external fields to the system that modify the energy barriers between the different states.
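For context, these are the conventional definitions used in spin-glass simulation studies, not equations quoted from the paper itself: the microscopic coherence length is typically extracted from the spatial decay of a correlation function built from the overlap of two independent replicas,

\[
q_{x}(t) = s^{(1)}_{x}(t)\, s^{(2)}_{x}(t), \qquad
C_{4}(r,t) = \frac{1}{N}\sum_{x}\bigl\langle q_{x}(t)\, q_{x+r}(t)\bigr\rangle, \qquad
C_{4}(r,t) \sim \frac{1}{r^{a}}\,\exp\!\left[-\left(\frac{r}{\xi(t)}\right)^{b}\right],
\]

where the two replicas are copies of the system evolved with the same couplings, and the coherence length \(\xi(t)\) is read off from the long-distance decay of \(C_{4}\) or from integral estimators built from its moments.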

Thanks to the JANUS II supercomputer, researchers from Spain and Italy have refined the calculation of the microscopic correlation length and have reproduced the experimental protocol, enabling them to calculate the macroscopic length. The success of the simulation confirmed that the microscopic and the experimental (macroscopic) lengths are equal.

"This study provides a theoretical basis for studies in these physical systems, and the results obtained allow us to directly connect theoretical developments to the experimental ones. We took spin glasses as a reference, because they are cleaner to study as a reference system," explains Juan Jess Ruiz Lorenzo, a theoretical physicist at the UEx and one of the authors of this study which has been published in the magazine Physical Review Letters.

JANUS computer

The JANUS II computer is a new generation of supercomputer located at the Institute of Biocomputation and Physics of Complex Systems. "Thanks to this 'dedicated' computer, we are able to simulate one second of the experiment, within the range of the experimental times," says Juan Jesús Ruiz Lorenzo. JANUS II is a dedicated supercomputer based on reconfigurable FPGA processors.

The researchers have reproduced a landmark experiment on the Janus I and Janus II supercomputers that measures the coherence length in spin glasses. The coherence (correlation) length value estimated through analysis of microscopic correlation functions is quantitatively consistent with its measurement via macroscopic response functions.

Explore further: Revealing the fast atomic motion of network glasses with coherent X-rays

More information: M. Baity-Jesi et al, Matching Microscopic and Macroscopic Responses in Glasses, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.118.157202

Journal reference: Physical Review Letters

Provided by: University of Extremadura


Visit link:

Researchers measure the coherence length in glasses using the supercomputer JANUS - Phys.Org

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research – The Next Platform

May 31, 2017 Donna Loveland

In our ongoing quest to understand the human mind and banish abnormalities that interfere with life, we've always drawn upon the most advanced science available. During the last century, neuroimaging, most recently the magnetic resonance imaging (MRI) scan, has held the promise of showing the connection between brain structure and brain function.

Just last year, cognitive neuroscientist David Schnyer and colleagues Peter Clasen, Christopher Gonzalez, and Christopher Beevers published a compelling new proof of concept in Psychiatry Research: Neuroimaging. It suggests that machine learning algorithms, run on high-performance computers to classify neuroimaging data, may deliver the most reliable insights yet.

Their analysis of brain data from a group of treatment-seeking individuals with depression and healthy controls predicted major depressive disorder with a remarkable 75 percent accuracy.

Making More of MRI

Since MRI first appeared as a diagnostic tool, Dr. Schnyer observes, the hope has been that running a person through a scanner would reveal psychological as well as physical problems. However, the vast majority of MRI research done on depression, for example, has been primarily descriptive. While it tells how individual brains differ across various characteristics, it doesn't predict who might have a disorder or who might be vulnerable to developing one.

To appreciate the role the software can play, consider the most familiar path to prediction.

As Dr. Schnyer points out, researchers might acquire a variety of scans of individuals at a single time and wait 20 years to see who develops a disorder like depression. Then they'd go back and try to determine which aspects of their neuroimaging data would predict who ended up becoming depressed. In addition to the obvious problem of long duration, they'd face the challenge of keeping test subjects in the study as well as keeping biases out.

In contrast, machine learning, a form of artificial intelligence, takes a data analytics approach. Through algorithms (step-by-step problem-solving procedures), machine-learning applications adapt to new information by developing models from sample input. Because machine learning enables a computer to produce results without being explicitly programmed, it allows for unexpected findings and, ultimately, prediction.

Dr. Schnyer and his team trained a support vector machine (SVM) learning algorithm by providing it with sets of data examples from both healthy and depressed individuals, labeling the features they considered meaningful. The resulting model scanned subsequent input, assigning the new examples to either the healthy or the depressed category.
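As a rough illustration of that train-then-classify workflow (not the team's actual code or data), the pattern can be sketched with a generic support vector classifier; the synthetic feature matrix, labels, and scikit-learn pipeline below are stand-ins.

# Minimal sketch of the SVM workflow described above. Illustrative only: the
# synthetic features and labels stand in for real imaging-derived data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 150))      # 50 subjects x 150 imaging-derived features (synthetic)
y = rng.integers(0, 2, size=50)     # 1 = depressed, 0 = healthy control (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X_train, y_train)         # learn a separating hyperplane from the labeled examples
print("held-out accuracy:", model.score(X_test, y_test))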

With machine learning, as Dr. Schnyer puts it, you can start without knowing what you're looking for. You input multiple features and types of data, and the machine will simply go about its work to find the best solution. While you do have to know the categories of information involved, you don't need to know which aspects of your data will best predict those categories.

As a result, the findings are not only free of bias; they also have the potential to reveal new information. Commenting on the classification of depression, Dr. Schnyer's colleague Dr. Chris Beevers says he and the team are learning that depression presents itself as a disruption across a number of networks, and not just a single area of the brain, as once believed.

Handling the Data with HPC

Data for this kind of research can be massive.

Even with the current study's relatively small number of subjects, 50 in all, the dataset was large. The study analyzed about 150 measures per person. And the brain images themselves comprised hundreds of thousands of voxels, a voxel being a unit of graphic measurement, essentially a three-dimensional pixel (in this case, the image of a 2 mm x 2 mm x 2 mm portion of the brain). With about 175,000 voxels per subject, the analysis demanded computing far beyond the power of desktops.
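To get a feel for that scale, consider the combinatorics of fitting high-dimensional classifiers with the figures quoted above; the cross-validation and permutation-test counts below are assumptions used only to illustrate why the workload outgrows a desktop.

# Rough scale of the classification workload (the leave-one-out folds and the
# 10,000 permutations are assumptions for illustration, not the study's setup).
subjects = 50
features = 175_000                  # ~175k voxels per subject
folds = subjects                    # assumed leave-one-out cross-validation
permutations = 10_000               # assumed permutation test for significance

model_fits = folds * permutations
values_per_fit = (subjects - 1) * features
print(f"{model_fits:,} model fits over about {values_per_fit:,} values each")
# Hundreds of thousands of high-dimensional fits is what pushes this kind of
# analysis onto HPC systems rather than a desktop machine.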

Dr. Schnyer and his team found the high-performance computing (HPC) they needed at the Texas Advanced Computing Center (TACC), hosted by the University of Texas at Austin, where Dr. Schnyer is a professor of psychology.

TACC's machine, nicknamed Stampede, wasn't some generic supercomputer. Made possible by a $27.5 million grant from the National Science Foundation (NSF) and built in partnership with Dell and Intel Corporation, Stampede was envisioned and has performed as one of the nation's most powerful HPC machines for scientific research.

To appreciate the scale of Stampede's power, consider its 6,400 nodes, each of them featuring high-performance Intel Xeon Phi coprocessors. A typical desktop computer has 2 to 4 processor cores; Stampede's cores numbered 522,080.

Figure: Top left panel: whole-brain white matter tractography map from a single representative participant. Bottom left panel: a hypothetical graphical application of support vector machine algorithms to classify two categories; two feature sets are plotted against one another and a hyperplane is generated that best separates the groups based on the selected features. The maximum margin is the margin that maximizes the divide between the groups, and cases that lie on this maximum margin define the support vectors. Right panel: results of the SVM classification accuracy; normalized decision function values are plotted for MDD (blue triangles) and healthy controls (HC, red squares), with the zero line representing the decision boundary.

Moving Onward

In announcing Stampede, NSF noted it would go into full production in January 2013 and be available to researchers for four years, with the possibility of renewing the project for another system to be deployed in 2017. During its tenure Stampede has proven itself, running more than 8 million successful jobs for more than 11,000 users.

Last June NSF announced a $30 million award to TACC to acquire and deploy a new large-scale supercomputing system, Stampede2, as a strategic national resource to provide high-performance computing (HPC) capabilities for thousands of researchers across the U.S. In May, Stampede2 began supporting early users. It will be fully deployed to the research community later this summer.

NSF says Stampede2 will deliver a peak performance of up to 18 Petaflops, over twice the overall system performance of the current Stampede system. In fact, nearly every aspect of the system will be doubled: memory, storage capacity, and bandwidth, as well as peak performance.

The new Stampede2 will be among the first systems to employ cutting-edge processor and memory technology in order to continue to bridge users to future cyberinfrastructure. It will deploy a variety of new and upcoming technologies, starting with Intel Xeon Phi processors, previously code-named Knights Landing. It's based on the Intel Scalable System Framework, a scalable HPC system model for balancing and optimizing the performance of processors, storage, and software.

Future phases of Stampede2 will include next-generation Intel Xeon processors, all connected by Intel Omni-Path Architecture, which delivers the low power consumption and high throughput HPC requires.

Later this year the machine will integrate 3D XPoint, a non-volatile memory technology developed by Intel and Micron Technology. It's about four times denser than conventional RAM and extremely fast when reading and writing data.

A Hopeful Upside for Depression

The aim of the new HPC system is to fuel scientific research and discovery and, ultimately, improve our lives. That includes alleviating depression.

Like the Stampede project itself, Dr. Schnyer and his team are expanding into the next phase, this time seeking data from several hundred volunteers in the Austin community who've been diagnosed with depression and related conditions.

It's important to bear in mind that his published work is a proof of concept. More research and analysis are needed before reliable measures for predicting brain disorders find their way to a doctor's desk.

In the meantime, promising advances are happening on the software side as well as in hardware.

One area where machine learning and HPC are a bit closer to reality, in his terms, is cancer tumor diagnosis, where various algorithms classify tumor types using CT (computerized tomography) or MRI scans. "We're trying to differentiate among human brains that, on gross anatomy, look very similar," Dr. Schnyer explains. Training algorithms to identify tumors may be easier than figuring out fine-grained differences in mental difficulties. Regardless, progress in tumor studies contributes to advancing brain science overall.

In fact, the equivalent of research and development in machine learning is underway across commercial as well as scientific areas. In Dr. Schnyer's words, "there's a lot of trading across different domains. Google's DeepMind, for example, is invested in multi-level tiered learning, and some of that is starting to spill over into our world." "The powerful aspect of machine learning," he continues, "is that it really doesn't matter what your data input is. It can be your shopping history or brain imaging data. It can take all data types and use them equally to do prediction."

His own aims include developing an algorithm, testing it on various brain datasets, then making it widely available.

In demonstrating what can be discovered with machine learning and HPC as tools, Dr. Schnyer's powerful proof of concept offers a hopeful path toward diagnosing and predicting depression and other brain disorders.


See the original post:

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research - The Next Platform

Hyperion Research: Supercomputer Growth Drives Record HPC Revenue in 2016 – HPCwire (blog)

FRAMINGHAM, Mass., April 7, 2017 – Worldwide factory revenue for the high-performance computing (HPC) technical server market grew 4.4% in full-year 2016 to a record $11.2 billion, up from $10.7 billion in 2015 and from the previous record of $11.1 billion set in the exceptionally strong year of 2012, according to the newly released Hyperion Research Worldwide High Performance Technical Server QView. Hyperion Research is the new name for the former IDC HPC group.

Each quarter for the last 27 years, Hyperion Research analysts have conducted interviews with major hardware original equipment manufacturers (OEMs) in the technical computing space to gather information on their quarterly sales. Specifically, Hyperion collects data on the number of HPC systems sold, system revenue, system average selling price (ASP), the price band segment that a system falls into, architecture of the system, average number of processor packages per system, average number of nodes for each system sold, system revenue distribution by geographical regions, the use of coprocessors, and system revenue distribution by operating systems. We complement this supply-side data with extensive and intensive worldwide demand-side surveys of HPC user organizations to verify their HPC resources and purchasing plans in detail.

The 2016 year-over-year market gain was driven by strong revenue growth in high-end and midrange HPC server systems, partially offset by declines in sales of lower-priced systems.

Fourth Quarter 2016

2016 fourth-quarter revenues for the whole market grew 7.4% over the prior-year fourth quarter to reach $3.1 billion, while fourth-quarter revenues in the Supercomputers segment were up 45.6% over the same period in 2015. Hyperion Research expects the worldwide HPC server market to grow at a healthy 7.8% compound annual growth rate (CAGR) to reach $15.1 billion in 2020.
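Those forecast figures are self-consistent: compounding the 2016 total at the quoted growth rate for four years lands on the 2020 projection, as this quick check (written for illustration, not taken from the report) shows.

# Quick consistency check of the forecast: $11.2B compounded at 7.8% from 2016 to 2020.
revenue_2016 = 11.2            # billions of dollars, full-year 2016
cagr = 0.078                   # quoted compound annual growth rate
years = 4                      # 2016 -> 2020
projected_2020 = revenue_2016 * (1 + cagr) ** years
print(f"Projected 2020 revenue: ${projected_2020:.1f}B")   # ~$15.1B, matching the forecast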

"HPC servers have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness. For this reason, nations and regions across the world, as well as businesses and universities of all sizes, are increasing their investments in high performance computing," said Earl Joseph, CEO of Hyperion Research. "In addition, the global race to achieve exascale performance will drive growth in high-end supercomputer sales."

"Another important factor driving growth is the market for big data needing HPC, which we call high performance data analysis, or HPDA," according to Steve Conway, Hyperion Research senior vice president for research. "HPDA challenges have moved HPC to the forefront of R&D for machine learning, deep learning, artificial intelligence, and the Internet of Things."

Vendor Highlights

The Hyperion Research Worldwide High-Performance Technical Server QView presents the HPC market from various perspectives, including by competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC models.

For more information about the Hyperion Research Worldwide High Performance Technical Server QView, contact Kevin Monroe at kmonroe@hyperionres.com.

About Hyperion Research

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide.

Source: Hyperion Research

Read the rest here:

Hyperion Research: Supercomputer Growth Drives Record HPC Revenue in 2016 - HPCwire (blog)

Smallest Dutch supercomputer – Phys.org – Phys.Org

April 6, 2017 A team of scientists from the Netherlands has built a supercomputer the size of four pizza boxes. The Little Green Machine II has a computing power of more than 10,000 ordinary PCs. Credit: Simon Portegies Zwart (Leiden University)

A team of Dutch scientists has built a supercomputer the size of four pizza boxes. The Little Green Machine II has the computing power of 10,000 PCs and will be used by researchers in oceanography, computer science, artificial intelligence, financial modeling and astronomy. The computer is based at Leiden University (the Netherlands) and was developed with help from IBM.

The supercomputer has a computing power of more than 0.2 petaflops, or 200,000,000,000,000 calculations per second. This means the machine equals the computing power of more than 10,000 ordinary PCs.
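The 10,000-PC comparison follows from a simple ratio; the per-PC throughput assumed below (about 20 gigaflops) is an illustrative figure, not one from the article.

# Sanity check on the "more than 10,000 ordinary PCs" comparison.
system_flops = 0.2e15          # 0.2 petaflops = 200 trillion calculations per second
pc_flops = 20e9                # assumed sustained throughput of an ordinary desktop PC
print(f"Equivalent PCs: {system_flops / pc_flops:,.0f}")   # ~10,000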

The researchers constructed their supercomputer from four servers with four special graphics cards each and connected the servers via a high-speed network. Project leader Simon Portegies Zwart (Leiden University): "Our design is very compact. You could transport it with a carrier bicycle. Besides that, we only use about 1% of the electricity of a similarly large supercomputer."

Unlike its predecessor, Little Green Machine I, the new supercomputer uses professional graphics cards made for large scientific calculations, rather than the standard video cards found in gaming computers. The machine is also no longer based on Intel's x86 architecture, but on the much faster OpenPOWER architecture developed by IBM.

Astronomer Jeroen Bédorf (Leiden University): "We greatly improved the communication between the graphics cards in the last six months. Therefore we could connect several cards together to form a whole. This technology is essential for the construction of a supercomputer, but not very useful for playing video games."

To test the little supercomputer, the researchers simulated the collision between the Milky Way and the Andromeda Galaxy that will occur about four billion years from now. Just a few years ago the researchers performed the same simulation on the huge Titan computer (17.6 petaflops) at Oak Ridge National Laboratory (USA). "Now we can do this calculation at home," Jeroen Bédorf says. "That's so convenient."

Little Green Machine II is the successor to Little Green Machine I, which was built in 2010. The new small supercomputer is about ten times faster than its predecessor, which is retiring as of today. The name Little Green Machine was chosen because of the system's small size and low power consumption. It is also a nod to Jocelyn Bell Burnell, who discovered the first radio pulsar in 1967. That pulsar, the first ever found, was nicknamed LGM-1, where LGM stands for Little Green Men.

Explore further: China to develop prototype super, super computer in 2017


Go here to see the original:

Smallest Dutch supercomputer - Phys.org - Phys.Org