Lenovo builds 11.1 petaflop MareNostrum 4 supercomputer – DatacenterDynamics

Lenovo has delivered what it says is the world's largest, next-generation Intel-based supercomputer at the Barcelona Supercomputing Center (BSC).

The 11.1 petaflop high performance computing (HPC) system, called MareNostrum 4, is at the Chapel Torre Girona at the Polytechnic University of Catalonia, Barcelona, Spain, one of the most beautiful data centers in the world.

MareNostrum 1

Source: Barcelona Supercomputing Center

The HPC system will be used for scientific research ranging from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition.

It features 3,400 nodes of Lenovo's next-generation servers, with Intel Xeon scalable processors, interconnected by more than 60 kilometers of Intel Omni-Path Technology 100 Gbps network cabling.

While there are plans to expand the system, it is currently the 13th most powerful supercomputer according to the TOP500 list.

"The fast delivery, installation and optimization of the MareNostrum 4 system at BSC showcases Lenovo's end-to-end, high-performance computing strength," said Kirk Skaugen, the recently appointed president of Lenovo's data center group.

"Building on our 25 years of history in x86 server computing and our number one position in x86 server customer satisfaction and reliability, our goal at Lenovo is to be the largest supercomputing company on earth, helping solve [humanity's] biggest challenges through the rapid advancement of technology and innovation."

Madhu Matta, VP & GM of HPC and AI at Lenovo, added: "From the lab to the factory, to the on-site implementation teams, the delivery of a system of this size and complexity demands a superior level of integration and skill.

"It requires a focus on a holistic customer experience that very few companies are capable of delivering."

The company, which this week refreshed its data center lineup, also announced plans to upgrade its Global HPC Innovation Center in Stuttgart, Germany with 6,000 cores of the next-generation Intel Xeon scalable processors and Nvidia GPUs.

Read the rest here:

Lenovo builds 11.1 petaflop MareNostrum 4 supercomputer - DatacenterDynamics

Simply Beautiful: MareNostrum 4 Supercomputer Sports 13.7 Petaflops – insideHPC

Over at Lenovo, Gavin O'Hara writes that the world's most beautiful supercomputer center now sports a 13.7 petaflop system so novel in design that it has captured the attention of the global HPC community. It landed at #13 on the TOP500 this week, and that's just the beginning.

In a converted 19th-century church on the outskirts of Barcelona sits a computer so overwhelmingly powerful, it could someday save us all.

Save us from what? We're not sure yet. But one day soon a scientific or medical research breakthrough will happen and its origins will be traced back to a glass-encased room inside the Torre Girona Chapel. Sitting within is a hulking mass of supercomputing power: a whopping 3,400 servers connected by 48 kilometers of cable and wire.

Torre Girona, nestled inside the Barcelona Supercomputing Center on the campus of the Polytechnic University of Catalonia, was used as a Catholic Church until 1960. The church was deconsecrated in the 1970s but, the longer you spend here seeing how supercomputing speed can enable lightning-fast insight, the more you start to sense the presence of a higher power.

This is technology at its inquisitive best. And it all starts with the specs of the monster they call MareNostrum.

Specifications

To consider the sheer power and scale of MareNostrum's High Performance Computing capabilities is to test your own knowledge of large-scale counting units. You see, for supercomputing nerds it's all about FLOPs, or Floating Point Operations per Second. The original MareNostrum 1, installed in 2004, had a calculation capacity of 42.35 teraflops, which meant 42.35 trillion operations per second. Not bad, I guess, until you consider that the 2017 version (MareNostrum 4) blows that out of the water; it possesses 322 times the speed of the original.

"The new supercomputer has a performance capacity of 13.7 petaflops and will be able to carry out 13,677 trillion operations per second," says Lenovo VP Wilfredo Sotolongo as we gaze upwards inside the chapel. Sotolongo not only works closely with the BSC, he actually lives near Torre Girona in Barcelona.

As I try to get my head around all these unfamiliar units of measure, Sotolongo lays it out for me: "In computing, FLOPs are a measure of computer performance. Node performance..." My mind wanders a bit before I tune back in. "A petaflop is a measure of a computer's processing speed and can be expressed as a quadrillion, or thousand trillion, floating point operations per second. A thousand teraflops. 10 to the 15th power FLOPs." Etc., etc.

He sees my head spinning so, mercifully, he simplifies it. "Basically, MareNostrum 4 is 10 times more powerful than MareNostrum 3." OK, I can relate to that, but I one-up him anyway: how many times more powerful is it than my 2016 ThinkPad X1 Carbon laptop? He laughs. "About 11,000 times." Gulp.
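As a rough sanity check on those ratios, here is a small Python sketch using only the figures quoted above; the laptop number is simply the performance implied by Sotolongo's "about 11,000 times" estimate, not a measured spec.

```python
# Rough sanity check of the performance ratios quoted in the article.
MARENOSTRUM_1_TFLOPS = 42.35      # 2004 system
MARENOSTRUM_4_TFLOPS = 13_677.0   # 2017 system: 13.7 petaflops = 13,677 teraflops

ratio_vs_original = MARENOSTRUM_4_TFLOPS / MARENOSTRUM_1_TFLOPS
print(f"MareNostrum 4 vs MareNostrum 1: {ratio_vs_original:.0f}x")  # ~323x (the article says 322)

# Working backwards from the "about 11,000 times" laptop comparison,
# the implied laptop performance would be roughly:
implied_laptop_tflops = MARENOSTRUM_4_TFLOPS / 11_000
print(f"Implied laptop performance: {implied_laptop_tflops:.2f} teraflops")  # ~1.24 TF
```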

It's Really About the Workloads

What kinds of workloads require the type of computing power found in the MareNostrum cluster? There are a lot, it turns out. Because HPC systems deliver results in a fraction of the time of a single desktop or workstation, they are of increasingly vital interest to researchers in science, engineering and business. They are all drawn by the possibility of solving sprawlingly complex problems in their respective fields.

Over the years, MareNostrum has been called on to serve more than 3,000 such projects. On any given day, as the Catalonian sun streams through the stained-glass windows of Torre Girona, MareNostrum manages mountains of data and spits out valuable nuggets of insight to a staff of more than 500 that could someday help solve some of humanity's greatest challenges.

Gavin O'Hara leads Lenovo's Global Social Content & Community team. He's been with Lenovo since 2005 and, in 2010, became the second person in the company to do social media. He is a big believer in unselfish brand storytelling and lives by the mantra "people before products." As Lenovo's chief storyteller, he scours the Earth in search of the inspiring and the unexpected. In a previous life, he worked as a writer, journalist and musician. Gavin is a Virginia native, a Syracuse University graduate and a long-time North Carolina resident.


More here:

Simply Beautiful: MareNostrum 4 Supercomputer Sports 13.7 Petaflops - insideHPC

AMD Challenges Intel’s Datacenter Dominance with New EPYC Processors – TOP500 News

For the first time in several years, AMD has brought a server chip to market that provides some real competition to Intel and its near-total domination of the datacenter market. The new AMD silicon, known as the EPYC 7000 series, comes with up to 32 cores, along with a number of features that offer some useful differentiation against its Xeon competition.

The new AMD processors are broadly aimed at the cloud and datacenter markets, including the high performance computing space. With regard to the latter, EPYC is going to have some challenges in HPC environments, but AMD definitely has a case to make for its use there. Before we dive into that subject, let's look at the feature set of the new products.

The EPYC processors launched this week come with 8 to 32 cores, and like their Xeon rivals, can execute two threads per core. AMD has decided to offer only single-socket and dual-socket versions, leaving the much smaller quad-socket-and-above market to Intel.

Clock frequencies don't vary all that much across the range of EPYC SKUs; they start at 2.0 GHz and top out at 2.4 GHz. As you'll note from the tables below, the frequencies aren't necessarily higher at the lower core counts, as one might expect. The same holds true for the max boost clock frequencies.

EPYC also features a new interconnect known as the Infinity Fabric, which takes the place of AMD's HyperTransport bus on the old Opterons. Except in this case, the fabric is used to connect the internals of the EPYC MCM (the individual dies that make up the chip) as well as the memory and the processors themselves (in a dual-socket setup). Socket-to-socket communication is up to 152 GB/second, while memory bandwidth tops out at 171 GB/sec.

Across the EPYC product set, AMD is claiming significantly higher integer performance (21 to 70 percent higher) compared to comparably priced Xeon Broadwell processors, based on SPECint_rate_base2006. And for the top-end 32-core EPYC 7601 chip, AMD says its floating point performance is 75 percent higher than that of Intel's Broadwell E5-2699A v4 processor, based on SPECfp_rate_base2006.

No doubt, some of the better performance is due to the generally higher core counts of the EPYC parts compared to the comparably priced Xeon Broadwell SKUs. But that's sort of beside the point. The real issue is that, for the most part, EPYC processors will not be competing against Broadwell, but rather against Intel's new Skylake Xeon processors, which are expected to launch in July.

The Skylake design should offer better overall performance than Broadwell. More importantly, Skylake will support the AVX-512 instruction set, which will boost vector math performance (both integer and floating point) significantly compared to its predecessor. So AMD's performance-per-dollar comparisons will have to be revisited once Skylake launches, but it's reasonable to assume that Intel's top-end chips will outrun the EPYC 7601 in floating point performance, even if AMD manages to offer more value.

AMD does appear to have a clear advantage in memory support. Each EPYC processor is equipped with eight memory channels, which support up to 16 DIMMs of DDR4 DRAM at speeds up to 2,666 MHz. So each socket can access up to 2 TB. On a dual-socket system, that doubles to 4 TB. Two EPYC 7601 processors in a server deliver 146 percent more bandwidth on the STREAM benchmark than a comparable Broadwell Xeon box. And even though Skylake Xeons will supposedly support six memory channels to Broadwell's four, it looks like EPYC's memory advantage will prevail for the time being.
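The 171 GB/sec per-socket memory bandwidth figure quoted earlier is consistent with eight channels of DDR4-2666; here is a back-of-the-envelope check, assuming the standard 64-bit (8-byte) data bus per channel.

```python
# Back-of-the-envelope check of EPYC's per-socket peak memory bandwidth,
# assuming 8 channels of DDR4-2666 with a 64-bit (8-byte) data bus per channel.
channels = 8
transfers_per_sec = 2_666_000_000   # DDR4-2666: 2,666 mega-transfers/second
bytes_per_transfer = 8              # 64-bit channel width

peak_bw_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Peak per-socket bandwidth: ~{peak_bw_gb_s:.0f} GB/s")  # ~171 GB/s
```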

EPYC's support for a bigger memory footprint, and by extension higher bandwidth, is designed to offer more performance for data-demanding applications, which are particularly sensitive to the worsening bytes/flops (or ops) ratio of modern processors. AMD's calculation here is that for most datacenter applications these days, memory access, rather than compute, is the limiting factor. The bigger memory footprint also makes the single-socket EPYC solution more attractive, since many customers often populate the second socket solely for the purpose of adding more memory.

The EPYC processor also offers an ungodly amount of PCIe support: 128 lanes per socket, as compared to the expected 48 lanes for the Skylake Xeon processor. That is enough to attach four to six GPUs or up to 24 NVMe drives. This also buttresses the case for single-socket servers, since, once again, you can avoid using the other socket to get access to additional devices. In fact, in a dual-socket configuration, you get the same 128 PCIe links, since the Infinity Fabric uses 64 of the PCIe links to connect to the other processor.
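A quick illustration of how that lane budget breaks down, assuming the usual x16 links for GPUs and x4 links for NVMe drives; the lane widths are assumptions for illustration, not figures from the article.

```python
# Illustrative PCIe lane budget for one EPYC socket (128 lanes),
# assuming x16 links per GPU and x4 links per NVMe drive.
total_lanes = 128
lanes_per_gpu = 16
lanes_per_nvme = 4

print(total_lanes // lanes_per_gpu)   # 8 GPUs at full x16 in theory; the "four to six"
                                      # figure above leaves lanes free for NICs and storage
print(total_lanes // lanes_per_nvme)  # 32 x4 devices in theory; 24 NVMe drives as stated
```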

In summary, while even the fastest EPYC processors are unlikely to outperform the top Skylake parts in pure computational horsepower, from a performance-per-dollar or performance-per-watt-per-dollar standpoint, they may be extremely competitive. And for memory capacity and performance, as well as PCIe connectivity, they will outshine their Intel counterparts. Apparently, that was enough to attract Baidu and Microsoft, who are early customers of record.

For HPC use, EPYC may appear to be something of a tradeoff. It's worth considering, though, that in 2017, the cheapest and most efficient flops are found on GPUs or other manycore processors, and not on multicore CPUs (with the caveat that not all flops are equally accessible to every application across these platforms). In addition, for many HPC applications, memory access is the most critical bottleneck.

With that in mind, AMD does have a high performance story to tell. It's regrettable that the company did not use the recent ISC conference to tell it. Instead, the EPYC launch was announced in Austin, Texas, during the company's Financial Analyst Day on June 20, and no one from the server side was dispatched to Frankfurt, Germany this year. (AMD did talk about their new Radeon Instinct GPUs for deep learning work at ISC, and we'll be reviewing those in an upcoming article.)

It's certainly understandable that AMD is focusing on the cloud and hyperscale space for the initial EPYC launch, given that it represents a bigger and faster-growing market than that of HPC. But as Intel discovered a while ago, being a leader at the high end of the market has downstream benefits as well.

The next time the HPC faithful are gathered in large numbers will be in November at SC17, and by that time the Skylake Xeon processors will be available for head-to-head comparisons on real applications. It would serve AMD well to be ready to talk about their HPC ambitions for EPYC at the Denver event.

More here:

AMD Challenges Intel's Datacenter Dominance with New EPYC Processors - TOP500 News

Lenovo unveils world’s largest Intel-based supercomputer – BetaNews – BetaNews

Lenovo has revealed what it says is part of the next generation of supercomputers.

At the International Supercomputing Conference in Frankfurt, the company confirmed it has completed the delivery and implementation of the world's largest Intel-based supercomputer at the Barcelona Supercomputing Center (BSC).

Called MareNostrum 4, the 11.1 petaFLOP supercomputer will be housed in the world's "most beautiful data center" at the Chapel Torre Girona at the Polytechnic University of Catalonia in Barcelona. There it will be used to power a number of scientific investigations, ranging from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition.

The system is powered by more than 3,400 nodes of Lenovo's next-generation servers, featuring Intel Xeon scalable processors, interconnected with more than 60 kilometers of high-speed, Intel Omni-Path Technology 100 Gb/s network cabling.

The new system has already staked a claim to be one of the biggest in the world, currently listed at number 13 on the TOP500 list, released today, and Lenovo says it will also continue to grow over time.

"From the lab to the factory, to the on-site implementation teams, the delivery of a system of this size and complexity demands a superior level of integration and skill," says Madhu Matta, VP & GM of High Performance Computing and Artificial Intelligence at Lenovo. "It requires a focus on a holistic customer experience that very few companies are capable of delivering."

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

More here:

Lenovo unveils world's largest Intel-based supercomputer - BetaNews - BetaNews

DAVIDE Supercomputer Named to TOP500, Green500 Lists – HPCwire (blog)

FRANKFURT, Germany, June 21, 2017 – At the International Supercomputing Conference 2017, IBM Business Partner and OpenPOWER Foundation member E4 Computer Engineering, the Italian technology provider of leading-edge solutions for HPC, data analytics and AI, announced that D.A.V.I.D.E. (Development for an Added Value Infrastructure Designed in Europe), a multi-node cluster powered by IBM POWER8 processor technology with NVIDIA Tesla P100 GPU accelerators and NVIDIA NVLink interconnect technology, entered the prestigious TOP500 list.

Twice a year, Top500.org publishes the TOP500 and Green500 lists. The TOP500 ranks supercomputing environments by performance capabilities, as determined by the Linpack benchmark, and recognizes the vendors and technologies that power the most powerful data intensive environments in the world. The Green500 list ranks the top 500 supercomputers in the world by energy efficiency.

D.A.V.I.D.E., developed within the Partnership for Advanced Computing in Europe (PRACE), provides a compelling solution for workloads with highly parallelized code and demanding memory bandwidth requirements such as weather forecasting, QCD, machine learning, computational fluid dynamics and genomic sequencing.

The supercomputer represents the third generation of the Pre-Commercial Procurement project for the development of a Whole-System Design for Energy Efficient HPC, and its innovative design uses the most advanced technologies to create a leading edge HPC cluster that provides powerful performance, low power consumption and ease of use.

D.A.V.I.D.E. was built with best-in-class components. The machine has a total of 45 nodes connected via InfiniBand, with a total peak performance of 990 TFlops and an estimated power consumption of less than 2 kW per node. Each node is a 2U form factor and hosts two IBM POWER8 processors with NVIDIA NVLink and four Tesla P100 data center GPUs, with the intra-node communication layout optimized for best performance. Nodes are connected with efficient EDR 100 Gb/s networking.
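The 990 TFlops peak figure is roughly what you would expect from the GPU count alone; a minimal sketch follows, assuming the commonly quoted 5.3 TFlops FP64 peak for an NVLink (SXM2) Tesla P100, which is an assumption rather than a figure from the announcement.

```python
# Rough reconstruction of D.A.V.I.D.E.'s ~990 TFlops peak, assuming ~5.3 TFlops
# FP64 per NVLink-attached Tesla P100 (assumed spec, not from the press release).
nodes = 45
gpus_per_node = 4
p100_fp64_tflops = 5.3

gpu_peak_tflops = nodes * gpus_per_node * p100_fp64_tflops
print(f"GPU contribution: ~{gpu_peak_tflops:.0f} TFlops")  # ~954 TFlops; the two POWER8
                                                           # CPUs per node supply most of the rest
```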

The multi-node cluster was fully configured in April 2017 at E4's facility in order to perform initial testing, running baseline performance, power and energy benchmarks using standard codes in an air-cooled configuration. D.A.V.I.D.E. is currently available for a select number of users for porting applications and profiling energy consumption.

A key feature of the multi-node cluster is an innovative technology for measuring, monitoring and capping the power consumption of the node and of the whole system, through the collection of data from the relevant components (processors, memory, GPUs, fans) to further improve energy efficiency. The technology has been developed in collaboration with the University of Bologna.

"We are delighted to have reached this prestigious result to be included in the TOP500 list. The team worked very hard to design and develop this prototype and is very proud to see the system up and running; we look forward to seeing it fully available to the scientific community," said Cosimo Gianfreda, CTO, E4 Computer Engineering. "With our work we have demonstrated that it is possible to integrate cost effective technologies to achieve high performance and significantly improve energy efficiency. We thank all our partners for the close collaboration that contributed to this great achievement."

"HPC and AI are converging and the D.A.V.I.D.E. supercomputer will help the scientific community to run both kinds of workloads on an accelerated system," said Stefan Kraemer, Director of HPC for EMEA at NVIDIA. "Energy-efficient accelerated computing is the only way to reach the ambitious goals Europe has set for its HPC future."

About E4 Computer Engineering

Since 2002, E4 Computer Engineering has been innovating and actively encouraging the adoption of new computing and storage technologies. Because new ideas are so important, we invest heavily in research and hence in our future. Thanks to our comprehensive range of hardware, software and services, we are able to offer our customers complete solutions for their most demanding workloads in HPC, Big Data, AI, Deep Learning, Data Analytics and Cognitive Computing, and for any challenging storage and computing requirements. E4. When Performance Matters.

Source: E4 Computer Engineering

Visit link:

DAVIDE Supercomputer Named to TOP500, Green500 Lists - HPCwire (blog)

US Slips in New Top500 Supercomputer Ranking – IEEE Spectrum

Photo: CSCS. The Piz Daint supercomputer, housed at the Swiss National Supercomputing Center, edged U.S. supercomputers out of the top three positions.

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June's TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, it having enjoyed that first-place ranking since June of 2016 when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year's benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China's National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still enjoys the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy's Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

Not since 1996 has a U.S. supercomputer not made it into one of the first three slots on the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped it from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all the creations of U.S. companies.

Even the second-ranking Tianhe-2 is based on Intel processors and co-processors. It's only the TaihuLight that is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold, that is, to have a computer that can perform 10^18 floating-point operations per second. China hopes to do so by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing's for sure: it'll be a horse race worth watching.


See original here:

US Slips in New Top500 Supercomputer Ranking - IEEE Spectrum

Championship 2017/18: Super Computer predicts table after five games of new season – talkSPORT.com

The Championship fixtures for 2017/18 have been announced and, here at talkSPORT, we cannot wait to get the season started.

Kick-off may still be around two months away, but it does not stop supporters from dreaming about how their side will start the campaign.


The first game you look for is usually the season opener, followed by the final match, as well as the derby clashes home and away, plus meetings with the newly-promoted sides.

Another thing is the first month or so of fixtures - how your side's start could determine the way the whole season pans out, whether it could see them pushing for the automatic spots, a battle for a play-off place or scraping for points and playing catch up near the bottom.

Well, no fear - talkSPORT has done the hard work for you.


We have fed the data into the super computer, assessing the opening five rounds of the second tier, with predicted rankings. Bear in mind plenty can change between now and the start of the season, as the transfer window opens and managers sort out squads.

According to our system, Sunderland will feel the full force of a late managerial appointment, play-off finalists Reading will have a slow start while Harry Redknapp will have his Birmingham side well prepared.

Of course, the standings above have been collated just for fun; it is interesting to speculate, but as we all know, football has a funny way of turning expectations on their head.

Click the right arrow, above, to see how the Championship table might look after five games and comment with your predictions below...

talkSPORT and talkSPORT 2 have exclusive radio rights to the Sky Bet EFL Championship, League One and League Two for the next three seasons.

The talkSPORT network will be the only place to hear 110 regular season EFL matches as well as the play-off semi-finals and finals - read more here.

Visit link:

Championship 2017/18: Super Computer predicts table after five games of new season - talkSPORT.com

US Falls Behind China and Switzerland in Supercomputer Race – Fortune

Staff analyze the Tianhe-1 supercomputer at the National Supercomputing Center on Nov. 2, 2010 in Tianjin, China. VCG via Getty Images

The U.S. may need a more powerful supercomputer.

Two Chinese supercomputers and an upgraded supercomputer in Switzerland rank ahead of the U.S. in a biannual list of top supercomputers released Monday by the TOP500 organization, which tracks supercomputer speeds.

It is only the second time that the U.S. is absent from the top 3 most powerful supercomputers since the organization started compiling the rankings 24 years ago. In the previous ranking, published in November, the top U.S. supercomputer, located at Oak Ridge National Laboratory in Oak Ridge, Tenn., was No. 3.


"The only other time this occurred was in November 1996, when three Japanese systems captured the top three spots," the organization said in a statement.

But it wasn't all bad news for the U.S.

The U.S. has five of the top 10 supercomputers on the list, the most of any country. Additionally, the U.S. has 169 supercomputers in the top 500, followed by China with 160.

As for the companies supplying the parts for the supercomputers, Intel (INTC) is the biggest, with 464 of the top supercomputers using its processors. IBM (IBM) and its Power processors are installed in 21 supercomputers, followed by AMD's (AMD) chips, which are used in six supercomputers.


Nvidia's (NVDA) GPU chips, which are specialized for heavy data crunching like deep learning, are being used in 91 supercomputers to boost their performance beyond what the typical chips inside could deliver. For example, the Swiss National Supercomputing Center outfitted its supercomputer with Nvidia's chips, which caused the machine to double its performance and climb from No. 8 to No. 3 in the supercomputer rankings.

Read the original here:

US Falls Behind China and Switzerland in Supercomputer Race - Fortune

China still has the world’s fastest supercomputer, but the US wants to change that – Recode

China holds the top two spots for fastest computers in the world, and Switzerland holds the third, with the U.S. in the fourth, fifth and sixth spots.

The Top500 list of the most powerful supercomputers in the world was released yesterday at the 2017 International Supercomputing Conference in Frankfurt, Germany.

But the U.S. might not miss its top spot for long. The Department of Energy awarded six companies a total of $258 million last Thursday to further the research and development of the world's first exascale supercomputer. There are no computers that powerful today.

The U.S. formerly held the third spot, but this time it was edged out by a system from the Swiss National Supercomputing Centre, which moved up from eighth place. This is only the second time in 24 years of compiling the Top500 list that the U.S. did not have a computer place in one of the top three positions.

These computers process at petascale speeds, meaning their capabilities are measured in terms of one quadrillion (1,000,000,000,000,000) calculations per second. To put that in perspective, consumer laptops now operate at gigascale, which is one billion calculations per second.

The U.S. companies that received government funding - Hewlett Packard, IBM, Intel, Nvidia, Advanced Micro Devices and Cray - will all work to solve problems in energy efficiency, reliability and overall performance of a national exascale computer system.

An exascale computer is capable of processing a quintillion (1,000,000,000,000,000,000) calculations per second. That's about a billion times more powerful than a consumer laptop.
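A one-line check of that comparison, using the gigascale and exascale figures defined above:

```python
# Scale comparison implied by the figures above: gigascale laptop vs. exascale system.
laptop_ops_per_sec = 1e9       # gigascale: one billion calculations per second
exascale_ops_per_sec = 1e18    # exascale: one quintillion calculations per second

print(f"{exascale_ops_per_sec / laptop_ops_per_sec:.0e}")  # 1e+09, i.e. about a billion times
```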

Exascale-level computing would allow scientists to make extremely precise digital simulations of biological systems, which could uncover answers to pressing questions like climate change and growing food that can withstand drought.

"As you develop models that are more sophisticated, that include more of the physics, chemistry and environmental issues that are important in predicting the climate, the computing resources you need increase," said Thom Dunning, a chemistry professor at the University of Washington and the co-director of the Northwest Institute for Advanced Computing.

Chemists are leading a lot of the advances in computing power, since advanced biological modeling requires really powerful processing. With more detailed biological modeling, chemists can, for example, learn how plant cells react to drought, which can help to better engineer crops - a project Dunning is working on with his research group.

The more powerful the computer, the more realistic the models are, which in turn provide scientists with more reliable predictions about the future and more concrete recommendations about what companies and governments need to do.

Exascale computing would also have a tremendous impact on the country's national security. The National Security Agency and other law enforcement organizations collect more data in their dragnet digital surveillance operations than can often be processed in a timely, meaningful way, according to Dunning. With higher processing power, that data can be analyzed quickly to assess and predict potential threats.

The companies awarded the grants will cover at least 40 percent of the cost of the research projects themselves.

"Creating an exascale computer is well beyond anything that a private company can do on its own," said Dunning, who added that building an exascale computer is a multibillion-dollar effort.

U.S. investment in building an exascale machine will have benefits beyond just finishing the computer itself. The research and development gleaned along the way will flow down into lower-level systems that will give the U.S. a competitive advantage in terms of making powerful computing much more affordable and accessible, Dunning said.

Here's a list of the Top 10 most powerful supercomputers in the world. The U.S. holds the most spots on the list, with five supercomputers that made the cut.

Go here to read the rest:

China still has the world's fastest supercomputer, but the US wants to change that - Recode

The US falls farther down supercomputer rankings than it’s been in over 20 years – BGR


The United States is competing with China on so many fronts it's impossible to name them all, but one of the most visible rivalries between the two countries is based on computing power. In the newest TOP500 ranking of the world's most powerful ...

Follow this link:

The US falls farther down supercomputer rankings than it's been in over 20 years - BGR

The US Is Investing $258 Million to Build a More Powerful Supercomputer – Futurism

In Brief: For the first time since 1996, the United States is no longer home to one of the three fastest supercomputers in the world. To combat this, the DOE has announced plans to invest $258 million to help develop the next-generation device.

New Tech Race

The 20th century space race ushered in some of the most significant scientific discoveries of the era. Now, the efforts of private companies like SpaceX, Virgin Galactic, and Blue Origin, as well as traditional governmental agencies like NASA, have sparked a new space race that's bringing about next-level space technologies.

However, the Space Race 2.0 isn't the only technological competition in the world today; the smartest minds across the globe are competing to create the most powerful supercomputer on the planet.

Since 1996, the United States has consistently been home to one of the three fastest supercomputers in the world. Unfortunately for the U.S., that streak has ended as the Department of Energy's (DOE) Titan supercomputer has been bumped to the number four slot. The Swiss National Supercomputing Centre's Piz Daint now holds the bronze following an upgrade involving the addition of Nvidia GPUs.

The U.S. is not taking this bump to fourth place lying down. Last week, the DOE announced that it was making $258 million available to help fund the next big supercomputer.

According to MIT Technology Review, the U.S. government expects to have a system that can perform one quintillion operations per second by 2021. That would be 50 times faster than Titan and 10 times faster than China's TaihuLight, the current world leader.

Of course, the rest of the world won't spend the next four years content with what they've already created. China is looking to further cement its place at the top of the supercomputing heap by heavily investing in the next generation of supercomputers. The nation is even setting a more ambitious goal for itself than the U.S.: it believes its more powerful machine will be ready by 2020.

Ultimately, this race for the worlds most powerful supercomputer will benefit us all, as the devices will help humanity with everything from healthcare to predicting the weather. Truly, there are no losers when innovation is the goal.

Excerpt from:

The US Is Investing $258 Million to Build a More Powerful Supercomputer - Futurism

Swiss supercomputer edges US out of top spot – BBC News – BBC.com – BBC News



Read the original here:

Swiss supercomputer edges US out of top spot - BBC News - BBC.com - BBC News

DoE Awards $258M for Exascale Supercomputer Research | News … – PCMag

AMD, Cray, HPE, IBM, Intel, and Nvidia receive funding to push ahead with energy-efficient exascale supercomputers that use tens, not hundreds, of megawatts.

The fastest supercomputer in the US today is Titan (currently third fastest in the world). Located at the Oak Ridge National Laboratory in Tennessee, it utilizes a hybrid architecture consisting of AMD CPUs and Nvidia GPUs to offer 20+ petaflops of performance requiring 8.2 megawatts of power. That may be fast today, but the future is exascale supercomputers, which achieve 1,000+ petaflops of performance.

The Department of Energy (DoE) realizes exascale supercomputers are "critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences." But energy efficiency is of great concern. If Titan requires 8.2 megawatts to achieve 20 petaflops, imagine what 1,000 petaflops would require without some major breakthroughs. So the DoE created the PathForward program to focus on energy efficient exascale computing research.

Last week, the DoE chose six technology companies to receive $258 million of funding over a three-year period. Those companies are AMD, Cray, HPE, IBM, Intel, and Nvidia. Each company will also add at least 40 percent additional funding, taking the three-year total investment to $430 million split between hardware, software, and application development research.

The overall goal is to see a huge increase in computing power over today's best supercomputers (50x increase) without a huge increase in energy consumption. Nvidia gets more specific, stating the DoE's ambitious goal is, "to achieve exascale performance using only 20-30 megawatts." The company also points out that attempting to achieve an exascale computer with CPUs alone would take gigawatts of energy.
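To put those targets in perspective, here is a rough scaling of Titan's efficiency using the figures quoted above; this is a back-of-the-envelope sketch, not a DoE projection.

```python
# Naive scaling of Titan's power efficiency to exascale, using the figures above.
titan_petaflops = 20.0
titan_megawatts = 8.2

power_at_exascale_mw = 1000 / titan_petaflops * titan_megawatts
print(f"Exascale at Titan-level efficiency: ~{power_at_exascale_mw:.0f} MW")  # ~410 MW

# The 20-30 MW target for 1,000 petaflops therefore implies roughly a
# 14-21x improvement in flops per watt over Titan.
titan_pf_per_mw = titan_petaflops / titan_megawatts
print(f"Required gain at 20 MW: ~{(1000 / 20) / titan_pf_per_mw:.1f}x")  # ~20.5x
```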

This isn't a new initiative for the DoE. AMD was awarded a $32 million grant back in 2014 to research exascale computing. There's also competition from China to consider. The two fastest supercomputers in the world reside in China (Tianhe-2 and Sunway TaihuLight). China has also promised to have a prototype of an exascale supercomputer ready before the end of 2017.

Matthew is PCMag's UK-based editor and news reporter. Prior to joining the team, he spent 14 years writing and editing content on our sister site Geek.com and has covered most areas of technology, but is especially passionate about games tech. Alongside PCMag, he's a freelance video game designer. Matthew holds a BSc degree in Computer Science from Birmingham University and a Masters in Computer Games Development from Abertay University.

Continued here:

DoE Awards $258M for Exascale Supercomputer Research | News ... - PCMag

Atos Reveals First Commercial ARM-Based Supercomputer – Top500 – TOP500 News

On the opening day of the ISC High Performance conference, Atos announced the Bull Sequana X1310, an ARM-based variant of the company's Sequana X1000 supercomputer line.

Bull Sequana is Atos's flagship HPC blade platform, which up until now was powered primarily by Intel x86 silicon, either Xeon or Xeon Phi processors. Blade options for NVIDIA GPU or Xeon Phi coprocessors are also available. The most distinctive feature of the platform is the Bull eXascale Interconnect (BXI), a proprietary high-performance network designed for massive parallelism.

The new Sequana X1310 blade comprises three compute nodes, each outfitted with Cavium's ThunderX2 processors, the chipmaker's second-generation ARMv8 server chip. The new system will be available in the second quarter of 2018.

The addition of an ARM blade product places Atos in rare company. Penguin Computing also announced its own ThunderX2-powered cluster platform today. That product, known as the Tundra ES Valkre, can be ordered now and will ship in the third quarter of 2017. Going further back, E4 Computer Engineering, an Italian computer-maker, started offering ThunderX-based clusters back in 2015, under its ARKA brand. Those first generation Cavium chips could be paired with GPUs for additional computational horsepower.

Other OEMs also appear to be moving toward commercial offerings. Cray delivered an ARM-based supercomputer, known as Isambard, to the GW4 HPC alliance in the UK earlier this year. That system is supposedly based on Cray's CS400 cluster platform, but the company has yet to announce any product plans for the ARM variant. Lenovo, HPE, Dell, Eurotech and Cirrascale have also been fiddling with ARM servers for the HPC market, and a bunch of prototypes have been constructed based on either Cavium or Applied Micro chips.

For its part, Atos has been involved in ARM-powered HPC for a few years now. One of the early systems built for the Mont-Blanc exascale research project was based on an ARM-based prototype of a Bull blade. In the third phase of the project, Atos is supplying a more advanced platform, which will be the basis of the Bull Sequana X1310 product that will ship next year.

The original premise of bringing ARM into the HPC ecosystem is its energy efficiency. The architecture's energy-sipping RISC design has certainly served it well for the mobile and embedded computing space, where minimizing the power draw is a critical factor. But it remains to be seen whether a 64-bit ARM architecture with more performant behavior can exhibit the same sort of efficiency relative to a conventional x86 chip.

The less-talked-about goal for injecting ARM into the HPC space (and the broader server market in general) is to offer an alternative to Intel and its dominant x86 Xeon product line. ARM's most obvious advantage here is the ability of multiple vendors to license the chip and construct an array of different implementations targeted to specific types of workloads.

At some point, we may see Atos and other OEMs licensing the ARMv8-A Scalable Vector Extension (SVE) architecture and building a supercomputer based on this much more powerful ARM variant. This is the strategy Fujitsu has undertaken for its Post-K exascale supercomputer.

With Atos and Penguin now testing the waters with the Cavium ThunderX2 in commercial products, we may soon see other HPC server-makers jumping in as well. Watch this space.

View original post here:

Atos Reveals First Commercial ARM-Based Supercomputer - Top500 - TOP500 News

Cray Awarded $18M Supercomputer Contract from New Zealand’s NIWA – HPCwire (blog)

SEATTLE, June 19, 2017 – Global supercomputer leader Cray Inc. today announced the company has been awarded a contract with the National Institute of Water and Atmospheric Research (NIWA) valued at more than $18 million to provide NIWA and its partner, the New Zealand eScience Infrastructure (NeSI), with three Cray supercomputers: two Cray XC50 supercomputers and a Cray CS400 cluster supercomputer.

Headquartered in Auckland, NIWA is New Zealand's largest and preeminent provider of climate and atmospheric research, and freshwater and ocean science. NIWA's mission is to enhance the economic value and sustainable management of New Zealand's aquatic resources and environments, to provide understanding of climate and the atmosphere, and increase the resilience to weather and climate hazards to improve the safety and wellbeing of New Zealanders. Also hosted in Auckland, NeSI is a collaborative partnership between the University of Auckland, the University of Otago, Landcare Research, and NIWA, which delivers supercomputing services to researchers nationally.

The new Cray systems will be used for climate research, numerical weather prediction, data analytics, and general scientific research in a range of fields including computational chemistry, engineering, and biomedicine. The systems will be located at a NIWA facility in Wellington and at the University of Auckland.

"Our new Cray supercomputers will enable our scientists, including the largest team of weather and climate scientists in the country, to provide better information on hugely important issues, such as how climate change will affect New Zealand," said NIWA Chief Executive John Morgan. "The ability of the new Cray systems to process vast amounts of data in very short spaces of time will also enable us to build more precise forecasting tools to help farmers and environmental managers make more informed decisions using the best information available."

"Working with our partners, NeSI and Cray designed this new platform to power our research system in approaching national grand challenges and discovery science goals," said NeSI Director Nick Jones. "The breadth of the Cray platform is impressive, allowing NeSI to broaden the services we offer. We're looking forward to sharing Cray's impressive technologies with New Zealand's scientists, underpinning their important work in managing our rich and unique ecology, planning and responding to natural disasters, exploring the early universe, and discovering the inner workings of biological systems."

Cray continues to strengthen its leadership position in the weather forecasting and climate research communities as an increasing number of the world's leading centers rely on Cray supercomputers and storage systems to run their complex meteorological models. More than two thirds of the World Meteorological Organization's Long Range Modelling Centers run Cray supercomputers for numerical weather prediction, and NIWA is the latest environmental science organization to deploy Cray systems for climate research and numerical weather prediction.

"NIWA and NeSI are taking significant steps forward in advancing their scientific computing capabilities, and we are honored Cray supercomputers will power their wide array of weather, climate, and research models," said Peter Ungaro, president and CEO of Cray. "Producing more accurate weather forecasts today requires the ability to process and analyze an ever-increasing amount of data in a challenging workflow, and our commitment to building powerful and reliable tightly-integrated supercomputers is reflective of our leadership position in this space."

Consisting of products and services, the multi-year contract is valued at more than $18 million USD. The systems are expected to be put into production in early 2018.

For more information on the Cray XC supercomputers and the Cray CS series of cluster supercomputers, please visit the Cray website at http://www.cray.com.

About Cray Inc. Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world's most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray's Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market's continued demand for realized performance. Go to http://www.cray.com for more information.

Source: Cray Inc.

See more here:

Cray Awarded $18M Supercomputer Contract from New Zealand's NIWA - HPCwire (blog)

Lenovo and Intel team up for huge new super-computer – ITProPortal

Lenovo has revealed what it says is part of the next generation of supercomputers.

At the International Supercomputing Conference in Frankfurt, the company confirmed it has completed the delivery and implementation of the world's largest Intel-based supercomputer at the Barcelona Supercomputing Center (BSC).

Called MareNostrum 4, the 11.1 petaFLOP supercomputer will be housed in the world's "most beautiful data centre" at the Chapel Torre Girona at the Polytechnic University of Catalonia in Barcelona. There it will be used to power a number of scientific investigations, ranging from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition.

The system is powered by more than 3,400 nodes of Lenovo's next-generation servers, featuring Intel Xeon scalable processors, interconnected with more than 60 kilometres of high-speed, Intel Omni-Path Technology 100 Gb/s network cabling.

The new system has already staked a claim to be one of the biggest in the world, currently listed at #13 on the TOP500 list, released today, and Lenovo says it will also continue to grow over time.

"From the lab to the factory, to the on-site implementation teams, the delivery of a system of this size and complexity demands a superior level of integration and skill," said Madhu Matta, VP & GM of High Performance Computing and Artificial Intelligence at Lenovo. "It requires a focus on a holistic customer experience that very few companies are capable of delivering."

View original post here:

Lenovo and Intel team up for huge new super-computer - ITProPortal

Super computer predicts how Premier League table will look after first five matches – Daily Star

A SUPER computer has predicted how the Premier League table will look after the first five matches.



20. Stoke - W0 D1 L4 PTS: 1

The Premier League fixture list was announced earlier this week.

And anticipation is already beginning to build among supporters ahead of the big kick-off on August 12.

The 2017/18 season promises to be one of the most competitive in living memory, with a number of top teams competing for the title.

And ahead of the campaign, the talkSPORT super computer has crunched the numbers and worked out how the table might look after the opening five games.

So where might your team place in the table after the first few weeks of action?

Click through the gallery above to see the Premier League predicted table after five matches.

Read more here:

Super computer predicts how Premier League table will look after first five matches - Daily Star

Blockchain based supercomputer project SONM hits ICO goal of $42 million – CryptoNinjas

SONM (Supercomputer Organized by Network Mining), the universal fog supercomputer powered by blockchain technology, has announced its Initial Coin Offering (ICO) has successfully reached its $42 million USD cap with 8,774 participants, closing just four days into the sale, which commenced July 15, 2017. SNM tokens are now listed on Chinese exchange HitBTC and EtherDelta, a smart-contract based exchange platform.

"With a renewed sense of vigor heightened by our community's strong demand for our tokens, the SONM team is excited to progress the project, which we believe will revolutionize the computing market."

Investors participated in the SONM ICO using ETH, BTC, Dash, and other major cryptocurrencies. A total of 331,360,000 SNM were minted in the ICO. Token creation is now permanently closed.

SONM's ICO included a progressive bonus structure for the first 80% of tokens sold. The funds raised in the crowdsale will be distributed as follows: 33% is reserved for marketing promotion, market growth, community, and expansion; 30% for research and development, including team expansion and advisers; 20% for the original SONM team; 7% for complementary technologies; 6% for technology infrastructure; and the remaining 4% for other indirect costs such as legal and office expenses.
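A quick check that the stated split of crowdsale funds accounts for the full 100% (category labels abbreviated from the list above):

```python
# Verify the stated allocation of SONM crowdsale funds sums to 100%.
allocation = {
    "marketing, market growth, community and expansion": 33,
    "research and development, team expansion and advisers": 30,
    "original SONM team": 20,
    "complementary technologies": 7,
    "technology infrastructure": 6,
    "legal, office and other indirect costs": 4,
}
print(sum(allocation.values()))  # 100
```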

SONM's Board of Advisors includes Lisk CEO and President of the Lisk Foundation Max Kordek, former Coinsetter and Cavirtex CEO Jaron Lukasiewicz, and ChronoBank CEO Sergei Sergienko.

By hybridizing fog computing with an open-source PaaS technology, the SONM platform will offer a full range of services, including app development, scientific calculations, website hosting, video game server hosting, machine learning for neural networks, video and CGI rendering, augmented reality location-based games, and video streaming services.

SONM also provides miners the ability to gain tokens efficiently by conducting calculations for all members of the network. Smart devices located anywhere in the world are able to participate in the fog network and sell computing power peer-to-peer through the SONM Application Pool.

More information is available in the SONM whitepaper.

Read more from the original source:

Blockchain based supercomputer project SONM hits ICO goal of $42 million - CryptoNinjas

US drops $258m on supercomputer development to chase down China – ZDNet

(Image: Top500)

The United States will spend $258 million over three years in an effort to develop a supercomputer capable of hitting one exaflop.

Secretary of Energy Rick Perry announced on Thursday that Washington would be awarding the money to AMD, Cray, HPE, IBM, Intel, and Nvidia to develop hardware, software, and applications.

The companies will be kicking in at least 40 percent of the costs, taking the total investment for the program to in excess of $430 million.

"Continued US leadership in high-performance computing is essential to our security, prosperity, and economic competitiveness as a nation," Perry said.

The funding will come from the Department of Energy's Exascale Computing Project (ECP) and falls under the PathForward program, with the goal to create a one-exascale system by 2021.

"The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand," ECP director Paul Messina said.

"It is essential that private industry play a role in this work going forward: Advances in computer hardware and architecture will contribute to meeting all four challenges."

In recent years, a pair of Chinese supercomputers have held the top two spots in the Top500 supercomputer list.

Both computers are run by the Chinese National Supercomputing Center, with the Sunway TaihuLight machine in Wuxi rated at 93 petaflops, and Tianhe-2 in Guangzhou claiming 34 petaflops.

This did not stop the US Department of Energy claiming a form of leadership in a statement.

"While the US has five of the 10 fastest computers in the world, its most powerful -- the Titan system at Oak Ridge National Laboratory -- ranks third behind two systems in China," it said.

"The US retains global leadership in the actual application of high-performance computing to national security, industry, and science."

Not to be left out, the Japanese National Institute of Advanced Industrial Science and Technology intends to create a 130-petaflop computer for AI development.

It is expected Japan will spend approximately 19.5 billion yen on the computer.

Originally posted here:

US drops $258m on supercomputer development to chase down China - ZDNet

A Supercomputer Has Just Created The Biggest Virtual Universe Ever – Wall Street Pit

In a study recently published in the journal Computational Astrophysics and Cosmology, researchers from the University of Zurich report that with the help of a large supercomputer, they have been able to simulate how our Universe was formed (a total of 25 billion virtual galaxies), and they were able to do it in just 80 hours, making use of about 2 trillion digital particles in the process.
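Those two headline numbers give a sense of the simulation's resolution; the figure below is just the average implied by dividing one by the other, not a number reported in the study.

```python
# Average particle count per simulated galaxy implied by the figures above.
particles = 2_000_000_000_000   # ~2 trillion digital particles
galaxies = 25_000_000_000       # ~25 billion virtual galaxies

print(f"~{particles / galaxies:.0f} particles per galaxy on average")  # ~80
```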

To make the simulation possible, the researchers used a code referred to as PKDGRAV3 which they specifically developed for use with the superior processing power and memory of a supercomputer. For this particular purpose, they used the Piz Daint supercomputer at the Swiss National Computing Center (CSCS).

Owing to the extreme precision of their calculation that featured how dark matter fluid might have evolved under its own gravity, the researchers were able to simulate the formation of small concentrations of matter called dark matter halos. This is crucial because scientists believe that galaxies like our own Milky Way Galaxy were formed within such halos.

There was nothing easy about arriving at those calculations, especially because they were dealing with dark matter, and they had to incorporate its potential effects and influence on other parts of the Universe. As Professor Joachim Stadel, one of the study's co-authors, told Universe Today, part of the task done in Barcelona under the direction of Pablo Fossalba and Francisco Castander also involved integrating galaxy features such as their expected colors, spatial distribution and the emission lines.

The virtual universe created is meant to help guide and calibrate experiments being conducted on the Euclid satellite, scheduled for launching in 2020 with the primary mission of exploring the dark side of our Universe.

Based on the little we know, nearly 95% of our Universe is comprised of dark material: 23% dark matter and 72% dark energy. Dark here means literally dark. We can't really see it, and its existence can only be inferred through indirect observation and its effects on surrounding observable matter. In the case of the Euclid satellite, researchers will be measuring the distortions that result as light emanating from countless galaxies is deflected by the presence of dark matter.

Through the data collected, it is hoped that Euclid will be able to provide new information that can help expand our understanding of the Universe: its history, how it evolved into what it is today, and what it will be like in the future. Maybe it can also lead to new discoveries that may refine or alter some of the physics models we know today, such as Einstein's Relativity Theory. Helping discover new types of particles will be an awesome bonus too.

Read the original:

A Supercomputer Has Just Created The Biggest Virtual Universe Ever - Wall Street Pit