Niwa's number-crunching supercomputer gets an $18 million upgrade … – Stuff.co.nz

MATT STEWART

Last updated 13:07, August 13 2017

Niwa's supercomputer - FitzRoy - has been involved in some of NZ's most important climate forecasting.

It's the data-crunching behemoth that helped forecast a scorching new climate for Wellington and Wairarapa by 2090, but Niwa's supercomputer - FitzRoy - is retiring and about to be usurped by a model many times more powerful.

The 18-tonne computer is housed in a specially constructed room at the National Institute of Water and Atmospheric Research base at Greta Point on the edge of Wellington Harbour, and is designed to withstand severe earthquakes, tsunami and fire.

Installed in 2010, the supercomputer has helped drive some of our most important climate forecasting, including two years processing data to create ultra-long-range models that predict a blazing, parched Wairarapa and Sydney-style heat for the capital by 2090.

Niwa high-performance computing systems engineer Aaron Hicks with the retiring supercomputer, dubbed FitzRoy.

Niwa's chief climate, atmosphere and hazards scientist Sam Dean said FitzRoy allows scientists to do high-resolution climate modelling that covers the whole country - a situation unique in the world.

"We don't just run weather models - rain never did anything until it hit the ground or landed on a person's head - what we do is we join other models to that weather model."

Niwa chief climate, atmosphere and hazards scientist Sam Dean said that, while it will take time for the science to catch up with the new supercomputer, it will expand the institute's vision.

This includes hydrological modelling keeping tabs on 66,000 waterways, as well as storm surges, ocean levels and wave action that is fused with weather forecasting, allowing meteorologists to forecast hazardous events like flooding or high waves in Cook Strait.

"It's a little bit like playing God - these models mimic everything the earth does - thathas a beauty whichis quite amazing and that really inspires me," Dean said.

But just as you might get a new PC after seven years, a new - as yet nameless - $18 million suite of three supercomputers (equivalent to about 16,000 laptops) is set to take FitzRoy's place come November.


One of these - the Cray XC50 - has a theoretical peak of about 1.4 quadrillion calculations per second (1.4 petaflops) and will keep Niwa in possession of one of the world's top 500 supercomputers.

Another backup disaster-recovery machine containing 1,340 hard drives will be taken to Auckland with all of FitzRoy's existing data on what Dean describes as "NZ's biggest USB stick", because sending the data via the network would take four years.
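The four-year figure is easy to sanity-check with a rough transfer-time calculation. The archive size and sustained link speed below are illustrative assumptions (neither number appears in the article), but they show why physically trucking the drives to Auckland wins out.

```python
# Back-of-the-envelope: moving a multi-petabyte archive over a wide-area link.
# Both inputs are assumed, illustrative values, not NIWA's actual figures.
TOTAL_DATA_TB = 5000           # assumed size of FitzRoy's archive, in terabytes
LINK_MBIT_PER_S = 400          # assumed sustained wide-area throughput

bits_total = TOTAL_DATA_TB * 1e12 * 8            # terabytes -> bits
seconds = bits_total / (LINK_MBIT_PER_S * 1e6)   # transfer time in seconds
years = seconds / (365 * 24 * 3600)

print(f"~{years:.1f} years to move {TOTAL_DATA_TB} TB at {LINK_MBIT_PER_S} Mbit/s")
# ~3.2 years -- the same order of magnitude as the four years quoted.
```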

At 23 tonnes, the entire upgrade suite will be about 13 times more powerful than FitzRoy, while using about two-thirds of the electricity, and will expand Niwa's forecasting to a higher resolution, all while giving a better handle on forecast precision.
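Taken together, those two figures imply a large jump in energy efficiency. A minimal calculation using only the numbers quoted above:

```python
# Performance-per-watt gain implied by the stated comparison with FitzRoy.
perf_ratio = 13.0        # about 13 times more powerful, per the article
power_ratio = 2.0 / 3.0  # about two-thirds of the electricity

print(f"~{perf_ratio / power_ratio:.1f}x more computation per unit of electricity")
# ~19.5x
```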

"Every time we've bought a new supercomputer it's challenged our concepts of what's possible- it takes us a couple of years for our science to catch up with the technology - it expands our vision," Dean said.

-Stuff


Premier League 2017/18: Final table predicted by super computer … – Daily Star

THE Premier League returns this week, but who will be crowned champions next May? A super computer may have the answer.

Super computer predicts final 2017/18 Premier League table: where will your team finish?

English football fans will be delighted to see top flight action return after the long summer break.

Following Arsenal's clash with Leicester last night, there are some mouthwatering matches to look forward to over the course of the weekend.

Liverpool travel to Vicarage Road for the first of Saturday's matches, before Manchester City face newly-promoted Brighton in the late kick-off.

We'll also get to see Manchester United in action against West Ham tomorrow at Old Trafford.

There's so much to look forward to this year, with the 2017/18 season promising to be one of the most competitive in living memory.

And, earlier this week - before any Premier League games had taken place - talkSPORT fired up their super computer to predict where each team will finish in the final standings.

So where does it reckon your side will end up?

Click through the gallery above to see the super computer's predicted final Premier League table.


Cheyenne Supercomputer is named after Cheyenne for a special reason – KGWN

CHEYENNE, Wyo. -- The "Cheyenne" supercomputer has made its debut as part of the city's 150th celebrations, and its name was chosen for a very generous reason. The people who have been working on the facility felt it was appropriate to mark Cheyenne's sesquicentennial by naming the computer after the city, not only because it is home to the facility, but also because of the hospitality and appreciation they have received since starting the project in 2010.

It took 7 years to design the facility, 3 years to procure the system, and about 4 to 5 years for it to become operational. The open house is Saturday, August 12, from 10 am to 4 pm; there will be interactive events for people of all ages, and it's free to see the computing system.

If you can't make it to the open house, tours will be held every Friday afternoon; you can either drop in or specially request a tour. The tours do, however, depend on what you want to see: there are weather tours, engineering tours, or tours simply to view the facility.

The supercomputing facility is dedicated to atmospheric science and geoscience. It is the only one like it in the country; others deal with things like medical research or aircraft design. This one will study weather and pollution, and will work especially closely on wildfires.

For more information, and to see how much Gary New appreciates the city of Cheyenne, check out the interview from the morning show.


New Supercomputer Receives Dedication Ceremony – Wyoming Public Media

The new supercomputer known as Cheyenne was officially dedicated at a ceremony Tuesday in the city it was named after. Governor Matt Mead, University of Wyoming President Laurie Nichols and Cheyenne Mayor Marian Orr were all in attendance, among other state leaders. Tony Busalacchi is the president of the University Corporation for Atmospheric Research, or UCAR. He said Cheyenne is the 22nd most powerful in the world and three times stronger than the Yellowstone supercomputer it's replacing.

He said having such powerful computers in Wyoming has already had a positive impact on the state's economy.

"It already is helping to diversify the economy and the talent base in the state. The fact that Cheyenne and the Wyoming supercomputing center is there has contributed to the growth of high-tech companies in Cheyenne and literally the creation of hundreds of new high-paying jobs," said Busalacchi.

He said it's also a useful tool for diversifying the state's economy - for instance, by researching carbon capture technology.

"It's the technology to take carbon out of the atmosphere. And then how do you sequester it in the deep earth? And what do you need to know about the subsurface geology of the earth?" he asked. "These are all high-performance computing grand challenges and require people from across many different disciplines to work together."

Busalacchi said scientists will also be able to use the supercomputer to predict weather three months in advance, instead of only one week in advance, something important for national security and many industries.


New supercomputer welcomed during Cheyenne’s 150th birthday … – Wyoming Tribune

CHEYENNE The new Cheyenne supercomputer is the 22nd-fastest in the world and can help scientists predict the weather days, or even months, in advance.

The connection between the new supercomputer and city of Cheyenne became stronger Tuesday during a program at the National Center for Atmospheric Research-Wyoming Supercomputing Center.

The center is located near the western edge of Cheyenne at 8120 Veta Drive in a Cheyenne LEADS-owned business park.

The $30 million Cheyenne supercomputer has been up and running at the local NCAR facility since January. But on Tuesday, it was inaugurated as part of the city's 150th anniversary activities.

The supercomputer is named for both the city and the Native American tribe of the same name.

The new supercomputer has three times the power of the original Yellowstone supercomputer and is three times as efficient, Gov. Matt Mead said.

The Yellowstone supercomputer was installed five years ago, when the NCAR facility was built here. Since then, it has been available for use by University of Wyoming researchers and students and other scientists.

UW researchers have led nearly 80 scientific projects on the Yellowstone supercomputer, UW President Laurie Nichols said, noting the university's relationship is ongoing.

"UW is so fortunate to be part of this incredible partnership," she said.

The Cheyenne supercomputer can help diversify the economy by attracting various companies to the state, Mead said.

"When we think about the opportunity to recruit companies here because we can point to this as an example of the direction Wyoming is going, that is incredible," Mead said.

National Center for Atmospheric Research Director James Hurrell said supercomputer models can help predict snowstorms.

"The research that we conduct as a community, powered through this computing center, relates to every sector of our economy, every part of our country and, indeed, to the entire Earth system," Hurrell said.

"Cheyenne (the supercomputer) will help us push the boundaries of science even further."

It will give scientists a much better understanding of how solar disturbances affect the planet, he added. This understanding can lead to ways to better protect satellites, communication systems and power grids from solar storms.

The Yellowstone supercomputer is still operating at the NCAR facility. But the new supercomputer will replace it completely in several months.

Cheyenne Mayor Marian Orr said she likes to tell people outside of the Capital City about its supercomputer connections.

"I do enjoy the look of surprise when I meet with visitors, not only from out of state, but around the world, when they ask what Cheyenne is home to.

"When I say that we're home to one of the world's largest supercomputers, they're amazed," she said. "I love the look of disbelief and awe."

"The research and information produced here will give resource managers and policy experts the knowledge they need to best protect and advance us in the next 150 years," Orr said.


ORNL starts installing Summit, could be world’s most powerful supercomputer – DatacenterDynamics

The Oak Ridge National Laboratory has begun to install what could become the world's most powerful supercomputer - Summit.

The IBM-developed 10MW system will take six months or more to install, and will then be available to Department of Energy researchers and a few universities, before becoming generally available to scientific users by January 2019.

Summit installation. Source: ORNL

Top500 estimates the supercomputer will enter its ranking list by next June, at which point it has a strong chance of beating out the current world's most powerful supercomputer, TaihuLight, for the top spot.

TaihuLight currently has a peak performance of 125.4 petaflops and a Linpack result of 93 petaflops. Summit, meanwhile, is expected to be five to 10 times as powerful as ORNL's current top system, Titan.

Titan is the fourth most powerful computer on Top500's list, with a peak performance of 27.1 petaflops and a Linpack result of 17.6 petaflops. Summit will comprise approximately 4,600 nodes, each with six 7.5-teraflop Nvidia V100 GPUs and two IBM Power9 CPUs, which Top500 estimates will lead to an aggregate peak performance well over 200 petaflops.
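That estimate follows directly from the quoted node counts. A minimal sketch of the arithmetic - the per-GPU figure comes from the article, while the per-Power9 number is an assumed ballpark added here for illustration:

```python
# Rough peak-performance estimate for Summit from the quoted node counts.
nodes = 4600
gpus_per_node, tflops_per_gpu = 6, 7.5   # six V100s per node at 7.5 TF each (from article)
cpus_per_node, tflops_per_cpu = 2, 0.5   # the per-Power9 figure is an assumption

gpu_pf = nodes * gpus_per_node * tflops_per_gpu / 1000
cpu_pf = nodes * cpus_per_node * tflops_per_cpu / 1000
print(f"GPUs ~{gpu_pf:.0f} PF + CPUs ~{cpu_pf:.0f} PF = ~{gpu_pf + cpu_pf:.0f} PF peak")
# GPUs alone contribute ~207 PF -- already "well over 200 petaflops".
```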

The Department of Energy awarded $325 million in late 2014 to build both Summit for ORNL and the slightly less powerful Sierra for the Lawrence Livermore National Laboratory in California.

Sierra will be used by the National Nuclear Security Administration to ensure the safety, security and effectiveness of the nation's nuclear deterrent without testing.

After Summit's launch, the next big US supercomputer for ORNL is likely to be an exascale machine - a target that the US, China and EU are all racing towards.

In June, the DOE announced it would award $258 million in funding to AMD, Cray, HPE, IBM, Intel and Nvidia over three years as part of the new PathForward program, itself part of DOE's Exascale Computing Project (ECP).

"Continued US leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," US Energy Secretary Rick Perry said at the time.

The Chinese government, meanwhile, is set to develop a prototype of an exascale computer by the end of this year.

"A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country's first petaflop computer Tianhe-1, recognized as the world's fastest in 2010," Zhang Ting, application engineer at the Tianjin-based National Supercomputer Center, told state publication Xinhua in June.

The EU plans to build its own exascale prototype based on the ARM architecture, built by French IT giant Atos.


Premier League 2017/18 table: Super computer predicts end of season final standings – talkSPORT.com

Wednesday, August 9, 2017

In just a couple of days' time, Premier League football makes its long awaited return.

And to add to the already growing excitement ahead of kick off on Friday, talkSPORT.com has turned to the Super Computer.

The data has been fed and a predicted 2017/18 Premier League table has reached us, with it suggesting that Chelsea will have their crown removed...

Check out how the super computer has predicted the final standings for the incoming season by scrolling through the gallery above.

The table, of course, should be taken with a pinch of salt. As we know, anything can happen throughout the season, but it is always fun to speculate!

Where do you think your team will finish? Share your thoughts by leaving a comment below...


Nimbus Data Unveils 50TB Flash Drive, on Path to 500TB – TOP500 News

Flash storage specialist Nimbus Data has announced ExaDrive, an SSD that offers more capacity than any commercial hard disk drive (HDD) available today.

ExaDrive is available in 25TB and 50TB capacities and is designed to be a drop-in replacement for nearline HDDs. With a 3.5-inch form factor and support for a standard SAS interface, the new offering represents the densest spinning-disk alternative currently available on the market, and the only SAS-based SSD offering this level of capacity.

Nimbus Data says ExaDrive offers five times the density of nearline disk drives (based on 10TB HDDs), while drawing the same amount of power. The technology also provides inline deduplication and compression, which delivers a 3:1 data reduction, multiplying the capacity benefit by an additional 3x.
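Taken at face value, those claims compound. A quick sketch using only the figures quoted above (the 3:1 reduction is, of course, workload-dependent):

```python
# Effective capacity implied by the density and data-reduction claims.
ssd_capacity_tb = 50       # largest ExaDrive model
hdd_capacity_tb = 10       # nearline HDD used as the comparison point
data_reduction = 3         # claimed 3:1 deduplication + compression

raw_density_gain = ssd_capacity_tb / hdd_capacity_tb        # 5x per 3.5-inch slot
effective_capacity_tb = ssd_capacity_tb * data_reduction    # ~150 TB usable
effective_gain = raw_density_gain * data_reduction          # ~15x overall

print(raw_density_gain, effective_capacity_tb, effective_gain)   # 5.0 150 15.0
```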

Other claims include 100 times more write IOPS (500 times for reads), 98 percent lower latency, and 65 percent less cooling - again, all compared to high-capacity hard drives. Due to the superior storage performance, the company points out that applications will enjoy better CPU utilization with ExaDrive, since less time will be spent waiting for I/O.

Longevity is often an issue with flash-based storage, but Nimbus maintains its product supports up to 10 years of write endurance (with a write guarantee of 5 years), which would put it on par with HDDs. Thanks to the lack of moving parts, the drive supports up to 2 million hours of MTBF, which the company claims is 50 percent longer than a typical nearline HDD. All of that, despite the fact that Nimbus relies on the more error-prone MLC NAND technology for these drives.

The higher capacity and robustness of the ExaDrive is made possible by the use of multiple processors and a software-defined architecture. The Nimbus press release describes the rationale for such a design as follows:

Conventional SSDs are based on a single flash controller. As flash capacity increases, this monolithic architecture does not scale, overwhelmed by error correction operations and the sheer amount of flash that must be managed. In contrast, ExaDrive is based on a distributed multiprocessor architecture. Inside an ExaDrive-powered SSD, multiple ultra-low power ASICs exclusively handle error correction, while an intelligent flash processor provides wear-leveling and capacity management in software.

The release goes on to say that the design will enable SSDs as large as 500 TB by the year 2020, achieving up to 600 petabytes in a single rack. Not surprisingly, Nimbus is targeting application workloads demanding high-capacity distributed storage that needs to scale quickly. These include cloud computing, digital imaging, technical computing, tactical environments, and artificial intelligence.

The ExaDrive is available today via Nimbus partners Viking Technology and SMART Modular Technologies. Viking offers the drive via its UHC-Silo product, which comes in 25TB and 50TB sizes. SMART offers those same capacities in its recently announced Osmium Drive. ExaDrive reference designs are available for other interested parties.

Image: 50TB ExaDrive. Source: Nimbus Data


New supercomputer seen as big boost for science, Wyoming – ABC News

One of the world's fastest supercomputers is helping scientists better understand the sun's behavior and predict weather months in advance but also got touted Tuesday as an important tool for diversifying Wyoming's economy, which has seen better days.

The new supercomputer named Cheyenne, located at a National Center for Atmospheric Research facility on the outskirts of Wyoming's capital city, is the world's 22nd fastest. Put to work earlier this year, Cheyenne is three times faster yet three times more efficient than its predecessor, a machine called Yellowstone.

The NCAR-Wyoming Supercomputing Center housing both machines is an important tool for recruiting tech businesses and keeping students interested in computers from leaving Wyoming to seek their fortunes elsewhere, Gov. Matt Mead said at a dedication for the new machine Tuesday.

The facility also is an important tool for research into hydrology, ways to trap carbon dioxide emitted by coal-fired power plants and other science important to Wyoming, he said.

"What it shows in Wyoming is that we're not only trying to broaden and diversify the economy, we care about the results," Mead said.

Wyoming produces about 40 percent of the nation's coal. In 2016, the U.S. coal industry had its worst year in four decades amid competition from cheaper and cleaner-burning natural gas as utilities' preferred fuel for generating electricity. Meanwhile, renewables such as wind and solar are increasingly competitive.

The coal downturn has hit Wyoming's economy hard. But the NCAR-Wyoming Supercomputing Center completed five years ago has helped attract other types of business including a Microsoft data center just across the street, Mead said.

"For me, when we think about the economic benefits, they're tremendous. When we think about the pride in Wyoming citizens, it's tremendous," Mead said.

Early work on the new supercomputer includes modeling of space weather flares ejected by the sun that can affect satellites, communications and even the power grid. Scientists using the machine also are developing ways to better predict weather up to three months out, said University Corporation for Atmospheric Research President Antonio Busalacchi.

"This timescale is critical for businesses, agriculture and for our military, who need reliable forecasts of longer-term weather forecasts," Busalacchi said.

Cheyenne and Yellowstone will operate side-by-side until the National Center for Atmospheric Research retires Yellowstone later this year.

Follow Mead Gruver at https://twitter.com/meadgruver


IBM Pushes Envelope in Deep Learning Scalability – TOP500 News

This week IBM demonstrated software that was able to significantly boost the speed of training deep neural networks, while improving the accuracy of those networks. The software achieved this by dramatically increasing the scalability of these training applications across large numbers of GPUs.

In a blog post, IBM Fellow Hillery Hunter, director of the Accelerated Cognitive Infrastructure group at IBM Research, outlined the motivation for the work:

For our part, my team in IBM Research has been focused on reducing these training times for large models with large data sets. Our objective is to reduce the wait-time associated with deep learning training from days or hours to minutes or seconds, and enable improved accuracy of these AI models. To achieve this, we are tackling grand-challenge scale issues in distributing deep learning across large numbers of servers and GPUs.

The technology they developed to accomplish this, encapsulated in their Distributed Deep Learning (DDL) software, delivered a record 95 percent scaling efficiency across 256 NVIDIA Tesla P100 GPUs using the Caffe deep learning framework for an image recognition application. That exceeds the previous high-water mark of 89 percent efficiency achieved by Facebook for training a similar network with those same GPUs on Caffe2.
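Scaling efficiency here is simply measured speedup divided by the ideal linear speedup, so the two percentages translate into effective speedups as follows (a sketch using only the numbers quoted above):

```python
# What 95% vs. 89% scaling efficiency means in effective speedup terms.
def effective_speedup(efficiency: float, n_gpus: int) -> float:
    """Speedup over one GPU implied by a given parallel efficiency."""
    return efficiency * n_gpus

print(effective_speedup(0.95, 256))   # IBM DDL:  ~243x on 256 GPUs
print(effective_speedup(0.89, 256))   # Facebook: ~228x on the same GPU count
```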

The quality of the training was also improved by the DDL software, which delivered an image recognition accuracy of 33.8 percent for a network trained with a ResNet-101 model on a 7.5-million image dataset (ImageNet-22k). The previous best result of 29.8 percent accuracy was achieved by Microsoft in 2014. But in the case of the IBM training, its level of accuracy was achieved in just 7 hours of training, while the Microsoft run took 10 days.

It should be noted that the Microsoft training was executed on a 120-node HP Proliant cluster, powered by 240 Intel Xeon E5-2450L CPUs, while the IBM training was executed on a 64-node Power8 cluster (Power Systems S822LC for HPC), equipped with 256 NVIDIA P100 GPUs. Inasmuch as those GPUs represent more than two petaflops of single precision floating point performance, the IBM system is about two orders of magnitude more powerful than the commodity cluster used by Microsoft.

That doesn't negate the importance of the IBM achievement. As was pointed out by Hunter in her blog, scaling a deep learning problem across more GPUs is made much more difficult as these processors get faster, since communication between them and the rest of the system struggles to keep pace as the computational power of the graphics chips increases. She describes the problem as follows:

[A]s GPUs get much faster, they learn much faster, and they have to share their learning with all of the other GPUs at a rate that isn't possible with conventional software. This puts stress on the system network and is a tough technical problem. Basically, smarter and faster learners (the GPUs) need a better means of communicating, or they get out of sync and spend the majority of time waiting for each other's results. So, you get no speedup - and potentially even degraded performance - from using more, faster-learning GPUs.

At about 10 single precision teraflops per GPU, the NVIDIA P100 is one of the fastest GPUs available today. The NVIDIA V100 GPUs, which are just entering the market now, will offer 120 teraflops of mixed single/half precision performance, further challenging the ability of these deep learning applications to scale efficiently.

The IBM software is able to overcome the compute/communication imbalance to a great extent by employing a multi-dimensional ring algorithm. This allows communication to be optimized based on the bandwidth of each network link, the network topology, and the latency for each phase. This is accomplished by adjusting the number of dimensions and the size of each one. For server hardware with different types of communication links, the software is able to adjust its behavior to take advantage of the fastest links in order to avoid bottlenecks in the slower ones.
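For readers unfamiliar with ring-style reduction, the sketch below simulates a plain one-dimensional ring all-reduce in Python. It is only a conceptual illustration of the family of algorithms DDL builds on; IBM's actual implementation is multi-dimensional and tuned to link bandwidth, topology and latency as described above, and none of the code here is taken from DDL.

```python
def ring_allreduce(grads):
    """Toy 1-D ring all-reduce: every worker ends up with the element-wise sum."""
    n = len(grads)
    data = [list(g) for g in grads]                 # each worker's local copy
    size = len(data[0])
    bounds = [(c * size // n, (c + 1) * size // n) for c in range(n)]

    # Scatter-reduce: after n-1 steps, worker i owns the full sum of chunk (i+1) % n.
    for step in range(n - 1):
        msgs = []
        for i in range(n):                          # snapshot all sends first,
            c = (i - step) % n                      # as if they happen at once
            lo, hi = bounds[c]
            msgs.append((c, data[i][lo:hi]))
        for i, (c, payload) in enumerate(msgs):
            lo, hi = bounds[c]
            dst = (i + 1) % n
            for k, v in zip(range(lo, hi), payload):
                data[dst][k] += v

    # All-gather: circulate each completed chunk around the ring.
    for step in range(n - 1):
        msgs = []
        for i in range(n):
            c = (i + 1 - step) % n
            lo, hi = bounds[c]
            msgs.append((c, data[i][lo:hi]))
        for i, (c, payload) in enumerate(msgs):
            lo, hi = bounds[c]
            data[(i + 1) % n][lo:hi] = payload

    return data

# Four workers, eight gradient elements each: every worker should end up
# holding the element-wise sum 1 + 2 + 3 + 4 = 10 in every position.
grads = [[float(w + 1)] * 8 for w in range(4)]
assert all(v == 10.0 for row in ring_allreduce(grads) for v in row)
```

Because each worker exchanges data only with its ring neighbours, the total volume each worker sends stays roughly constant (about twice the gradient size) regardless of the number of workers; the multi-dimensional variant extends the same idea across several rings to better match the machine's fastest links.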

Even though this is still a research effort, the DDL software is going to be available to customers on a trial basis as part of IBM's PowerAI, the company's deep learning software suite aimed at enterprise users. DDL is available today in version 4 of PowerAI, and according to IBM, it contains implementations at various stages of development for Caffe, Tensorflow, and Torch.

An API has been provided for developers to tap into DDL's core functions. The current implementation is based on MPI - IBM's own Spectrum MPI, to be specific - which provides optimizations for the company's Power/InfiniBand-based clusters. IBM says you can also use DDL without MPI underneath if desired, but presumably your performance will vary accordingly. IBM is hoping that third-party developers will start using this new capability and demonstrate its advantages across a wider array of deep learning applications.


Best Raspberry Pi cases: Protect your tiny supercomputer – Pocket-lint.com

The Raspberry Pi is a marvel of modern computing. The English-made device packs some seriously impressive specs onto a board and even gives you change from £40. But where costs have been saved is in protection.

If you don't have a case on your Raspberry Pi, then all the components are left in the open and susceptible to damage. Fortunately, there are loads of choices out there when it comes to adding a protective layer to your beloved super-ish computer.

We've scoured the web to find what we believe to be some of the best, for the Model B and Zero, taking into account price, looks and how well it actually looks after your Pi.

We'll continually update this feature whenever we come across anything new, so make sure to keep checking back.

See current price on Amazon UK / Amazon US

It's always best to start with the official case, not just for the Raspberry Pi, but for other electronics including mobile phones. You know they'll fit your device perfectly and they're usually of a high quality.

That's certainly the case for this official case, designed specifically for the Raspberry Pi Model 3 B. It gives you access to all the ports on your Pi Model 3 B and looks rather fetching in its red and white colour finish, complete with Raspberry Pi logo.

If you'd rather have something a little more sleek and that can camouflage itself amongst your sea of home entertainment kit, Raspberry Pi also make an all-black case, again only available for the Model 3 B.

See current price on Amazon UK / Amazon US

If you want to show off your Raspberry Pi, all while keeping it protected, this clear acrylic case is worth your consideration. Its makers say it's easy to put the 9 layers of acrylic sheeting together, using four bolts.

It's available in a range of colours, offers access to all the ports, and a supplied Micro USB cable has an on/off switch on it, so you can turn your Pi on and off with ease.

See current price on Amazon UK / Amazon US

If it's a clear case you're after, but you want to fit it to your Pi in a matter of seconds, SB Components' case could be your best bet. It's made up of two pieces of injection-moulded ABS that clip together quickly and easily. It's available in a range of colours, provides access to all ports and can be used with any version of the Model B.

See current price on Amazon UK

OneNineDesign's case is a little different to your 'regular' case. It's available either as a standalone case, or as a bundle with a touchscreen included. The touchscreen version will set you back £90, but you get a 7-inch, high-quality screen as part of the package.

There are a number of operating systems available for the Pi which enable it to be run as a mini-computer, all of which can be controlled via the touchscreen.

See current price on Amazon UK

If it's serious protection you want for your Pi, GorillaPi wants to draw your attention to its tough black case. Compatible with all versions of the Model B, the GorillaPi case comprises two pieces of strengthened plastic, held together by four screws. Rest assured your Pi won't be able to move inside, while the rounded edges help to distribute any impact if you happen to drop it.

See current price on Amazon UK/Amazon US

Lego fans rejoice, for there is a case for your Pi designed to look like a large Lego brick. It's officially endorsed by Lego, so you know any bricks you have will attach to it. There are holes on the bottom so you can build a platform for your Pi, and studs on top for building on top. As we all know with Lego, the possibilities for the Pi-Blox case are only limited by your imagination.

See current price on Amazon UK/Amazon US

Let's not forget the Raspberry Pi Zero W. The Zero W has lower specs than the Model 3, but then it also costs a fraction of the price - in fact, it costs less than the official case! Raspberry Pi has adopted the same red and white colour finish as the Model 3 official case, and the Zero case comes with three different lids. These either completely cover the Pi Zero, offer access to the GPIO, or the Pi Camera (sold separately).

See current price on Amazon UK/Amazon US

Pimoroni has produced its own case that's compatible with both the Pi Zero and Zero W. It features a clear acrylic top, leaving the main board visible to admire. There are cut outs for all the ports and best of all, it's made in the UK.


ORNL Begins Construction of Summit Supercomputer – TOP500 News

Oak Ridge National Laboratory has begun to install Summit, the IBM-NVIDIA-powered system that is likely to become the most powerful supercomputer in the world when completed.

The news comes courtesy of Oak Ridge Today, which reported that the first cabinets for Summit arrived last Monday (July 31). According to ORNL spokesperson Morgan McCorkle, once the crates are unpacked, they will begin installing the internal computational and networking components and hook them into the power and cooling infrastructure at the Oak Ridge Leadership Computing Facility (OLCF).

Installation is expected to take six months or more, with the system expected to become generally available to scientific users by January 2019. However, select application developers at the Department of Energy and a handful of universities will get a crack at it well before that. McCorkle told TOP500 News that the pre-production Summit will be available via the Center for Accelerated Application Readiness, an early-access program designed to allow developers to port and optimize grand challenge codes for Summit's new CPU-GPU architecture.

All of that suggests that the system may not be up and running until well into 2018, and will not turn up in the TOP500 list until next June. At that point, absent another surprise from China, it still has an excellent chance of unseating the current supercomputing champ, TaihuLight. That system has a peak performance of 125.4 petaflops and a Linpack result of 93 petaflops. Later this year, China is expected to deploy Tianhe-2a, a supercomputer expected to deliver around 100 petaflops, although, as we reported back in January, that number could rise in concert with China's ambition to own the number one spot.

Officially, Summit is expected to be 5 to 10 times as powerful as Titan, ORNL's current top system. Titan is currently ranked as number four on the TOP500, with a Linpack mark of 17.6 petaflops (from 27.1 peak petaflops). Given that Summit will comprise approximately 4,600 nodes, each containing six 7.5-teraflop NVIDIA V100 GPUs and two IBM Power9 CPUs, its aggregate peak performance should be well north of 200 petaflops. The GPUs alone provide this level of performance.

Another possibility is that ORNL will run Linpack on a partially completed Summit in October or November, which at that point may be large enough to recapture the top supercomputing spot for the US. A possible glitch is that IBM has not officially launched its Power9 processor, and is not expected to do so until early 2018. But some number of chips will certainly be available before that, and, in fact, it's unlikely that IBM would be shipping crates of Power9 servers to Oak Ridge without their CPUs.

Regardless of who is at the top of the supercomputing heap, Summit will be a unique resource for the DOE and its research community. Besides providing unprecedented amounts of computational capacity for traditional HPC applications, it will offer the largest platform in the world for deep learning workloads. Assuming the system is configured as advertised, it will deliver something like 3.3 exaflops of deep learning performance (mixed 16/32-bit precision math). That's thanks to the Tensor Cores in the V100 GPUs, which were specifically bred to accelerate the type of matrix operations involved in this kind of software. As a result, Summit will be an exceptional resource for testing the limits of the neural network models used for deep learning.
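The 3.3-exaflop figure is just the V100's Tensor Core rate multiplied across the full machine; the 120-teraflop per-GPU number used below matches the V100 figure quoted in the IBM story earlier in this collection.

```python
# Deep learning (mixed-precision) throughput implied by the configuration.
nodes = 4600
gpus_per_node = 6
tensor_tflops_per_gpu = 120      # V100 Tensor Core rate, mixed 16/32-bit

total_exaflops = nodes * gpus_per_node * tensor_tflops_per_gpu / 1e6
print(f"~{total_exaflops:.1f} exaflops of mixed-precision performance")
# ~3.3 exaflops
```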

Summit is also the last stop on the way to exascale, at least for the gang at Oak Ridge. Given the cadence of supercomputer upgrades at the DOE, the next big deployment at ORNL will almost certainly be an exascale machine - perhaps the first in the US. Whether that turns out to be a future implementation of Summit's CPU-GPU architecture, or something else entirely, remains to be seen.

Image source: Oak Ridge Leadership Computing Facility (OLCF), distributed under Creative Commons license


Durham Uni builds supercomputer from secondhand parts – The Register

COSMA6 can be used for galactic simulations

Durham University has built itself a secondhand supercomputer from recycled parts and beefed up its contribution to DiRAC (distributed research utilising advanced computing), the integrated facility for theoretical modelling and HPC-based research in particle physics, astronomy and cosmology.

The Institute for Computational Cosmology (ICC) at Durham, which is in North East England, runs a COSMA5 system as its DiRAC contribution.

There are five DiRAC installations in the UK, which is a world leader in these HPC fields.

The Durham cluster is a COSMA5 system, which features 420 IBM iDataPlex dx360 M4 servers with 6,720 2.6GHz Intel Sandy Bridge Xeon E5-2670 CPU cores. There is 53.76TB of DDR3 RAM and Mellanox FDR10 InfiniBand in a 2:1 blocking configuration.
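The headline core count is consistent with a standard dual-socket configuration for those servers; a quick check (the sockets-per-node and cores-per-socket figures are assumed here, since the article only quotes totals):

```python
# Sanity check on COSMA5's 6,720-core figure.
servers = 420
sockets_per_server = 2     # assumed: dual-socket iDataPlex dx360 M4 nodes
cores_per_socket = 8       # assumed: 8-core Sandy Bridge Xeon E5-2670

print(servers * sockets_per_server * cores_per_socket)   # 6720
```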

It has 2.5PB of DDN storage with two SD12K controllers configured in fully redundant mode. It's served by six GPFS servers connected into the controllers over full FDR and using RDMA over the FDR10 network into the compute cluster. COSMA5 uses the GPFS file system with LSF as its job scheduler.

The ICC and DiRAC needed to strengthen this system and found that the Hartree Centre at Daresbury had a supercomputer it needed rid of. This HPC system was installed in April 2012 but had to go because Daresbury had newer kit.

Durham had a machine room with power and cooling that could take it. Even better, its configuration was remarkably similar to COSMA5.

So HPC, storage and data analytics integrator OCF, and server relocation and data centre migration specialist Technimove dismantled, transported, and rebuilt the machine at the ICC. The whole exercise was funded by the Science and Technology Facilities Council.

COSMA6 arrived at Durham in April 2016, and was installed and tested at the ICC. It now extends Durham's DiRAC system as part of DiRAC 2.5.

COSMA6 has 8,000 cores, and uses the Lustre file system with SLURM as its job submission system.

COSMA6 racks

Lydia Heck, ICC technical director, said: "While it was quite an effort to bring it to its current state, as it is the same architecture and the same network layout as our previous system, we expect this to run very well."

Durham now has both COSMA5 (6,500 cores) and COSMA6 (8,000 cores) contributing to DiRAC and available for researchers.

Find out how to access and use DiRAC here.


Supercomputer Theta, Open for Research – Machine Design

Supercomputer Theta is officially ready to operate alongside the IBM Blue Gene/Q supercomputer, Mira, at the Argonne Leadership Computing Facility (ALCF). The supercomputers are to be used exclusively for projects in engineering and research that require high computing power. They lead up to the opening of an even bigger supercomputer, Aurora.

The computers will be used to explore topics in climate science, particle accelerators, biological sciences, materials, transportation efficiency, chemistry, cosmology, and energy storage. Just last year, VERIFI was awarded 60 million core hours to work with Mira on creating and analyzing up to 100,000 engine simulations at a time.

Theta will be used to support several projects in the 2017-2018 DOE Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) programs. It contains more than 230,000 parallel cores and almost 700 TB of memory. With a performance of 9.65 petaflops, it can perform close to 10 quadrillion operations every second, about the same as Mira.
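Those figures hang together. A short cross-check (the per-core cycle rate in the comment is an assumed ballpark for a Xeon Phi core, not a number from the article):

```python
# Cross-checking Theta's quoted peak against its core count.
peak_pflops = 9.65
cores = 230_000

ops_per_second = peak_pflops * 1e15            # "close to 10 quadrillion"
gflops_per_core = ops_per_second / cores / 1e9
print(f"{ops_per_second:.3g} ops/s, ~{gflops_per_core:.0f} GF per core")
# ~42 GF per core, roughly what a Xeon Phi core manages at ~1.3 GHz
# with 32 double-precision operations per cycle (assumed figures).
```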

Since the computer is an Intel-Cray system, it uses a processor based on the 2nd-generation Intel Xeon Phi. The processor uses MCDRAM - a high-bandwidth DRAM built from interconnected stacked dies in the processor package. It can be combined with DDR4 RAM to supply maximum bandwidths of 300-450 GB/s. DRAM stands for dynamic random access memory; it can store massive amounts of data but loses it when power is removed.


Video: Scientists explore ocean currents through supercomputer … – Phys.Org

August 4, 2017

Scientists are trying a new, interactive way to understand ocean current data with the help of high-resolution global ocean simulations. In the part of the global visualization shown, the Loop Current, the origin of the Gulf Stream, features prominently. Surface water speeds are shown ranging from 0 meters per second (dark blue) to 1.25 meters per second (cyan). The video is running at one simulation day per second.

A team from the NASA Advanced Supercomputing (NAS) facility, at Ames Research Center in Silicon Valley, has developed a new visualization tool that is being used by researchers from the Estimating the Circulation and Climate of the Ocean (ECCO) project to study the behavior of ocean currents.

The new visualization tool provides high-resolution views of the entire globe at once, allowing the scientists to see new details that they had missed in previous analyses of their simulation, which was run on NASA's Pleiades supercomputer.

The visualization is shown on a 10 by 23-foot, 128-screen hyperwall at the NAS facility. By switching the hyperwall view from one global image to displays of single regions, properties such as temperature, surface wind stress, density, and salinity can be clearly identified with high-contrast colors that can be changed instantly.

Pleiades, together with the high-capacity network bandwidth and data processing capabilities of the hyperwall, forms one of the most powerful visualization systems in the world, and ECCO scientists use it to discover new ocean features and their effect on the larger ocean system.

The visualization project is a collaboration between NASA's Ames Research Center in Silicon Valley, NASA's Jet Propulsion Laboratory in Pasadena, and the Massachusetts Institute of Technology in Cambridge, Massachusetts.

More information: For more information about the global ocean simulation, visit http://www.nas.nasa.gov/publications/ ature_ocean_vis.html


Video: HPE Powers 1 Petaflop QURIOSITY Supercomputer at BASF – insideHPC

In this video, HPE showcases their new supercomputer at BASF.

BASF's strategic goal is to decisively take advantage of the enormous opportunities that digitalization offers along the entire value chain. In doing so, research and development play a key role when it comes to further increasing innovative strength and competitiveness by using new technologies. With 1.75 petaflops, our supercomputer QURIOSITY offers around 10 times the computing power that BASF currently has dedicated to scientific computing. In the ranking of the 500 largest computing systems in the world, the BASF supercomputer is currently number 65.

The new system will make it possible to answer complex questions and reduce the time required to obtain results from several months to days across all research areas. As part of BASF's digitalization strategy, the company plans to significantly expand its capabilities to run virtual experiments with the supercomputer. It will help BASF reduce time to market and costs by, for example, simulating processes on catalyst surfaces more precisely or accelerating the design of new polymers with pre-defined properties.

"In today's data-driven economy, high performance computing plays a pivotal role in driving advances in space exploration, biology and artificial intelligence," said Meg Whitman, President and Chief Executive Officer, Hewlett Packard Enterprise. "We expect this supercomputer to help BASF perform prodigious calculations at lightning-fast speeds, resulting in a broad range of innovations to solve new problems and advance our world."

With the help of Intel Xeon processors, high-bandwidth, low-latency Intel Omni-Path Fabric and HPE management software, the supercomputer acts as a single system with an effective performance of more than 1 petaflop (1 petaflop equals one quadrillion floating point operations per second). With this system architecture, a multitude of nodes can work simultaneously on highly complex tasks, dramatically reducing the processing time.


Switch Donates $3.4M in Data Center Services for Reno’s New Supercomputer – Data Center Knowledge

The University of Nevada, Reno, is on its way to having a new supercomputer. Called Pronghorn - after the American antelope, the fastest mammal in North America - the new $1.3 million, 310-TFLOPS high-performance cluster is part of the university's initiative to reach R1 Carnegie Research Classification. When completed, it will provide 30 times more computing power than the university's existing HPC system.

"High-performance computing is critical for modern research and scientific discovery," the university's president, Marc Johnson, said in a statement. "The impact of this will be multi-dimensional; it will allow for faster analysis and exchange of large scientific datasets. It will contribute to deeper discovery across a range of research disciplines university-wide and to development of industry partnerships."

Pronghorn will be housed in the new Tahoe Reno 1 facility by the Las Vegas-based data center provider Switch. The data center, whose anchor tenant is eBay, opened in February. Switch has pledged to be a benevolent landlord by donating $3.4 million in critical infrastructure support, including space, power and security, for the next five years.

"Making Nevada the most connected state and driving economic development through technology and data analytics are critical priorities that Switch shares with the University of Nevada, Reno," explained Adam Kramer, Switch's executive vice president for strategy. "This collaborative project will cement the university's commitment to strengthen its status as a top-level research university and its ability to partner with the private sector."

According to the university's Office of Information Technology, the system will be built by Dell EMC and DDN Storage.

"The idea is to build an infrastructure with enough capacity so we have what we need, with additional head room for future development," the university's vice provost and chief information officer, Steve Smith, said.

According to specs published by the university, Pronghorn will consist of 64 CPU nodes, with each node utilizing 32 E5-2683 v4, 2.1GHz Intel Xeon processors. Counting the processors used by 32 GPU nodes, the system will employ a total of 2,400 processors with 19TB memory. Storage will utilize DDN GridScaler appliances implementing the IBM General Parallel File System with 1 PB capacity.

"Some big data projects require large-scale memory while others require high-speed networks," Jeff LaCombe, the chair of the university's faculty-based Cyberinfrastructure Committee, said. "We are looking to balance both."

Initial hardware installation is expected to be completed in September 2017, with availability for faculty and investors scheduled for November. The system should be fully operational in January 2018. It will be used for research that will include artificial intelligence, robotics, and computational biology.

The project is being funded by the State of Nevada Knowledge Fund (facilitated by the Governor's Office for Economic Development); a donation from a university supporter and noted researcher, Mick Hitchcock; the university's Research & Innovation division; its Office of Information Technology; and faculty investors.

Industry access to Pronghorn will be coordinated through the Nevada Center for Applied Research, a research and technology center that makes the school's facilities, equipment and talent available to industry through customized, fee-based contracts. Industry partners must have a tangible connection to the university, such as a research collaboration.


Hazel Hen Supercomputer Reaches Computational Milestone – insideHPC

3D visualization of the data set investigated in Hazel Hen's millionth job. The CAVE at HLRS makes it possible to explore a fluid jet in fine detail.

Leading the research behind the millionth job was Professor Bernhard Weigand, Director of the Institute of Aerospace Thermodynamics at the University of Stuttgart. His laboratory studies multiphase flows, a common phenomenon across nature in which materials in different states or phases (gases, liquids, and solids) are simultaneously present and physically interact. In meteorology, for instance, raindrops, dew, and fog constitute multiphase flows, as does the exchange of gases between the oceans and the atmosphere. Such phenomena also occur in our daily lives, such as when water bounces off our skin in the shower or when we inhale nasal sprays to control the symptoms of a cold.

High-performance computing (HPC) is absolutely essential to the success of the FS3D simulation code because the software requires an extremely high grid resolution. Like the frame rate in a video or movie camera, the program must represent the complex collisions, adhesions, and breaking apart of droplets and molecules at extremely small scales of space and time. FS3D can simulate such interactions in 2 billion cells at once, each of which represents a volume of less than 7 cubic micrometers, tracking how the composition of every cell changes over time.

Achieving such a high resolution generates massively large datasets, and it is only by using a supercomputer as powerful as HLRS's Hazel Hen that these simulations can be run quickly enough to be of any practical use. Moreover, during simulations, HPC architectures can rapidly and reliably save enormous collections of data that are output from one round of calculations and efficiently convert them into inputs for the next. In this way, simulation becomes an iterative process, leading to better and better models of complex phenomena, such as the multiphase flows the Weigand Lab is investigating.

In the future, such information could enable engineers to improve the efficiency of their nozzle designs. In this sense, the millionth compute job on Hazel Hen was just one page in a long and continuing scientific story. Nevertheless, it embodies the unique kinds of research that HLRS makes possible every day.


NIH Receives Major Supercomputer Upgrade | TOP500 … – TOP500 News

CSRA, a system integrator and service company, has installed the second phase of the Biowulf supercomputer at the National Institutes of Health (NIH), more than doubling the system's capacity.

Biowulf was built to serve biologists, medical researchers, and other life scientists associated with NIH projects. Those include research efforts in genomics, molecular biology, bioimage analysis, and structural biology, to name a few. The system hosts dozens of software packages that support these areas, as well as an array of scientific databases.

Biowulf, an HPE Apollo XL1x0r cluster, was initially installed in 2016, and currently sits at number 139 on the TOP500 list. Its peak performance of 1.23 petaflops yielded a Linpack mark of 991.6 teraflops. The phase 1 system is powered by Broadwell-generation Xeon processors, and uses Mellanox FDR as the system interconnect for both the compute nodes and the main storage array. Ethernet provides connectivity to the NIH wide area network, known as NIHnet, and the NFS storage. The system also provides 14 petabytes of GPFS storage, courtesy of DataDirect Networks (DDN).

According to the CSRA press release, the Biowulf upgrade will include an additional 1,104 CPU nodes representing 1.2 peak petaflops of extra capacity, along with 72 GPU nodes, which were added to the existing 2,372-node cluster. If the Biowulf website has been updated correctly, those GPUs are NVIDIA K80s, with two per node. That would bring the GPU contribution alone to over 400 teraflops, and the upgraded cluster to 1.6 peak petaflops. With the inclusion of the 1,104-node addition, that brings the capacity of the entire system to 2.8 petaflops.
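The arithmetic behind those totals is easy to reconstruct. In the sketch below, the per-K80 figure is an assumed double-precision peak (roughly 2.9 teraflops per card); the rest of the numbers come from the article.

```python
# Reconstructing the Biowulf capacity figures quoted above.
phase1_pflops = 1.23             # existing cluster's peak (from the article)
new_cpu_pflops = 1.2             # the 1,104-node CPU addition
gpu_nodes, k80s_per_node = 72, 2
tflops_per_k80 = 2.9             # assumption: NVIDIA K80 double-precision peak

gpu_pflops = gpu_nodes * k80s_per_node * tflops_per_k80 / 1000
print(f"GPU contribution: ~{gpu_pflops * 1000:.0f} TF")            # ~418 TF
print(f"Upgraded cluster: ~{phase1_pflops + gpu_pflops:.1f} PF")   # ~1.6 PF
print(f"Entire system:    ~{phase1_pflops + gpu_pflops + new_cpu_pflops:.1f} PF")  # ~2.8 PF
```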

That's a lot more computational horsepower than the NIH has ever commanded before. Curiously, the press release doesn't include a quote from any NIH official on what all that extra capacity might be used for. The announcement does, however, offer this:

The second stage of computing power announced today will enable NIH researchers to make important advances in biomedical fields. This field of research is deeply dependent on computation, such as whole-genome analysis of bacteria, simulation of pandemic spread, and analysis of human brain MRIs. Results from these analyses may enable new treatments for diseases including cancer, diabetes, heart conditions, infectious disease, and mental health.

The lack of NIH input could reflect the uncertainty in the research that will be funded there over the next year. The Trump White House has called for a $1.7 billion reduction for FY2017 and a further decrease of $5.8 billion in FY2018, amounting to almost a 20 percent cutback for the agency. Congress doesn't appear to be going along with these proposed reductions, however, and has come up with an omnibus agreement to increase spending by $2 billion for at least this fiscal year.

Regardless, the additional capacity in Biowulf will almost certainly fill up with workloads from life scientists who rely on the NIH for computational resources. The desire for the government to provide better healthcare, which drives much of this research, is growing, even in an era when the appetite for public spending is waning.

In a recent interview with the Washington Examiner, NIH Director Francis Collins noted that this type of research can return $8 to the economy for each dollar spent, to say nothing of its ability to improve people's lives. "This is a really remarkable moment in terms of making rapid progress, whether you're talking about cancer, diabetes, Alzheimer's disease, rare diseases or common diseases," said Collins. "We are at a particularly exciting moment, scientifically, in terms of the ability to make rapid progress."


Super computer predicts the final Championship table: Where will your club finish? – Daily Star

THE TalkSport super computer has predicted how the Championship season will pan out.

Sunderland face Derby tonight to get the 2017/18 campaign underway, and there are some huge clubs in contention for promotion.

The Black Cats were relegated along with Middlesbrough and Hull - and the northern trio join the likes of Aston Villa, Sheffield Wednesday, Leeds, Wolves and Fulham among the runners and riders for promotion.

Sheffield United, Bolton and Millwall are also back in the division after achieving promotion from League One last season.

But where will they all finish?

CLICK THROUGH THE GALLERY ABOVE TO SEE WHAT THE TALKSPORT SUPER COMPUTER THINKS.
