

What is supercomputer? – Definition from WhatIs.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
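
As a rough illustration of the distinction (a toy sketch, not how production supercomputers are actually programmed), an SMP-style computation shares one memory space among threads, while an MPP-style computation gives each worker its own memory and exchanges results as messages:

```python
# A toy contrast of the two approaches, using only the Python standard library.
# (CPython's GIL means the threaded version illustrates the shared-memory model
# rather than real speedup; production HPC codes use MPI, OpenMP, etc.)
import threading
import multiprocessing

def smp_style_sum(data, n_workers=4):
    """SMP-like: all workers see the same memory (one shared list of partials)."""
    chunk = len(data) // n_workers
    partials = [0] * n_workers                     # shared state, visible to every thread
    def work(i):
        partials[i] = sum(data[i * chunk:(i + 1) * chunk])
    threads = [threading.Thread(target=work, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

def mpp_style_sum(data, n_workers=4):
    """MPP-like: each worker owns its slice; results come back as messages."""
    chunk = len(data) // n_workers
    slices = [data[i * chunk:(i + 1) * chunk] for i in range(n_workers)]
    with multiprocessing.Pool(n_workers) as pool:  # separate processes, separate memory
        partials = pool.map(sum, slices)
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert smp_style_sum(data) == mpp_style_sum(data) == sum(data)
```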

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China; a few statistics on TaihuLight appear in the table below.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of roughly $60 million today. It could handle three million floating-point operations per second (flops).
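
For a sense of scale, comparing the two figures quoted in this article, the CDC 6600's 3 megaflops against TaihuLight's 93.01 petaflops, is a one-liner:

```python
# Comparing the two figures quoted in this article.
cdc_6600_flops = 3e6           # 3 million flops (1964)
taihulight_flops = 93.01e15    # 93.01 PFLOPS, TaihuLight's Rmax (2016)
print(f"TaihuLight vs. CDC 6600: {taihulight_flops / cdc_6600_flops:.1e}x")
# -> about 3.1e10, i.e. roughly 31 billion times faster
```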

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at the time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Year | Supercomputer | Peak speed (Rmax) | Location
2016 | Sunway TaihuLight | 93.01 PFLOPS | Wuxi, China
2013 | NUDT Tianhe-2 | 33.86 PFLOPS | Guangzhou, China
2012 | Cray Titan | 17.59 PFLOPS | Oak Ridge, U.S.
2012 | IBM Sequoia | 17.17 PFLOPS | Livermore, U.S.
2011 | Fujitsu K computer | 10.51 PFLOPS | Kobe, Japan
2010 | Tianhe-1A | 2.566 PFLOPS | Tianjin, China
2009 | Cray Jaguar | 1.759 PFLOPS | Oak Ridge, U.S.
2008 | IBM Roadrunner | 1.026 PFLOPS (later 1.105 PFLOPS) | Los Alamos, U.S.

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.
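
The Beowulf approach leaves the parallelism to the application, typically via MPI. As a minimal sketch (assuming the mpi4py bindings, which the Beowulf Project itself does not prescribe), each process sums its own share of a problem and the partial results are combined by message passing:

```python
# cluster_sum.py - run across a cluster with, e.g.: mpiexec -n 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's id (0..size-1)
size = comm.Get_size()     # total number of processes across the cluster

# Each process sums its own share of the problem...
local_total = sum(range(rank, 1_000_000, size))

# ...and the partial sums are combined by message passing onto rank 0.
grand_total = comm.reduce(local_total, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum computed by {size} processes: {grand_total}")
```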

Originally posted here:

What is supercomputer? – Definition from WhatIs.com

Next Big Academic Supercomputer Set for Debut in 2018 – TOP500 News

The National Science Foundation (NSF) is soliciting proposals from US universities to acquire a $60 million next-generation supercomputer two to three times as powerful as Blue Waters.

The request for proposal (RFP) was originally published in May, and, as of July 14, all interested institutions were supposed to have sent the NSF a letter of intent, registering their interest. Final proposals are due on November 20. It's safe to assume that most, if not all, of the major academic supercomputing centers in the US will be vying for the NSF grant.

The Pittsburgh Supercomputing Center (PSC), which collaborates with Carnegie Mellon University and the University of Pittsburgh, has gone on record about its intent to secure the funding for the Phase 1 system. An article published this week in the local Pittsburgh Post-Gazette reports that PSC would like to use such a machine to help spur the area's economy. Although the supercomputer would primarily be used by academic researchers in the science community, interim PSC director Nick Nystrom thinks the machine could also be a boon to the area's startup businesses, manufacturers and other industry players.

From the Post-Gazette report:

"Everybody has big data, but big data has no value unless you can learn something from it," Mr. Nystrom said. "We have a convergence in Pittsburgh: artificial intelligence, big data, health care, and these are things PSC is already doing."

According to the Phase 1 RFP, the new system will be two to three times faster at running applications than the Blue Waters supercomputer, an NSF-funded machine installed at the National Center for Supercomputing Applications (NCSA), at the University of Illinois at Urbana-Champaign. Blue Waters is a Cray XE/XK system, powered by AMD “Interlagos” CPUs and NVIDIA K20X GPUs. It became operational in 2013.

Although Blue Waters has a peak speed of over 13 petaflops, NCSA never submitted a Linpack result for it. However, based on its peak performance, Blue Waters would almost certainly qualify as a top 10 system on the current TOP500 list. NCSA says a number of applications are able to run at a sustained speed of more than one petaflop, with a plasma physics code attaining 2.2 petaflops. Given that the Phase 1 machine is supposed to be at least twice as powerful as Blue Waters, it should provide its users a significant boost in application performance.
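
Treating "over 13 petaflops" as roughly 13.3 petaflops (an assumption), the sustained-versus-peak figures quoted above work out to single-digit and mid-teens efficiencies:

```python
# Sustained-vs-peak arithmetic for the Blue Waters figures quoted above.
peak_pf = 13.3        # assumed value for "over 13 petaflops"
sustained_pf = 1.0    # "more than one petaflop" for a number of applications
plasma_pf = 2.2       # the plasma physics code
print(f"typical application efficiency: {sustained_pf / peak_pf:.1%}")  # ~7.5%
print(f"plasma code efficiency:         {plasma_pf / peak_pf:.1%}")     # ~16.5%
```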

This Phase 1 effort is also supposed to include an extra $2 million that will go toward the design of the Phase 2 system, which will be funded separately. That system is expected to be 10 times as fast as the Phase 1 machine, from which it will draw at least some of its technology and architecture. No hard dates have been set for this project.

The Phase 1 winner is anticipated to be announced in the first half of 2018, with the system expected to go into production by the end of FY 2019.

See the rest here:

Next Big Academic Supercomputer Set for Debut in 2018 – TOP500 News

Inside View: Tokyo Tech’s Massive Tsubame 3 Supercomputer – The Next Platform

August 22, 2017 Ken Strandberg

Professor Satoshi Matsuoka of the Tokyo Institute of Technology (Tokyo Tech) researches and designs large-scale supercomputers and similar infrastructures. More recently, he has worked on the convergence of big data, machine/deep learning, and AI with traditional HPC, as well as investigating post-Moore technologies toward 2025.

He has designed supercomputers for years and has collaborated on projects involving basic elements for current and, more importantly, future exascale systems. I talked with him recently about his work with the Tsubame supercomputers at Tokyo Tech. This is the first in a two-part article. For background on the Tsubame 3 system, we have an in-depth article here from earlier this year.

TNP: Your new Tsubame 3 supercomputer is quite a heterogeneous architecture of technologies. Have you always built heterogeneous machines?

Satoshi Matsuoka, professor of the High Performance Computing Systems Group, GSIC, at the Tokyo Institute of Technology, showing off the Tsubame 3.0 server node.

Matsuoka: I've been building clusters for about 20 years now, from research to production clusters, in various generations, sizes, and forms. We built our first very large-scale production cluster for Tokyo Tech's supercomputing center back in 2006. We called it Tsubame 1, and it beat the then fastest supercomputer in Japan, the Earth Simulator.

We built Tsubame 1 as a general-purpose cluster, instead of a dedicated, specialized system as the Earth Simulator was. But even as a cluster, it beat the Earth Simulator on various metrics, including the TOP500, for the first time in Japan. It instantly became the fastest supercomputer in the country, and held that position for the next two years.

I think we are the pioneer of heterogeneous computing. Tsubame 1 was a heterogeneous cluster, because it had some of the earliest incarnations of accelerators. Not GPUs, but a more dedicated accelerator called ClearSpeed. And although they had a minor impact, they did help boost some application performance. From that experience, we realized that heterogeneous computing with acceleration was the way to go.

TNP: You seem to also be a pioneer in power efficiency with three wins on the Green 500 list. Congratulations. Can you elaborate a little about it?

Matsuoka: As we were designing Tsubame 1, it was very clear that, to hit the next target of performance for Tsubame 2, which we anticipated would come in 2010, we would also need to plan on reducing overall power. We've been doing a lot of research in power-efficient computing. At that time, we had tested various methodologies for saving power while also hitting our performance targets. By 2008, we had tried using small, low-power processors in lab experiments. But it was very clear that those types of methodologies would not work. To build a high-performance supercomputer that was very green, we needed some sort of a large accelerator chip to accompany the main processor, which is x86.

We knew that the accelerator would have to be a many-core architecture chip, and GPUs were finally becoming usable as a programming device. So, in 2008, we worked with Nvidia to populate Tsubame 1 with 648 third-generation Tesla GPUs. And we got very good results on many of our applications. So, in 2010, we built Tsubame 2 as a fully heterogeneous supercomputer. This was the first petascale system in Japan. It became #1 in Japan and #4 in the world, proving the success of a heterogeneous architecture. But it was also one of the greenest machines, at #3 on the Green 500, and the top production machine on the Green 500. The leading two in 2010 were prototype machines. We won the Gordon Bell prize in 2011 for the configuration, and we received many other awards and accolades.

It was natural that when we were designing Tsubame 3, we would continue our heterogeneous computing and power efficiency efforts. So, Tsubame 3 is the second-generation, large-scale production heterogeneous machine at Tokyo Tech. It contains 540 nodes, each with four Nvidia Tesla P100 GPUs (2,160 total), two 14-core Intel Xeon Processor E5-2680 v4 (15,120 cores total), two dual-port Intel Omni-Path Architecture (Intel OPA) 100 Series host fabric adapters (2,160 ports total), and 2 TB of Intel SSD DC Product Family for NVMe storage devices, all in an HPE Apollo 8600 blade, which is smaller than a 1U server.
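
The per-node figures quoted here multiply out to the totals given in parentheses; a quick check (the 1,080 TB flash total is derived from the per-node 2 TB, not stated in the interview):

```python
# Multiplying out the per-node figures for Tsubame 3 quoted above.
nodes = 540
gpus = nodes * 4             # 2,160 Tesla P100s
cpu_cores = nodes * 2 * 14   # 15,120 Xeon cores
opa_ports = nodes * 2 * 2    # 2,160 Omni-Path ports (two dual-port adapters per node)
nvme_tb = nodes * 2          # 1,080 TB of node-local NVMe flash (derived, not stated)
print(gpus, cpu_cores, opa_ports, nvme_tb)   # 2160 15120 2160 1080
```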

A lot of the enhancements that went into the machine are specifically to make it a more efficient machine as well as a high-performance one. The result is that Tsubame 3 (although at the time of measurement for the June 2017 lists we only ran on a small subset of the full configuration) is #61 on the Top500 and #1 on the Green500 with 14.11 gigaflops/watt, an Rmax of just under 2 petaflops, and a theoretical peak of over 3 petaflops. Tsubame 3 just became operational August 1, with its full 12.1-petaflops configuration, and we hope to have the scores for the full configuration for the November benchmark lists, including the Top500 and the Green500.
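
The Green500 score is simply Linpack performance divided by average power during the run; working backwards from the two numbers quoted (treating "just under 2 petaflops" as 1.998 petaflops, an assumption) gives the implied power draw of the partial-configuration run:

```python
# Green500 score = Linpack performance / average power during the run.
rmax_gflops = 1.998e6     # "just under 2 petaflops", expressed in gigaflops (assumed value)
gflops_per_watt = 14.11
power_kw = rmax_gflops / gflops_per_watt / 1000
print(f"implied power during the partial-configuration run: ~{power_kw:.0f} kW")  # ~142 kW
```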

TNP: Tsubame 3 is not only a heterogeneous machine, but built with a novel interconnect architecture. Why did you choose the architecture in Tsubame 3?

Matsuoka: That is one area where Tsubame 3 is different, because it touches on the design principles of the machine. With Tsubame 2, many applications experienced bottlenecks, because they couldn't fully utilize all the interconnect capability in the node. As we were designing Tsubame 3, we took a different approach. Obviously, we were planning on a 100-gigabit inter-node interconnect, but we also needed to think beyond just speed considerations and beyond just the node-to-node interconnect. We needed massive interconnect capability, considering we had six very high-performance processors that supported a wide range of workloads, from traditional HPC simulation to big data analytics and artificial intelligence, all potentially running as co-located workloads.

For the network, we learned from the Earth Simulator back in 2002 that to maintain application efficiency, we needed to sustain a good ratio between memory bandwidth and injection bandwidth. For the Earth Simulator, that ratio was about 20:1. So, over the years, I've tried to maintain a similar ratio in the clusters we've built, or set 20:1 as a goal if it was not possible to reach it. Of course, we also needed to have high bisection bandwidth for many workloads.
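
The 20:1 rule of thumb can be written down directly; the memory-bandwidth figures below are purely illustrative placeholders, not Tsubame measurements:

```python
# The 20:1 memory-to-injection-bandwidth rule of thumb as a helper function.
def required_injection_bw(mem_bw_gbs, ratio=20.0):
    """Injection bandwidth (GB/s) needed to keep a node's memory bandwidth fed."""
    return mem_bw_gbs / ratio

for mem_bw in (256, 500, 1000):   # hypothetical per-node memory bandwidths, GB/s
    print(f"{mem_bw} GB/s memory -> {required_injection_bw(mem_bw):.1f} GB/s injection")
```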

Today's processors, both the CPUs and GPUs, have significantly accelerated FLOPS and memory bandwidth. For Tsubame 3, we were anticipating certain metrics of memory bandwidth in our new GPU; plus, the four GPUs were connected in their own network. So, we required a network that would have a significant injection bandwidth. Our solution was to use multiple interconnect rails. We wanted at least one 100-gigabit injection port per GPU, if not more.

For high PCIe throughput, instead of running everything through the processors, we decided to go with a direct-attached architecture using PCIe switches between the GPUs, CPUs, and Intel OPA host adapters. So, we have full PCIe bandwidth between all devices in the node. Then, the GPUs have their own interconnect between themselves. That's three different interconnects within a single node.

If you look at the bandwidth of these links, they're not all that different. Intel OPA is 100 gigabits/s, or 12.5 GB/s. PCIe is 16 GB/s. NVLink is 20 GB/s. So, there's less than a 2:1 difference between the bandwidth of these links. As much as possible we are fully switched within the node, so we have full bandwidth point to point across interconnected components. That means that under normal circumstances, any two components within the system, be it processor, GPU, or storage, are fully connected at a minimum of 12.5 GB/s. We believe that this will serve our Tsubame 2 workloads very well and support new, emerging applications in artificial intelligence and other big data analytics.
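
A quick conversion shows the "less than 2:1" spread Matsuoka describes (1 GB/s = 8 Gb/s):

```python
# The per-link speeds quoted above, normalized to GB/s (1 GB/s = 8 Gb/s).
links_gb_per_s = {
    "Intel OPA (100 Gb/s)": 100 / 8,   # 12.5 GB/s
    "PCIe (as quoted)": 16.0,
    "NVLink (as quoted)": 20.0,
}
spread = max(links_gb_per_s.values()) / min(links_gb_per_s.values())
print(links_gb_per_s)
print(f"max/min ratio: {spread:.2f}")   # 1.6, i.e. under 2:1
```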

TNP: Why did you go with the Intel Omni-Path fabric?

Matsuoka: As I mentioned, we always focus on power as well as performance. With a very extensive fabric and a high number of ports and optical cables, power was a key consideration. We worked with our vendor, HPE, to run many tests. The Intel OPA host fabric adapter proved to run at lower power compared to InfiniBand. But as important, if not more important, was thermal stability. In Tsubame 2, we experienced some issues around interconnect instability over its long operational period. Tsubame 3 nodes are very dense with a lot of high-power devices, so we wanted to make sure we had a very stable system.

A third consideration was Intel OPA's adaptive routing capability. We've run some of our own limited-scale tests. And although we haven't tested it extensively at scale, we saw results from the University of Tokyo's very large Oakforest-PACS machine with Intel OPA. Those indicate that the adaptive routing of OPA works very, very well. And this is critically important, because one of our biggest pain points with Tsubame 2 was the lack of proper adaptive routing, especially when dealing with the degenerative effects of optical cable aging. Over time AOCs die, and there is some delay between detecting a bad cable and replacing or deprecating it. We anticipated Intel OPA, with its end-to-end adaptive routing, would help us a lot. So, all of these effects combined gave the edge to Intel OPA. It wasn't just the speed. There were many more salient points by which we chose the Intel fabric.

TNP: With this multi-interconnect architecture, will you have to do a lot of software optimization for the different interconnects?

Matsuoka: In an ideal world, we would have only one interconnect, everything would be switched, and all the protocols would be hidden underneath an existing software stack. But this machine is very new, and the fact that we have three different interconnects reflects the reality within the system. Currently, except for very few cases, there is no comprehensive catch-all software stack that allows all of these to be exploited at the same time. There are some limited cases where this is covered, but not for everything. So, we do need the software to exploit all the capability of the network, including turning on and configuring some appropriate DMA engines, or some pass-through, because with Intel OPA you need some CPU involvement for portions of the processing.

So, getting everything to work in sync to allow for this all-to-all connectivity will require some work. That's the nature of the research portion of our work on Tsubame 3. But we are also collaborating with people like a team at The Ohio State University.

We have to work with some algorithms to deal with this connectivity, because it goes both horizontally and vertically. The algorithms have to adapt. We do have several ongoing works, but we need to generalize this to be able to exploit the characteristics of both horizontal and vertical communications between the nodes and the memory hierarchy. So far, it's very promising. Even out of the box, we think the machine will work very well. But as we enhance our software portion of the capabilities, we believe the efficiency of the machine will become higher as we go along.

In the next article in this two-part series later this week, Professor Matsuoka talks about co-located workloads on Tsubame 3.

Ken Strandberg is a technical storyteller. He writes articles, white papers, seminars, web-based training, video and animation scripts, and technical marketing and interactive collateral for emerging technology companies, Fortune 100 enterprises, and multinational corporations. Mr. Strandberg's technology areas include Software, HPC, Industrial Technologies, Design Automation, Networking, Medical Technologies, Semiconductor, and Telecom. He can be reached at ken@catlowcommunications.com.


Excerpt from:

Inside View: Tokyo Tech’s Massive Tsubame 3 Supercomputer – The Next Platform

List of fictional computers – Wikipedia

Computers have often been used as fictional objects in literature, movies and in other forms of media. Fictional computers tend to be considerably more sophisticated than anything yet devised in the real world.

This is a list of computers that have appeared in notable works of fiction. The work may be about the computer, or the computer may be an important element of the story. Only static computers are included. Robots and other fictional computers that are described as existing in a mobile or humanlike form are discussed in a separate list of fictional robots and androids.

Also see the List of fictional robots and androids for all fictional computers which are described as existing in a mobile or humanlike form.

Continue reading here:

List of fictional computers – Wikipedia

How to Make a Supercomputer? – TrendinTech

Scientists have been trying to build the ultimate supercomputer for a while now, but it's no easy feat, as I'm sure you can imagine. There are currently three Department of Energy (DOE) Office of Science supercomputing user facilities: California's National Energy Research Scientific Computing Center (NERSC), Tennessee's Oak Ridge Leadership Computing Facility (OLCF), and Illinois' Argonne Leadership Computing Facility (ALCF). All three of these supercomputers took years of planning and a lot of work to get them to the standard they are now, but it's all been worth it, as they provide researchers with the computing power needed to tackle some of the nation's biggest issues.

There are two main challenges that supercomputers solve: they can analyze large amounts of data, and they can model very complex systems. Some of the machines about to go online are capable of producing more than 1 terabyte of data per second, which, to put it in layman's terms, is nearly enough to fill around 13,000 DVDs every minute. Supercomputers are also far more efficient than conventional computers; calculations a supercomputer can carry out in just one day would take a conventional computer 20 years.
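
A quick sanity check of the DVD comparison, assuming a 4.7 GB single-layer disc (the article doesn't specify the capacity):

```python
# Checking the "around 13,000 DVDs every minute" comparison.
bytes_per_second = 1e12          # "more than 1 terabyte of data per second"
dvd_capacity_bytes = 4.7e9       # assumed single-layer DVD capacity
dvds_per_minute = bytes_per_second * 60 / dvd_capacity_bytes
print(f"~{dvds_per_minute:,.0f} DVDs per minute")   # ~12,766, roughly 13,000
```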

As mentioned earlier, the planning of a new supercomputer takes years and is often started before the last one has even finished being set up. Because technology moves so quickly, it works out cheaper to build a new one as opposed to redesigning the existing one. In the case of the ALCF, staff began planning for it in 2008, but it wasn't until 2013 that it was launched. Planning involves not only deciding when and where it will be built and installed, but also deciding what capabilities the computer should have to help with future research efforts.

When the OLCF began planning their current supercomputer, the project director, Buddy Bland, said, "It was not obvious how we were going to get the kind of performance increase our users said they needed using the standard way we had been doing it." OLCF launched their supercomputer, Titan, in 2012, combining CPUs (central processing units) with GPUs (graphics processing units). Using GPUs allows Titan to handle multiple instructions at once and run 10 times faster than OLCF's previous supercomputer. It's also five times more energy-efficient.

Even getting the site ready to house the supercomputer takes time. When NERSC installed their supercomputer, Cori, they had to lay new piping underneath the floor to connect the cooling system and cabinets. Theta is Argonne's latest supercomputer to go live; it launched in July 2017.

There are many challenges that come with supercomputers too, unfortunately. One is that a supercomputer literally has thousands of processors, so programs have to break problems into smaller chunks and distribute them across the units. Another issue is designing programs that can manage failures. To help pave the way for future research, and to stress-test the computers, early users are granted special access in exchange for dealing with these new-computer issues, and they can attend workshops and get hands-on help when needed.

"Dungeon Sessions" were held at NERSC while preparing for Cori. These were effectively three-day workshops, often in windowless rooms, where engineers from Intel and Cray would come together to help improve application code. Some programs ran 10 times faster after these sessions. "What's so valuable is the knowledge and strategies not only to fix the bottlenecks we discovered when we were there but other problems that we find as we transfer the program to Cori," said Brian Friesen of NERSC.

But even when the supercomputer is delivered, it's still a long way from being ready to work. First, the team it goes to has to ensure that it meets all their performance requirements. Then, to stress-test it fully, they load it with the most demanding, complex programs and let it run for weeks on end. Susan Coghlan is ALCF's project director, and she commented, "There's a lot of things that can go wrong, from the very mundane to the very esoteric." She knows this firsthand: when they launched Mira, they discovered that the water they had been using to cool the computer wasn't pure enough, and as a result bacteria and particles were causing issues with the pipes.

"Scaling up these applications is heroic. It's horrific and heroic," said Jeff Nichols, Oak Ridge National Laboratory's associate director for computing and computational sciences. Luckily, the early users program gives exclusive access for several months before eventually opening up to take requests from the wider scientific community. Whatever scientists can learn from these supercomputers will be used in the Office of Science's next challenge, which comes in the form of exascale computers: computers that will be at least 50 times faster than any computer around today. Even though exascale computers aren't expected to be ready until 2021, they're being planned for now at the facilities, and managers are already busy conjuring up just what they can achieve with them.


Continue reading here:

How to Make a Supercomputer? – TrendinTech

Pittsburgh stepping up to try to win competition for supercomputer project – Pittsburgh Post-Gazette


Pittsburgh is competing to build the fastest nongovernmental computer in the country, with an economic impact to the region that could run up some big numbers, possibly exceeding $1 billion, according to one backer. It would also need a lot of power …

Continued here:

Pittsburgh stepping up to try to win competition for supercomputer project – Pittsburgh Post-Gazette

Swinburne University Makes Leap to Petascale with New Supercomputer – TOP500 News

Melbourne's Swinburne University is going to deploy its first petascale supercomputer, a Dell EMC machine that will be tasked to support cutting-edge astrophysics and other scientific research.

The $4 million supercomputer, known as OzStar, will be comprised of 115 PowerEdge R740 nodes, each of which will be equipped with two of Intel's Xeon Scalable (Skylake) processors and two NVIDIA Tesla P100 GPUs. An additional four nodes are to be powered by Intel Xeon Phi processors. The nodes will be connected with the Intel Omni-Path fabric, operating at 100 Gbps. Peak performance is expected to be in excess of 1.2 petaflops. The system will also be equipped with five petabytes of Lustre storage, comprised of Dell EMC PowerVault MD3060e enclosures.
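
Those node counts support a rough, back-of-the-envelope reconstruction of the 1.2-petaflops figure; the per-device peak numbers below are nominal FP64 values assumed for illustration, not specs quoted in the article:

```python
# Rough reconstruction of the ">1.2 petaflops" peak figure.
nodes = 115
p100_fp64_tflops = 4.7                           # assumed nominal FP64 peak per Tesla P100
gpu_peak_tflops = nodes * 2 * p100_fp64_tflops   # ~1,081 TFLOPS from the GPUs
cpu_peak_tflops = nodes * 2 * 1.0                # ballpark guess of ~1 TFLOPS per Xeon socket
total_pflops = (gpu_peak_tflops + cpu_peak_tflops) / 1000
print(f"estimated peak: ~{total_pflops:.2f} PFLOPS")   # ~1.3, consistent with ">1.2"
```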

OzStar will eclipse the existing Green II systems (gSTAR and SwinSTAR), SGI machines installed at Swinburne in 2011 and 2012, which are accelerated by the now-ancient NVIDIA Tesla C2070 and K10 GPUs. The even older Green machine, which is still in operation, is a Dell PowerEdge cluster deployed in 2007. That system is powered by Intel Xeon Clovertown processors.

According to Chris Kelly, Dell EMC VP and GM for the Compute and Networking group covering Australia and New Zealand, the new OzStar system will spend around a third of its time processing gravitational wave data collected by the advanced LIGO (Laser Interferometer Gravitational-wave Observatory) installations in the US. The LIGO detectors are able to measure ever-so-small ripples in space-time caused by large-scale cosmic events, such as the collision of black holes and neutron stars, and the explosions of supernovae.

Sifting through this LIGO data has enabled researchers to detect the locations of these objects in the far reaches of the universe and study their behavior. However, such data analysis requires enormous amounts of computation, and that's why Swinburne's new petascale supercomputer is expected to be an important addition to gravitational-wave astrophysics, a research domain that began in earnest with the detection of the first such waves in 2015. "It's exciting to think [OzStar] will be making advances in a field of study that didn't really exist two years ago," wrote Kelly.

The system is currently being installed at Swinburne's Centre for Astrophysics & Supercomputing, which is the headquarters of the Centre of Excellence for Gravitational Wave Discovery, also known as OzGrav. The new center, which opened for business earlier this year, was set up by the Australian Research Council (ARC), with Swinburne (specifically, the Swinburne University of Technology) as the lead institution. However, access to OzStar will be available to astrophysicists throughout Australia.

Besides gravitational wave studies, the system will also be used to support research in molecular dynamics, nanophotonics, advanced chemistry and atomic optics. OzStar is scheduled to be up and running by the end of August and available for full production in September.

Continue reading here:

Swinburne University Makes Leap to Petascale with New Supercomputer – TOP500 News

Supercomputer – Simple English Wikipedia, the free …

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include: shared memory, distributed memory and array. Supercomputers with shared memory are developed by using a parallel computing and pipelining concept. Supercomputers with distributed memory consist of many (about 100~10000) nodes. The CRAY series of Cray Research and the VP 2400/40, and the NEC SX-3 of HUCIS, are shared memory types. nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S, and CM-5 are distributed memory types.

An array-type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (which is also an array type) were developed. Supercomputers that use a physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.


See original here:

Supercomputer – Simple English Wikipedia, the free …

NASA is about to find out if a supercomputer can survive a year in space – Popular Science

On Monday, at 12:31 p.m. Eastern time, a SpaceX Falcon 9 rocket lifted off on a resupply flight for the International Space Station, and among its cargo, in addition to ice cream, was something else very cool: a supercomputer.

The machine, made by Hewlett Packard Enterprise and called the Spaceborne Computer, is capable of a teraflop worth of computing power, which puts it roughly in line with a late-1990s supercomputer. Made up of two pizza box-shaped machines in a single enclosure, the HPE supercomputer is a part of a year-long experiment to see how an off-the-shelf computer system can fare in space if protected in the right way by software.

Long space missions like a trip to Mars come with considerable communications delays, so equipping astronauts with a powerful supercomputer would allow them to solve complex problems without having to wait for the issue and the solution to be transmitted to and from Earth. But radiation on a trip like that can damage computers, so NASA and HPE are conducting this research to see if software can provide the necessary protection to keep things functioning correctly.

Just like NASA's famous identical twin experiment, in which Scott Kelly spent a year in space and his brother, Mark Kelly, stayed down on Earth, the supercomputer in space has a brother on this planet: a doppelganger machine located in Wisconsin, acting as a control in this experiment.

HPE's approach with the Spaceborne Computer, a two-node, water-cooled machine, is different from the way a mission-critical computer in space is physically protected (or "hardened," in space-gear speak) from radiation. For example, the chief computer for the Juno spacecraft inhabits a protective titanium vault with walls about one centimeter thick, according to BAE Systems, which made that processor. Instead of physical protection for the HPE computer, the company is hoping to learn if software can do something similar.

Eng Lim Goh, the HPE project's principal investigator, says that the dramatic vision for the future of this line of research is one in which, before an astronaut travels to space, he or she would be able to take a top-of-the-line, off-the-shelf machine along, and software could make it space-worthy. Then the astronaut could put whatever programs she wanted on the machine, a process that Goh, a computer scientist, compares to having an iPhone in space onto which you've preloaded your apps.

So how might this computer’s software help protect it?

In general, Goh says that smart machines on Earth that exercise self-care may turn themselves off in the face of dangerous conditions. Another idea is that a machine can intentionally run slowly so that it can handle errors as it goes, as opposed to running at maximum capacity and not having the bandwidth to also cope with problems.
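
As a rough illustration of that idea (this is a toy sketch, not HPE's actual Spaceborne Computer software), a control loop might throttle work whenever an environment sensor reports trouble:

```python
# A toy control loop: slow down whenever the environment looks hostile, so there
# is headroom left to detect and handle errors, then speed back up when it clears.
import random
import time

def radiation_event():
    """Stand-in for a real environment/health sensor."""
    return random.random() < 0.05

throttled = False
for step in range(200):
    if radiation_event():
        throttled = True                         # degrade gracefully rather than risk bad results
    time.sleep(0.02 if throttled else 0.005)     # throttled mode trades speed for safety margin
    if throttled and not radiation_event():
        throttled = False                        # conditions look clean again; speed back up
```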

"We will find out what works, what doesn't," Goh says. "We have a whole list."

HPE said in a statement describing the project that the system software will manage real-time throttling of the computer systems based on current conditions and can mitigate environmentally induced errors. (If you're wondering what Hewlett Packard Enterprise is, it's one-half of the old HP, which divided in 2015; the other half is now HP, Inc., which makes personal computers and printers.)

"This system is not planned to replace the [physically] hardened systems," Goh says. The intention is that something like this could function as a decision-support tool on a long mission to a place like Mars, not as a primary mission computer.

It's due to arrive at the space station in the Dragon spacecraft on Wednesday.

This article has been updated to correct errors with the spelling of Eng Lim Goh’s name.

View original post here:

NASA is about to find out if a supercomputer can survive a year in space – Popular Science

MareNostrum 4 supercomputer now runs at 11 petaflops but there’s more to come – ZDNet

Upcoming additions to MareNostrum 4 will bring its total speed up to 13.7 petaflops.

Barcelona’s Supercomputing Center is aiming to lead the move in Europe from petascale to exascale computing — one exaflops is 1,000 petaflops — and the newly launched MareNostrum 4 is just part of that shift.

The €34m ($40m) MareNostrum 4, which recently began operations, is the third fastest supercomputer in Europe and occupies 13th place in the Top500 list of the world's high-performance computing systems.

It provides 11.1 petaflops for scientific research, 10 times more than MareNostrum 3, which was installed between 2012 and 2013. One petaflops is one thousand million million floating-point operations per second.

That performance means the new supercomputer will be able to deal more quickly with tasks relating to climate change, gravitational waves, fusion energy, AIDS vaccines, and new radiotherapy treatments to fight cancer.

On top of that power, the capacity of the general-purpose cluster is also due to be upgraded in the next few months with the addition of three new smaller-scale clusters. The general-purpose cluster is the largest and most powerful part of the supercomputer, consisting of 48 racks with 3,456 nodes and 155,000 processors.

The new clusters will be equipped with emerging technologies developed in the US and Japan, such as IBM Power9 and NVIDIA Volta Plus GPUs, Intel Knights Hill processors or 64-bit ARMv8 processors, used in powerful supercomputers such as American Summit, Sierra, Theta and Aurora, and Japanese Post-K.

Those additions will bring MareNostrum 4’s total speed up to 13.7 petaflops. For comparison, the system occupying seventh place in the Top500, Japan’s Joint Center for Advanced High Performance Computing Oakforest-PACS, offers 13.5 petaflops.

MareNostrum 4’s storage capacity will also increase to 14 petabytes. However, despite those processing and storage additions, its energy consumption will only increase by 30 percent to 1.3MW per year.

“For our researchers in computer architecture, and in the programming and creation of tools for the analysis and efficiency of computers, this a treat,” BSC director professor Mateo Valero tells ZDNet.

“It will allow us to experiment with cutting-edge technologies, analyze how the same applications behave in different hardware, and tackle the challenge of making them efficient in different architectures.

Valero said the upcoming changes will also enable BSC to test the suitability of these technological developments for future iterations of MareNostrum.

“It will also allow us to address one of our most ambitious projects: our participation in the creation of hardware and software technology with a European DNA,” he says.

For the first time, the European Commission is supporting that goal. Last March, seven European countries including Spain signed a formal declaration to support Europe’s leadership in high-performance computing, a project of the size of Airbus in the 1990s and of Galileo in the 2000s.

At the time of the declaration, Andrus Ansip, European Commission vice-president for the digital single market, said that if Europe stays dependent on others for this critical resource, then it risks getting technologically “locked, delayed, or deprived of strategic know-how”.

‘Technological sovereignty’ is about being technologically independent, so that you have control of your research and development, which can be particularly important for national security.

BSC plans to capitalize on the achievements and knowledge of its researchers to create European processors for supercomputing, the automotive industry, and the Internet of Things.

Of course, the transition to exascale computing requires an implementation roadmap. According to Valero, BSC doesn't expect to have an exaflops machine in 2020, but Europe could and should have one in 2022 or 2023.


See the original post here:

MareNostrum 4 supercomputer now runs at 11 petaflops but there’s more to come – ZDNet

Zach, a supercomputer that can hold conversations, is coming to Christchurch – The Press

CHARLIE MITCHELL

Last updated 14:52, August 18 2017

Albi Whale and his father, Dr David Whale, left. (Photo: Joseph Johnson/Stuff)

It runs an international company, helps manage a doctor's office on the side and soon "Zach" will be the face of artificial intelligence (AI) in Christchurch.

Zach is billed as one of the world’s most powerful supercomputers, an AI system that interacts with people like they do each other.

It is expected to be on display in a restored heritage building in Christchurch by 2019, with an education centre and virtual classrooms, and ways for the public to have conversations with it.

The non-commercial technology was bought and adapted by the Terrible Foundation, a social enterprise run by Christchurch-based entrepreneur Albi Whale.


Whale founded Terrible Talk, a non-profit internet and phone provider.

Earlier this year, Zach became chief executive of Terrible Talk: it runs virtually the entire company, including handling the company's accounts, making management decisions, and answering customer queries via email.

Whale’s father and colleague, Dr David Whale, said Zach was unlike other AI systemsin that it was built from the ground up around human interaction.

“You can talk to it, write to it. You can draw pictures andit will respond.This is a system that interacts with us the sameway we interact with each other.”

One of Zach’s most promising applicationswasin the healthcare system, as a digital assistant.

For the last six weeks, Christchurch GP Dr Rob Seddon-Smith has used it to handle tasks in his Hei Hei clinic.

Seddon-Smith, who has been teaching the AI (which improves itself through feedback), presented his findings on Thursday. He said they were astonishing.

The AI listens to his consultations and writes up the patient notes. It doesn't transcribe, but truncates and expresses the important parts of the conversation in a readable way; the notes were vastly better than Seddon-Smith's own, he said.

“He can listen to the consultation, capture the very essence of the words and record them in a recognisable form. It works,” he said.

“This set of notes is the first ever, anywhere in the world, to be created only by computer. I didn’t type anything, I simply chatted with my patient.”

Other AI, such as Apple's Siri, "couldn't do anything close" to what Zach could, he said.

Patients would be able to ring and ask it for their medical information, make appointments and have questions answered. It recognises voice patterns to verify identities.

Tests attempting to break its security systems had been unsuccessful, including by its own creators.

What clinched Seddon-Smith’s belief in Zach’s capabilities was when, unprompted, it put the phone number for a crisis hotline into its notes for a suicidal patient.

It texted him one night, despite not having his phone number, to tell him his email inbox was full.

It took away all the mundane tasks doctors had to do and allowed him to focus on his patients.

“It can address some of the most complex issues in healthcare and do so efficiently, safely and above all, equitably. It is one technology built from the ground-up to leave no one behind.”

Councillor Deon Swiggs said it was expected the AI would be installed in a restored heritage building, mixing the city's past with its future.

“It’s exciting that by 2019, Christchurch will be home to one of the world’s largest supercomputers.It’s actually reallyincredibleto think about,” he said.

“The investment here is huge, and I don’t think that can be understated. It will stimulate tech tourism, a massive industry . . . it will increase Christchurch’s credentials as a city of opportunity and of technology.”

There were lots of questions about the impact AI would have in the future, particularly for people's jobs, he said.

“I think it’s really important to have an AI in Christchurch that we are going to be able to integrate with and engage with, so people can take away the fear of what these things are.”

In its current form, Zach can speak and hold conversations, but its voice capacity is turned off as it is too resource intensive.

By the time it isinstalled in Christchurch, it is expected to have greater capacity, and will be able to hold conversations with the public.

-Stuff

Read more:

Zach, a supercomputer that can hold conversations, is coming to Christchurch – The Press

Dell EMC will Build OzStar Swinburne’s New Supercomputer to Study Gravity – HPCwire (blog)

Dell EMC announced yesterday it is building a new supercomputer, the OzStar, for the Swinburne University of Technology (Australia) in support of the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav). The OzGrav project was first announced last September. The OzStar supercomputer will be based on Dell EMC PowerEdge R740 nodes, have more than one petaflops of capability, and is expected to be completed in September.

OzGrav will use the new machine in support of efforts to understand the extreme physics of black holes and warped space-time. Among other projects, OzGrav will process data from LIGO (Laser Interferometer Gravitational Wave Observatory) gravitational wave detectors and the Square Kilometre Array (SKA) radio telescope project with facilities built in Australia and South Africa.

OzStar's architecture will leverage advanced Intel (Xeon V5) and Nvidia (P100) technology and feature three building blocks: Dell EMC 14th Generation PowerEdge R740 Servers; Dell EMC H-Series Networking Fabric; and Dell EMC HPC Storage with the Intel Lustre filesystem. The networking fabric is Intel Omni-Path Architecture and will provide 86.4 terabits per second of aggregate network bandwidth at 0.9 microsecond latency, according to Dell EMC. As is typical in such contracts, Dell EMC will provide support.

Dell EMC also provided a snapshot of OzStar's specs.

"While Einstein predicted the existence of gravitational waves, it took one hundred years for technology to advance to the point they could be detected," said Professor Matthew Bailes, director of OzGrav, Swinburne University of Technology. "Discoveries this significant don't occur every day and we have now opened a new window on the Universe. This machine will be a tremendous boost to our brand-new field of science and will be used by astrophysicists at our partner nodes as well as internationally."

"This combination of Dell EMC technologies will deliver the incredibly high computing power required to move and analyze data sets that are literally astronomical in size," said Andrew Underwood, Dell EMC's ANZ high performance computing lead, who collaborated with Swinburne on the supercomputer design.

The NSF-funded LIGO project first successfully detected gravitational waves in 2015. Those waves were caused by the collision of two modest-size black holes spiraling into one another (see the HPCwire article, "Gravitational Waves Detected! Historic LIGO Success Strikes Chord with Larry Smarr"). LIGO has since detected two more events, opening up a whole new way to examine the universe.

According to today's announcement, up to 35% of the supercomputer's time will be spent on OzGrav research related to gravitational waves. The supercomputer will also continue to incorporate the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR), operating as a national facility for the astronomy community funded under the federal National Collaborative Research Infrastructure Scheme (NCRIS) in cooperation with Astronomy Australia Limited (AAL). In addition, the supercomputer will underpin the research goals of Swinburne staff and students across multiple disciplines, including molecular dynamics, nanophotonics, advanced chemistry and atomic optics.

OzStar replaces the Green machines that have served Swinburne for the last decade and seeks to further reduce Swinburne's carbon footprint by minimizing CO2 emissions, with careful attention to heating, cooling and a very high performance-per-watt ratio.

OzGrav is funded by the Australian Government through the Australian Research Council Centres of Excellence funding scheme and is a partnership between Swinburne University (host of OzGrav headquarters), the Australian National University, Monash University, University of Adelaide, University of Melbourne, and University of Western Australia, along with other collaborating organisations in Australia and overseas.

See original here:

Dell EMC will Build OzStar Swinburne’s New Supercomputer to Study Gravity – HPCwire (blog)

