PEARC20 Plenary Introduces Five Upcoming NSF-Funded HPC Systems – HPCwire

Five new HPC systems, three National Science Foundation-funded Capacity systems and two Innovative Prototype/Testbed systems, will be coming online through the end of 2021. John Towns, principal investigator (PI) for XSEDE, introduced panelists who described their upcoming systems at the PEARC20 virtual conference on July 29, 2020.

The systems are part of NSF's "Advanced Computing Systems & Services: Adapting to the Rapid Evolution of Science and Engineering Research" solicitation. The Capacity systems, which will support a range of computation and data analytics in science and engineering, are expected to be available for allocation via XSEDE's process for projects starting Oct. 1, 2021. The Innovative platforms, which will deploy specialized hardware tailored for artificial intelligence, will be available for early user access in late 2021, followed by a production period as the platforms mature.

The Practice and Experience in Advanced Research Computing (PEARC) Conference Series is a community-driven effort built on the successes of the past, with the aim to grow and be more inclusive by involving additional local, regional, national, and international cyberinfrastructure and research computing partners spanning academia, government and industry. Sponsored by the ACM, the world's largest educational and scientific computing society, PEARC20 is now taking place online through July 31.

This year's theme, "Catch the Wave," embodies the spirit of the community's drive to stay on pace and in front of all the new waves in technology, analytics, and a globally connected and diverse workforce. Scientific discovery and innovation require a robust, innovative and resilient cyberinfrastructure to support the critical research required to address world challenges in climate change, population, health, energy and environment.

Anvil: Composable, Interactive, User-Focused

Anvil, the first of the three NSF Category I Capacity Systems, was introduced by principal investigator Carol Song, senior research scientist and director of Scientific Solutions with Research Computing at Purdue University. Song stressed the capabilities of the $9.9-million system in providing composability and interactivity to meet the increasing demand for computational resources, enable new computational paradigms, expand HPC to non-traditional research domains, and train the next generation of researchers and the HPC workforce.

"It's not just the CPU nodes or the GPU nodes," Song said. "It's the entire ecosystem that focuses on getting more users onto the significant resources."

In partnership with Dell, DDN, and Nvidia, Anvil will feature:

The system, which will have a peak performance of 5.3 petaflops, will become operational by Sept. 30, 2021, with early user access the previous summer. It will be 90% allocated through XSEDE's XRAC allocations process, with the remainder available as discretionary allocations by Purdue.

Delta: The Mark of Change

Bill Gropp, director of the National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, introduced the Category I Delta system. With more than 800 late-model Nvidia GPUs, the $10-million resource will be the largest GPU system by FLOPS in NSF's portfolio at launch.

The name, taken from the Greek letter, was chosen to indicate change, said Gropp, PI of the new resource. "There's a lot of change in the hardware and software and the way we make use of the systems." Delta is intended to help drive broader adoption of GPU technology past the end of Dennard scaling.

Delta will feature:

Delta, like Anvil, will be 90% allocated through XSEDE and will start operations on Oct. 1, 2021.

Jetstream2: An Approaching Front in Cloud HPC

Jetstream2, the final new NSF Category I system, was introduced by PI David Hancock, director for advanced cyberinfrastructure at Indiana University. Building on the success of the Jetstream system, the new $10-million supercomputer will serve a similar role in interactive, configurable computing for research and education, thanks in part to agreements with Amazon, Google, and Microsoft to support cloud compatibility.

The configuration process for Jetstream2 is in its final phases and still ongoing, Hancock said, but the new system will feature:

The system, which will combine cyberinfrastructure from Indiana University, Arizona State University, Cornell University, the Texas Advanced Computing Center, and the University of Hawaii, is planned to begin early operations in August 2021 and production by October 2021. Additional partners include the University of Arizona, Johns Hopkins University [Galaxy team], and UCAR [Unidata team]. The system vendor partner for the project will be Dell, Inc. Jetstream2 will be XSEDE-allocated.

Neocortex: The Next Leap Forward in Deep Learning

Paola Buitrago, director of Artificial Intelligence and Deep Learning at the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh, presented on the center's new NSF Category II system, Neocortex. Named for the brain's center for higher functions, the new machine will serve as an experimental testbed of new technology to accelerate deep learning by orders of magnitude, similar to the sea change introduced by GPU technology in 2012.

"It's innovative and it's meant to be exploratory," PI Buitrago said. "In particular we have one goal, that we would like to scale this technology; we aim to engage a wide audience and foster adoption of innovative technologies in deep learning."

The $5-million system will pair Cerebras's CS-1 and Hewlett Packard Enterprise (HPE) Superdome Flex technology to provide 800,000 AI-optimized cores with a uniquely quick interconnect. Neocortex will feature:

Neocortex will enter its early user program in the fall of 2020.

Voyager: Specialized Processors, Optimized Software for AI

Voyager, another $5-million NSF Category II system, was introduced by PI Amit Majumdar of the San Diego Supercomputer Center. Beginning with focused select projects in October 2021, the supercomputer will stress specialized processors for training and inference linked with a high-performance interconnect, x86 compute nodes, and a rich storage hierarchy.

"We are most interested to see this as an experimental machine and see its impact and engagement of the user community," Majumdar said. "So we will reach out to AI researchers from a wide variety of science, engineering and social sciences [fields], and there will be deep engagement with users."

Supermicro Inc. and SDSC will jointly deploy Voyager, featuring:

Specific early user applications intended for Voyager will include the use of machine learning to improve trigger, event reconstruction, and signal-to-background in high-energy physics; achieving quantum-modeling-level accuracy in molecular simulations in chemistry, biophysics, and material science; and satellite image analysis.

Voyager's three-year testbed phase, focused on deep engagement with select users, will be followed by a minimum of two years of XSEDE allocation.
