What is a supercomputer? – CNBC

The race for the world's fastest supercomputer is on.

China held the lead for the last 5 years, but the United States has surged ahead with Summit. It's a $200 million government-funded supercomputer built for Oak Ridge National Laboratory in partnership with IBM and Nvidia.

Today's supercomputers are made up of thousands of connected processors, and their speed has grown exponentially over the past few decades. The first supercomputer, released in 1964, was called the CDC 6600. It used a single processor to achieve 3 million calculations per second. While that may sound impressive, it is tens of thousands of times slower than an iPhone.

The Lab Director of Oak Ridge, Thomas Zacharia, says, "I've always thought of supercomputing as a time machine, in the sense that it allows you to do things that most other people will be able to do in the future." As he explains, smartphones today are more powerful than the supercomputers used in the 1990s to work on the Human Genome Project.

Summit consists of over 36,000 processors from IBM and Nvidia that can perform 200 quadrillion calculations per second. Zacharia says that what a typical computer would take 30 years to do, Summit can accomplish in just an hour.

Summit takes up 5,600 square feet of floor space and has nearly 200 miles of cable. It uses 4,000 gallons of water per minute to stay cool and consumes enough power to run 8,000 homes.

Supercomputers are used for functions like forecasting weather and climate trends, simulating nuclear tests, performing pharmaceutical research and cracking encryption keys. Some initial projects on deck for Summit include researching possible genetic predispositions to cancer or opioid addiction.

By surpassing China, the U.S. has escalated the tech rivalry between the two countries.

As Nvidia CEO Jensen Huang told CNBC, "There's no question the race is on, but this is not the space race, this is the race to knowledge."

But faster supercomputers are already on the horizon. The European Union, Japan and China are all developing machines they say will outperform Summit. The next big frontier is exascale computing, that is, computers that can perform a billion times a billion calculations per second.
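To make those orders of magnitude concrete, the figures quoted above can be checked with a little arithmetic (a sketch; the numbers are the article's, rounded):

```python
# Back-of-the-envelope comparison of the performance figures above.
# 200 quadrillion = 2e17 calculations/second (Summit, as quoted);
# exascale = a billion times a billion = 1e18 calculations/second.
summit_flops = 200e15
exascale_flops = 1e18

# An exascale machine is this many times faster than Summit:
speedup = exascale_flops / summit_flops
print(speedup)  # 5.0
```

So crossing the exascale threshold means roughly a fivefold jump over Summit's quoted rate.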

John Kelly, IBM Senior Vice President of Cognitive Solutions and Research, says, "Think about what you can do with a system that every billionth of a second it does a billion calculations. We can model and simulate systems that we can't model and simulate today, and we can discover from the world's data insights into major breakthroughs in the area of healthcare, science, materials, etc."


Supercomputer may give us COVID meds to join vaccines – al.com

An Alabama scientist's research may lead to medicines that can team up with vaccines as another weapon against COVID-19, according to findings released today.

The team of University of Alabama in Huntsville biologist Dr. Jerome Baudry has already won an award for their work so far, and Baudry said the widespread scientific and technical cooperation to fight COVID reminds him of the space exploration of the '60s.

"No competitors, only collaborators, and a unique feeling of purpose," Baudry said.

Baudry's laboratory at UAH used a supercomputer to screen 50,000 natural compounds that might affect COVID. The computer found 125 candidates. Of those, 35 are now being studied at the University of Tennessee as possible medication ingredients.

"There is very good news on vaccine developments, and it is great," Baudry said today, "but it is important that we continue working on other pharmaceuticals. It's a bit like for the flu, where there are vaccines and there are pharmaceuticals, and they work together, not against each other. And what we learned here will be priceless to respond to other similar crises, if and when they show up in the future."

The Oak Ridge National Laboratory in Tennessee is leading the international effort to find medications to fight COVID.

"We used some of (their) data, and we basically added value to it," Baudry said. "Although it is unique in many ways (our focus on natural products, for instance), it is important to note that this project of ours is still integrated into the national COVID-19 research effort."

The first of the 35 compounds still in play is now being tested in a biosafe Memphis laboratory directed by Dr. Colleen Jonsson. "They use live virus infections of living cells grown in the equivalent of Petri dishes," Baudry said. "The chemicals that have a good profile can then be tested in animal models using mice."

The Baudry lab's work has already won one of five Hyperion HPC Innovation Excellence Awards, UAH said. The awards recognize achievements by users of high-performance computers. "Hyperion, the award sponsor, is the most respected group of industry experts in (high-performance computing)," Baudry said. "I was very surprised about the award because I did not even know that we had been under consideration. I was both very happy and very humbled."

Baudry has performed and led scientific research for 25 years and now holds the Mrs. Pei-Ling Chan Chair in the UAH Department of Biological Sciences. He said COVID has brought incredible cooperation and effort among scientists.

"There are no competitors, only collaborators, and a unique feeling of purpose that is absolutely wonderful," Baudry said. "This may be the most important experience of my professional life. It reminded me of what I read happened during the space exploration of the '60s. There is nothing we cannot do when we work together."


Singapore Researchers Plug in to World’s Fastest Supercomputer – HPCwire

SINGAPORE, Nov. 30, 2020 – A partnership between Singapore's national supercomputing resource, NSCC, and Japan's RIKEN and RIST allows Singapore-based researchers to directly access the vast resources of the world's fastest supercomputer, Fugaku. At 442 petaflops (PFLOPS) of computing power, Fugaku is nearly three times more powerful than its nearest competitor and sits at the top of the latest November 2020 edition of the global TOP500 supercomputer listing. Singapore researchers will now be able to apply for Fugaku's huge computing resources through regular project calls and connect directly via dedicated high-speed, high-bandwidth research optical fibre links of up to 100 Gbps. Access to Fugaku's computing resources is in addition to the petascale compute power that local researchers already have available at NSCC.
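The "nearly three times more powerful" claim is easy to sanity-check. Fugaku's 442 PFLOPS is from the text above; the runner-up figure (about 149 PFLOPS, which I believe was Summit's benchmark result on the same November 2020 list) is an assumption, not stated in this article:

```python
# Sanity check of the "nearly three times" claim.
# fugaku_pflops comes from the article; runner_up_pflops (~Summit's
# Nov 2020 TOP500 result) is an assumed figure, not from this article.
fugaku_pflops = 442.0
runner_up_pflops = 149.0

ratio = fugaku_pflops / runner_up_pflops
print(round(ratio, 1))  # roughly 3.0
```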

"Singapore researchers will have the honour of being one of the first in Asia to have access to the amazing compute power of Fugaku," said Associate Professor Tan Tin Wee, Chief Executive of NSCC. The broad spectrum of HPC cooperation between the two centres includes joint training, workshops and summer schools; talent exchange and student internship programmes; HPC support for research and talent capability building in areas like high-impact HPC-intensive national research projects and student competitions; and direct high-speed data transfer and storage linkages with both RIKEN and RIST. The Fugaku access, in addition to the supercomputer resources already available at NSCC, will give local researchers the opportunity to think beyond the conventional and to perform research at much more complex and larger scales.

NSCC's national supercomputer is already functioning at more than 90% capacity, with users from Singapore's research institutes, institutes of higher learning (IHLs) and industry leveraging the resources for research, education and industry-based HPC projects. The demand for HPC is expected to increase exponentially in Singapore's drive towards a smart nation. The government announced a S$200 million upgrade of the current supercomputer resources at the SupercomputingAsia 2019 (SCA19) conference in March 2019.

"Singapore's national supercomputing resources are already stretched thin and the HPC upgrades will ensure local researchers and organisations are better enabled, equipped and prepared for a much more digitalised future," added A/Prof Tan, who said that the current 1 PFLOPS system will be enhanced to a 10-15 PFLOPS system over the next few years. "In the meantime, local researchers can be assured of additional seamless, continued access to HPC resources in Singapore and through our partnership with RIKEN and RIST."

"Even before being fully commissioned, Fugaku has already made strides in providing solutions for the COVID-19 pandemic by speeding up the identification of potential drug candidates and developing simulations that demonstrate the spread of coronavirus in indoor settings and on trains," said Prof Satoshi Matsuoka, Director of R-CCS and one of the architects of the Fugaku supercomputer. "We hope that by sharing such examples and Fugaku's resources we can inspire more of our researchers, and colleagues from other countries, to leverage the power of HPC in their own research work. This partnership between the top-tier national HPC centres of Japan and Singapore is a significant step in that direction."

"RIST has been collaborating with NSCC by exchanging information on the promotion of shared supercomputer use since 2016. Project calls for the supercomputer Fugaku started this year, and NSCC and RIST have been exploring cooperation on Fugaku. I believe that the new partnership between NSCC and RIKEN will promote collaboration between Singapore and Japan, and that we can work together to produce amazing outcomes on Fugaku," said Dr Hideyuki Takatsu, Managing Director of RIST.

Supercomputers have been instrumental in most of the world's major scientific advancements. These include enabling complex computational and data-intensive tasks to be completed much more quickly in fields as diverse as advanced scientific modelling and simulations, artificial intelligence, weather forecasting, climate research, oil and gas exploration, chemical and biomolecular modelling, and quantum computing. The research has led to modern scientific achievements like deciphering the human genome, enhanced air travel, space exploration, biomedicine, unravelling the secrets of the universe and even research on solutions for pandemics like COVID-19.

An MoU was signed on 16 September 2020 between R-CCS and NSCC, complementing an existing MoU with RIST. The collaboration with RIKEN covers access and data sharing for Fugaku, while RIST will work with NSCC on promoting HPC research utilisation by cooperating on HPC project research calls and shared supercomputing use. Singapore researchers interested in applying for HPC resources from Japan can do so at https://www.nscc.sg/open-calls-hpc-resources-from-japan/.

About the National Supercomputing Centre (NSCC) Singapore

The National Supercomputing Centre (NSCC) Singapore was established in 2015 and manages Singapore's first national petascale facility with available high-performance computing (HPC) resources. As a National Research Infrastructure funded by the National Research Foundation (NRF), we support the HPC research needs of the public and private sectors, including research institutes, institutes of higher learning, government agencies and companies. With the support of its stakeholders, including the Agency for Science, Technology and Research (A*STAR), Nanyang Technological University (NTU), National University of Singapore (NUS), Singapore University of Technology and Design (SUTD), National Environment Agency (NEA) and Technology Centre for Offshore and Marine, Singapore (TCOMS), NSCC catalyses national research and development initiatives, attracts industrial research collaborations and enhances Singapore's research capabilities. For more information, please visit https://www.nscc.sg/.

About RIKEN Center for Computational Science (R-CCS)

As the leadership center of high-performance computing, we explore the science of computing, the science by computing, and the science for computing. We at the RIKEN Center for Computational Science (R-CCS) carry out the following mission: develop and operate the supercomputer Fugaku efficiently and effectively to serve as a core of high-performance computing research, and further expand the number of users, improve the ease of use, and promote educational activities; facilitate leading-edge infrastructures for research based on K and Fugaku, and conduct translational research to elevate the operational technologies for large-scale computing facilities to world-leading levels; and conduct cutting-edge research on high-performance computing, promoting the results through open-source software so that our deliverables can incubate new value in the world's technological developments based on high-performance computing.

About Research Organization for Information Science and Technology (RIST)

Research Organization for Information Science and Technology (RIST) is a general incorporated foundation that has been carrying out usage promotion services for the Japanese flagship computers (first the K computer, then its successor, the supercomputer Fugaku, since 2020), commissioned by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), since 2014. Our scope includes project selection, user support, and helping spread the research results of projects. In addition, since 2017, RIST has also served as the operation office of the innovative High Performance Computing Infrastructure (HPCI). Within the HPCI framework, we are in charge of managing computational resources and promoting usage.

Source: NSCC


GENCI Supercomputer Simulation Illuminates the Dark Universe – HPCwire

What we can see and touch are, in the scheme of the universe, relatively minor components, with visible matter and tangible mass constituting just 16 percent of the universe's mass and 30 percent of its energy, respectively. The remainder consists of dark matter and dark energy, which are invisible and intangible, making supercomputer simulations an integral part of the investigative workflow for understanding these cosmic forces. Now, a team of researchers from 16 institutions across five countries has announced the results of Extreme-Horizon, a massive supercomputer-powered simulation of galaxy formation that tested assumptions about the nature of dark energy and dark matter.

The Extreme-Horizon simulation ran on Joliot-Curie, a supercomputer owned by GENCI and hosted by CEA at TGCC. Among Joliot-Curie's four partitions (one Intel Skylake, one Intel Knights Landing, one AMD Epyc Rome and one Intel Cascade Lake), the AMD partition is the most powerful (7.0 Linpack petaflops), placing 38th on the most recent Top500 list of the world's most powerful publicly ranked supercomputers.

Using Joliot-Curie, the research team simulated how cosmic structures have evolved from the Big Bang through the present day, crunching over three terabytes of data at multiple points throughout the simulation.

Extreme-Horizon yielded some important results for astrophysicists. First, the simulation's higher resolution meant that Extreme-Horizon was able to paint a picture of how cold gases pooled in galaxies in low-density regions of space and how new galaxies formed in the early days of the post-Big Bang universe.

Second, the simulation produced a correction factor for black holes that obscure our view of intergalactic hydrogen clouds here on Earth. With that correction factor in hand, astrophysicists will be better able to characterize those clouds and the trends in the distribution of matter in the universe.

Extreme-Horizon is one of the Grand Challenges undertaken by GENCI, France's high-performance computing agency, to test the abilities of its supercomputing systems. "These Grand Challenges represent a unique opportunity for selected scientists to gain access to the supercomputer's resources, enabling them to make major advances, or even achieve world firsts," GENCI wrote in a press release.

About the research

The research discussed in this article was published as a letter to the editor titled "Formation of compact galaxies in the Extreme-Horizon simulation" in the November 2020 issue of the journal Astronomy & Astrophysics. Twenty-one authors across 16 institutions in five countries contributed to the letter.


Pawsey’s Galaxy Supercomputer Aids Telescope in Creating New Atlas of the Universe – HPCwire

Dec. 2, 2020 The Australian Square Kilometre Array Pathfinder (ASKAP), developed and operated by Australias national science agency, CSIRO, mapped approximately three million galaxies in just 300 hours.

The Rapid ASKAP Continuum Survey is like a Google map of the universe where most of the millions of star-like points on the map are distant galaxies, about a million of which we've never seen before.

CSIRO Chief Executive Dr. Larry Marshall said ASKAP brought together world-class infrastructure with scientific and engineering expertise to unlock the deepest secrets of the universe.

"ASKAP is applying the very latest in science and technology to age-old questions about the mysteries of the universe and equipping astronomers around the world with new breakthroughs to solve their challenges," Dr. Marshall said.

It's all enabled by innovative receivers developed by CSIRO that feature phased array feed technology, which sees ASKAP generate more raw data at a faster rate than Australia's entire internet traffic.

In a time when we have access to more data than ever before, ASKAP and the supercomputers that support it are delivering unparalleled insights and wielding the tools that will underpin our data-driven future to make life better for everybody.

Minister for Industry, Science and Technology, Karen Andrews said ASKAP is another outstanding example of Australias world-leading radio astronomy capability.

ASKAP is a major technological development that puts our scientists, engineers and industry in the driver's seat to lead deep space discovery for the next generation.

"This new survey proves that we are ready to make a giant leap forward in the field of radio astronomy," Minister Andrews said.

The telescope's key feature is its wide field of view, generated by new CSIRO-designed receivers, which enables ASKAP to take panoramic pictures of the sky in amazing detail.

Using ASKAP at CSIRO's Murchison Radio-astronomy Observatory (MRO) in outback Western Australia, the survey team observed 83 percent of the entire sky.

The initial results are published in the Publications of the Astronomical Society of Australia.

This record-breaking result proves that an all-sky survey can be done in weeks rather than years, opening new opportunities for discovery.

The new data will enable astronomers to undertake statistical analyses of large populations of galaxies, in the same way social researchers use information from a national census.

"This census of the universe will be used by astronomers around the world to explore the unknown and study everything from star formation to how galaxies and their super-massive black holes evolve and interact," lead author and CSIRO astronomer Dr. David McConnell said.

With ASKAP's advanced receivers, the RACS team only needed to combine 903 images to form the full map of the sky, significantly fewer than the tens of thousands of images needed for earlier all-sky radio surveys conducted by major world telescopes.

For the first time ASKAP has flexed its full muscles, building a map of the universe in greater detail than ever before, and at record speed.

"We expect to find tens of millions of new galaxies in future surveys," Dr. McConnell said.

The 13.5 exabytes of raw data generated by ASKAP were processed using hardware and software custom-built by CSIRO.

The Pawsey Supercomputing Centre's Galaxy supercomputer converted the data into 2-D radio images containing a total of 70 billion pixels.

The final 903 images and supporting information amount to 26 terabytes of data.
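A quick back-of-the-envelope check ties the survey numbers together (all figures as quoted above):

```python
# Rough arithmetic on the RACS figures quoted above.
total_pixels = 70e9   # 70 billion pixels in the full map
num_images = 903      # images combined to form it

pixels_per_image = total_pixels / num_images
print(round(pixels_per_image / 1e6))  # ~78 (megapixels per image)
```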

Pawsey Executive Director Mark Stickells said the supercomputing capability was a key part of ASKAP's design.

"The Pawsey Supercomputing Centre has worked closely with CSIRO and the ASKAP team since our inception, and we are proud to provide essential infrastructure that is supporting science delivering great impact," Mr Stickells said.

The images and catalogs from the survey will be made publicly available through the CSIRO Data Access Portal and hosted at Pawsey.

Source: Annabelle Young, CSIRO


Cerebras CS-1 supercomputer uses the worlds largest chip – Inceptive Mind

On the occasion of the SC20 conference, Cerebras Systems, in collaboration with researchers at the National Energy Technology Laboratory (NETL), showed that its latest single wafer-scale Cerebras CS-1 could outperform one of the fastest supercomputers in the U.S. by more than 200 times.

The Cerebras CS-1 is the world's first wafer-scale computer system. It is 26 inches tall, fits in a standard data center rack, and is powered by a single Cerebras Wafer Scale Engine (WSE) chip. It is the world's largest chip, measuring 72 square inches (462 cm²), the largest square that can be cut from a 300 mm wafer. All processing, memory, and core-to-core communication occur on the wafer. In total, there are 1.2 trillion transistors in an area of 72 square inches.

The wafer holds almost 400,000 individual processor cores, each with its private memory and a network router. The cores form a square mesh. Each router connects to the routers of the four nearest cores in the mesh. The cores share nothing; they communicate via messages sent through the mesh.
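The mesh layout described above can be sketched in a few lines; the grid size and function names below are illustrative, not Cerebras' actual topology or API:

```python
# Illustrative sketch of a square 2-D mesh of cores in which each
# router links only to the routers of its four nearest neighbours.
# Dimensions and names are hypothetical, not the WSE's real layout.

def mesh_neighbors(row, col, rows, cols):
    """Coordinates of the cores this core's router connects to."""
    candidates = [(row - 1, col), (row + 1, col),
                  (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

# Corner cores have 2 links, edge cores 3, interior cores 4.
print(len(mesh_neighbors(0, 0, 8, 8)))  # 2
print(len(mesh_neighbors(0, 3, 8, 8)))  # 3
print(len(mesh_neighbors(4, 4, 8, 8)))  # 4
```

Because the cores share nothing, any data exchange becomes a message hopping through this mesh, which is why the fabric's bandwidth and latency matter so much.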

Cerebras CS-1 will be used especially for scientific research and science-related projects. The machine can solve a large, sparse, structured system of linear equations of the sort that arises in modeling physical phenomena like fluid dynamics using a finite-volume method on a regular three-dimensional mesh. Solving these equations is fundamental to such efforts as forecasting the weather; finding the best shape for an airplane's wing; predicting the temperatures and the radiation levels in a nuclear power plant; modeling combustion in a coal-burning power plant; and making pictures of the layers of sedimentary rock in places likely to contain oil and gas.
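As a toy illustration of the kind of sparse, structured system such discretizations produce, here is a Jacobi iteration on a one-dimensional Poisson problem in plain Python. This is a sketch only: real finite-volume codes work on 3-D meshes and use far faster solvers than Jacobi.

```python
# Toy example: the 1-D Poisson equation u'' = -1 with u(0) = u(1) = 0,
# discretized on n interior points, gives a sparse tridiagonal system
#   (-u[i-1] + 2*u[i] - u[i+1]) / h**2 = 1.
# Jacobi iteration solves it by repeatedly averaging neighbours.
n = 50
h = 1.0 / (n + 1)
b = [h * h] * n          # right-hand side, scaled by h**2
u = [0.0] * n            # initial guess

for _ in range(20000):   # fixed iteration budget; enough to converge here
    u = [0.5 * (b[i]
                + (u[i - 1] if i > 0 else 0.0)
                + (u[i + 1] if i < n - 1 else 0.0))
         for i in range(n)]

# The analytic solution is u(x) = x * (1 - x) / 2, so u near x = 0.5
# should be close to 0.125.
print(round(u[n // 2], 3))  # 0.125
```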

To achieve such results, Cerebras points to three factors behind the computer's speed: the CS-1's memory performance, the high bandwidth and low latency of the on-wafer communication fabric, and a processor architecture optimized for high-bandwidth computing.

In return, of course, you have a chip about 60 times the size of a large conventional chip like a CPU or GPU. It was built to provide a much-needed breakthrough in computer performance for deep learning.

The researchers used the CS-1 to do sparse linear algebra, typically used in computational physics and other scientific applications. Using the wafer, they achieved a performance more than 200 times faster than that of NETL's Joule 2.0 supercomputer. NETL's Joule is the 24th fastest supercomputer in the U.S. and 82nd fastest on a list of the world's top 500 supercomputers. It uses Intel Xeon chips with 20 cores per chip for a total of 16,000 cores.


Supercomputer Market Overview with Qualitative analysis, Competitive landscape & Forecast by 2027 – The Market Feed

The Global Supercomputer Market report by Reports and Data is an all-encompassing study of the global Supercomputer market. The report serves as a prototype of the highly functional Supercomputer industry. Our panel of market researchers has performed quantitative and qualitative assessments of the global Supercomputer market dynamics in a bid to forecast global market growth over the forecast period. Reports and Data have taken into consideration several factors, such as market penetration, pricing structure, product portfolios, end-user industries, and the key market growth drivers and constraints, to endow readers with a sound understanding of the market. The report provides the reader with a panoramic view of the Supercomputer market, supported by key statistical data and industry-verified facts. Hence, it examines the size, share, and volume of the Supercomputer industry in the historical period to forecast the same valuations for the forecast period.

Request a sample copy of this report @https://www.reportsanddata.com/sample-enquiry-form/2921

The Supercomputer market research report is broadly bifurcated in terms of product type, application spectrum, end-user landscape, and competitive backdrop, which would help readers gain more impactful insights into the different aspects of the market. Under the competitive outlook, the report's authors have analyzed the financial standing of the leading companies operating across this industry. The gross profits, revenue shares, sales volume, manufacturing costs, and the individual growth rates of these companies have also been ascertained in this section. Our team has accurately predicted the future market scope of the new entrants and established competitors using several analytical tools, such as Porter's Five Forces Analysis, SWOT analysis, and investment assessment.

Market segments by Top Manufacturers:

IBM Corporation, Cray Inc., Lenovo Inc., Sugon, Inspur, Dell EMC, Hewlett Packard Enterprise, Atos SE, FUJITSU, and Penguin Computing, among others.

Market split by Type, can be divided into:

Market split by Application, can be divided into:


The latest report is furnished with a detailed examination of the Supercomputer market and the global economic landscape ravaged by the ongoing COVID-19 pandemic. The pandemic has significantly affected millions of people's lives. Besides, it has turned the global economy upside down, which has adversely impacted the Supercomputer business sphere. Thus, the report encompasses the severe effects of the coronavirus pandemic on the Supercomputer market and its key segments.

Geographical Scenario:

The global Supercomputer market report comprehensively studies the present growth prospects and challenges for the key regions of the Supercomputer market. The report continues to evaluate the revenue shares of these regions over the forecast timeline. It further scrutinizes the year-on-year growth rate of these regions over the projected years. The leading regions encompassed in this report:

Browse the full report description, along with the ToCs and List of Facts and Figures @ https://www.reportsanddata.com/report-detail/supercomputer-market

Key Coverage of the Report:

The report considers the following timeline for market estimation:

To get a customized sample of the report, click on the link mentioned alongside @ https://www.reportsanddata.com/sample-enquiry-form/2921

Thank you for reading our report. In case of further queries regarding the report or inquiry about its customization, please connect with us. We will ensure your report is well-suited to your requirements.


New IBM encryption tools head off quantum computing threats – TechTarget

The messages surrounding quantum computers have almost exclusively focused on the sunny side of how these machines will solve infinitely complex problems today's supercomputers can't begin to address. But rarely, if ever, have the masters of hype focused on the dark side of what these powerful machines might be capable of.

For all the good they promise, quantum systems, specifically fault-tolerant quantum systems, are able to crumble the security that guards sensitive information on government servers and those of the largest Fortune 500 companies.

Quantum computers are capable of processing a vast number of numerical calculations simultaneously. Classical computers deal in ones and zeros, while a quantum computer can use ones and zeros as well as achieve a "superposition" of both ones and zeros.
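That "superposition" of ones and zeros can be illustrated with a toy state-vector simulation (a sketch of the math, not how real quantum hardware is programmed):

```python
import math

# Toy state-vector view of a single qubit. A classical bit is 0 or 1;
# a qubit carries an amplitude for each, and measurement probabilities
# are the squared magnitudes of those amplitudes.
zero = [1.0, 0.0]                                  # behaves like a classical 0
superposed = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal superposition

def probabilities(state):
    """Measurement probabilities for the 0 and 1 outcomes."""
    return [amp ** 2 for amp in state]

print(probabilities(zero))        # [1.0, 0.0]
print(probabilities(superposed))  # approximately [0.5, 0.5]
```

With k qubits the state vector has 2**k amplitudes, which is the source of the "vast number of calculations simultaneously" framing above.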

Earlier this year, Google achieved quantum supremacy with its quantum system by solving a problem thought to be impossible to solve with classical computing. The system was able to complete a computation in 200 seconds that would take a supercomputer about 10,000 years to finish -- literally 1 billion times faster than any available supercomputer, company officials boasted.

Quantum computers' refrigeration requirements and the cost of the system itself, which has not been revealed publicly, make it unlikely to be a system IBM or other quantum makers could sell as they would supercomputing systems. But quantum power is available through cloud services.

Faced with this upcoming superior compute power, IBM has introduced a collection of improved cloud services to strengthen users' cryptographic key protection as well as defend against threats expected to come from quantum computers.

Building on current standards used to transmit data between an enterprise and the IBM cloud, the new services secure data using a "quantum-safe" algorithm. Though quantum computers are years away from broad use, it's important to identify the potential risk that fault-tolerant quantum computers pose, including the ability to quickly break encryption algorithms to get sensitive data, IBM said.

Part of IBM's new strategic agenda includes the research, development and standardization of core quantum-safe cryptography algorithms as open source tools such as CRYSTALS and Open Quantum Safe grow in popularity.


The agenda also includes the standardization of governance tools and accompanying technologies to support corporate users as they begin integrating quantum systems alongside existing classical systems.

Some analysts applaud IBM for extending support for the new cloud services beyond the security needs of existing hybrid cloud users to quantum computers as a way of future-proofing the new offerings.

"With emerging technologies like quantum computing, users can't accurately predict how long it will be before they need services like this," said Judith Hurwitz, president of Hurwitz & Associates. "But prices [of quantum systems] could come down and the technology mature quicker than you anticipate, so you may need services like this to work across platforms. It could also be IBM just wanting to show how far ahead of everyone else they are."

While fault-tolerant quantum computers are a long way from reality for the vast majority of hackers, some analysts point out that adversarial governments could access such systems sooner rather than later to break the security schemes of the U.S. military and other federal government agencies.

"There could be legitimate concern about some well-organized and funded nation-states using quantum computers to crack algorithms to get at sensitive information, but there is little chance cybercriminals can get access to a quantum system anytime soon," said Doug Cahill, vice president and group director of cybersecurity with Enterprise Strategy Group. "But the short-term benefit here is future-proofing for mission critical workloads."

The need for data privacy is more critical as users become increasingly dependent on data, said Hillery Hunter, vice president and CTO of IBM Cloud, in a prepared statement. Security and compliance remain central to IBM's Confidential Computing initiative, Hunter said, as it is for corporate users in highly regulated industries where it's critical to keep proprietary data secure.

IBM also delivered an improved version of its Key Protect offering, designed for lifecycle management for encryption keys used in IBM Cloud services or in applications built by users. The new version has the ability to use quantum-safe cryptography-enabled Transport Layer Security (TLS) connections, which helps protect data during key lifecycle management.

The company also unveiled quantum-safe cryptography support for application transactions. For instance, when cloud-native containerized applications run on Red Hat's OpenShift or IBM Cloud Kubernetes Services, TLS connections secured with quantum-safe cryptography protect data in transit, guarding against breaches.

IBM's Cloud Hyper Protect Crypto Service provides users with Keep Your Own Key features. The offering is built on FIPS-140-2 Level 4-certified hardware, which gives users exclusive key control and authority over data and workloads that are protected by the keys.

"What I like about this is you get to keep your own [encryption] keys for cloud data encryption, which is unique," said Frank Dzubeck, president of Communications Network Architects. "No one but you -- not even cloud administrators -- can access your data."

The product is primarily meant for application transactions where there is a greater need for advanced cryptography. Users can keep their private keys secured within the cloud hardware security module while offloading TLS to IBM Cloud Hyper Protect Crypto Services, creating a more secure connection to the web server. Users can also apply application-level encryption to sensitive data, such as credit card numbers, before it is stored in a database system.

Originally posted here:

New IBM encryption tools head off quantum computing threats - TechTarget

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research – TechCrunch

As Nvidia continues to work through its deal to acquire Arm from SoftBank for $40 billion, the computing giant is making another big move to lay out its commitment to investing in U.K. technology. Today the company announced plans to develop Cambridge-1, a new £40 million AI supercomputer that will be used for research in the country's health industry, the first supercomputer built by Nvidia specifically for external research access, it said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy's and St Thomas' NHS Foundation Trust, King's College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company's second supercomputer in the country. The first is already in development at the company's AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will be playing in their work. On the other, none of them (nor their institutions) have the resources to meet that demand on their own. That's driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the U.K.: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in vaccine and drug discovery. It's an especially timely piece of news, given that we are in a global health pandemic and all drug makers and researchers are on the hunt to understand more about, and build vaccines for, COVID-19.

The news is coinciding with Nvidia's industry event, the GPU Technology Conference.

"Tackling the world's most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI," said Jensen Huang, founder and CEO of Nvidia, in his keynote at the event. "The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."

The company plans to dedicate Cambridge-1 resources in four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; university-granted compute time; health-focused AI startups; and education for future AI practitioners. It's already building specific applications in areas, like the drug discovery work it's doing with GSK, that will be run on the machine.

The Cambridge-1 will be built on Nvidia's DGX SuperPOD system and will deliver 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th fastest supercomputer in the world.

Number 29 doesn't sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market, while still not a mass-market enterprise, is increasingly developing more focus around specific areas of research and industries. In this case, it underscores how health research has become more complex, and how applications of artificial intelligence have both spurred that complexity and, in the form of stronger computing power, provided a better route (some might say one of the only viable routes in the most complex of cases) to medical breakthroughs and discoveries.

It's also notable that the effort is being forged in the U.K. Nvidia's deal to buy Arm has seen some resistance in the market (one group is leading a campaign to stop the sale and take Arm independent), but this latest announcement underscores that the company is already involved pretty deeply in the U.K. market, bolstering Nvidia's case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

"AI and machine learning are like a new microscope that will help scientists to see things that they couldn't see otherwise," said Dr. Hal Barron, chief scientific officer and president, R&D, GSK, in a statement. "NVIDIA's investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry's greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK's new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.'s outstanding scientists."

"The use of big data, supercomputing and artificial intelligence have the potential to transform research and development: from target identification through clinical research and all the way to the launch of new medicines," added James Weatherall, PhD, head of Data Science and AI, AstraZeneca, in his statement.

"Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding," said Sebastien Ourselin, head of the School of Biomedical Engineering & Imaging Sciences at King's College London. "These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research. It will be truly transformational for patient health and treatment pathways."

Dr. Ian Abbs, chief executive and chief medical director of Guy's and St Thomas' NHS Foundation Trust, said: "If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare. More advanced AI means better care for our patients."

"Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic data sets," added Gordon Sanghera, CEO, Oxford Nanopore Technologies. "These complementary innovations in data analysis support a wealth of impactful science in the U.K., and critically, support our goal of bringing genomic analysis to anyone, anywhere."


With Crossroads Supercomputer, HPE Notches Another DOE Win – The Next Platform

When you come to the crossroads and make a big decision about selling your soul to the devil to get what you want, it is supposed to be a dramatic event, the stuff that legends are made of.

In this case, with the announcement of a $105 million deal for Hewlett Packard Enterprise to build the Crossroads supercomputer for the so-called Tri-Labs of the US Department of Energy (that would be Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, which are, among other things, responsible for managing the nuclear weapons stockpile for the US government), the drama has more to do with the changes in the Intel Xeon and Xeon Phi processor and Omni-Path interconnect roadmaps than anything else.

The DOE supercomputer architects always have to hedge their compute, networking, and storage bets across the Tri-Labs, as do their peers at Oak Ridge National Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory, and a few other national labs. You want commonality because it drives down costs and complexity, but you want distinction so you can push the limits on several technologies at the same time, because that is what HPC centers are actually for and it helps balance out risks when roadmaps get crumpled.

The Advanced Simulation and Computing (ASC) program at the DOE has funded the development of so many different computing architectures over many years that it is hard to keep track of them all. There are several streams of systems that are part of the National Nuclear Security Administration, which runs simulations relating to the nuclear stockpile, and there is an interleaving of machines such that Lawrence Livermore gets the biggest one in what is called Advanced Technology Systems, or ATS, program and then either Los Alamos or Sandia gets the next one and they share it. Like this:

In the chart above, the ATS 2 system at Lawrence Livermore is the existing Sierra system, based on IBM Power9 processors and Nvidia V100 GPU accelerators hooked together with Mellanox (now Nvidia) 100 Gb/sec InfiniBand interconnects. The ATS 4 system is Crossroads, the award for which is being announced now, and the ATS 5 machine, which is not shown on this old roadmap, is the future El Capitan machine, built from future AMD Epyc CPUs and Radeon Instinct GPUs interlinked by a future rev of HPE's Cray Slingshot interconnect, which we detailed back in March and which is expected to surpass 2 exaflops in peak 64-bit floating point processing capacity when it is installed in 2022. The Commodity Technology Systems are less about creating thoroughbred supercomputers than regular workhorses that can pull the HPC plows for less money than the Advanced Technology Systems, which are first-of-a-kind systems that push the technology.

The current Trinity machine, shared by Los Alamos and Sandia and installed at Los Alamos, is the one that is coming to the end of its life and the one that Crossroads will replace (Trinity is ATS 1 in the chart above). The plan was to have the Crossroads replacement ready to go sometime about now, when Trinity would be sunsetted, but processor roadmaps at Intel have been problematic during the 10 nanometer generations of Xeon SP and Xeon Phi processors, to the point where Intel killed off the Xeon Phi line in July 2018 and deprecated its 200 Gb/sec Omni-Path interconnect a year later. The future Xeon SP, Xeon Phi, and Omni-Path technologies were the obvious and easiest choices for the compute and networking in the Crossroads system, given the Trinity all-CPU design and Intel's desire to position Omni-Path as the successor to the Cray Aries XC interconnect used in the Trinity system. As it turns out, Intel has just this week spun out the Omni-Path business into a new company, called Cornelis Networks, founded by some InfiniBand innovators, and Omni-Path will live on in some form independent of Intel. So the technology used in Trinity will evolve and presumably be used in HPC and AI systems of the future. But that spinout did not come in time to keep Tri-Labs from choosing HPE's Cray Slingshot interconnect, a variant of Ethernet with HPC extensions and congestion control and adaptive routing that makes a hyperscaler envious, for Crossroads. For whatever reason, InfiniBand from Mellanox was not in the running.

The existing Trinity system is an all-CPU design, and Tri-Labs has been sticking with all-CPU systems for the machines that go into Los Alamos and Sandia under the ATS program; obviously, Lawrence Livermore chose a hybrid CPU-GPU machine for the Sierra and El Capitan systems. Again, this is about hedging technology bets as well as pushing the price/performance curves on a number of architectural fronts among the DOE labs. Trinity was built in stages, and that was not an architectural choice so much as a necessary one, as Jim Lujan, HPC program manager at Los Alamos, told us back in July 2016 when the Knights Landing Xeon Phi parallel processors were first shipping. Trinity was supposed to be entirely composed of these Xeon Phi processors, but they were delayed, so the base machine was built from 9,486 two-socket nodes using the 16-core Haswell Xeon E5-2698 v3 processors and another 9,934 nodes based on the 68-core Xeon Phi 7250 processors, for a total of 19,420 nodes with 979,072 cores and a peak performance of 41.5 petaflops at double precision. Trinity had 2.07 PB of main memory, and implemented a burst buffer on flash storage in the nodes that had a combined 3.7 PB of capacity and 3.3 TB/sec of sustained bandwidth. The parallel file system attached to Trinity had 78 PB of capacity and 1.45 TB/sec of sustained bandwidth.

Importantly, the Trinity system had about 8X the performance on the ASC workloads compared to the prior Cielo system that predated it, which was an all-CPU Cray XE6 system installed in 2013 based on AMD Opteron 6136 processors and the Cray Gemini 3D torus interconnect.

As it turns out, Crossroads is running about two years late, and that is both a good thing and a bad thing. The bad thing is that this means Trinity has to have its life extended to cover between now and when Crossroads is up and running in 2022. The good news is that the processor and interconnect technology will be all that much better when Crossroads is fired up. (We realize that this is a big presumption.)

The DOE and HPE are not saying much about Crossroads at the moment, but we do know that it will be based on the future Sapphire Rapids Xeon SP processor from Intel, and that it will, like Trinity, be an all-CPU design. Sapphire Rapids, you will recall, is the CPU motor that is going into the Aurora A21 system at Argonne National Laboratory, but in that case, the vast majority of the flops in the system will come from six Xe discrete GPU accelerators attached to each pair of Sapphire Rapids processors, as we talked about in November last year. Crossroads will be based on the HPE Cray EX system design, formerly known as Shasta by Cray, and will use the Slingshot interconnect, as we said, as well as liquid cooling for compute cabinets to allow it to run faster than it otherwise might and cool more efficiently. The systems will run the Cray Programming Environment, including Cray's implementation of Linux and its compiler stack.

The Cori supercomputer at Lawrence Berkeley and the Trinity supercomputer at Tri-Labs were based on a mix of Xeon and Xeon Phi processors and the Aries interconnect. So it was reasonable to expect, given past history, that the Crossroads machine shared by Los Alamos and Sandia would use similar technology to the Perlmutter NERSC-9 system at Lawrence Berkeley, since these labs have tended to move to technologies at the same time over recent years. But that has not happened this time.

The Perlmutter and Crossroads machines are based on the Shasta (now HPE Cray EX) systems and both use the Slingshot interconnect, but the resemblance ends there. Perlmutter is using a future Milan Epyc processor from AMD plus Ampere A100 GPU accelerators from Nvidia. The Perlmutter machine will be installed in phases, with the GPU-accelerated nodes (more than 6,000 GPUs) coming in late 2020 and the more than 3,000 CPU-only nodes coming in the middle of 2021. Crossroads has no accelerators and is using the Sapphire Rapids Xeon SPs only for compute, albeit ones that will support Intel's Advanced Matrix Extensions for boosting the performance of machine learning applications.

HPE says that on applications, Crossroads will have about 4X the oomph of Trinity, which should mean somewhere around 165 petaflops at peak double precision. This is quite a bit of oomph for an all-CPU system, but then again, it is dwarfed by the 537 petaflops that the Fugaku supercomputer at RIKEN laboratory in Japan, built by Fujitsu using its A64FX Arm chip with wonking vector engines, delivers at double precision.


What happens when two planets crash together? This supercomputer has the answer – Digital Trends


"Obviously, we have no idea what really happens when planets collide, because we can't build planets in the lab and smash them together," said Jacob Kegerreis, a postdoctoral researcher in a specialist lab at the U.K.'s Durham University called the Institute for Computational Cosmology.

So Kegerreis and his colleagues did the next best thing: They booked time on a supercomputer and used it to run hundreds of simulations of planets crashing into one another, a demolition derby for astrophysics geniuses.

"It's all about doing calculations," he told Digital Trends. "There's no reason you couldn't do it by hand, it would just take forever. It's really exactly how video games work. If you've got a character, even a 2D one like Mario, and you need them to jump and fall back down under gravity, that means the program has an equation for gravity, and it basically does a very, very simple simulation to work out how quickly that character falls. It's really the same principle. We just try and use slightly more careful equations to do these more physics-based things."
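Kegerreis' video game analogy translates almost directly into code. A minimal sketch of that idea (the constants and step size below are made up for illustration; this is not the team's code): gravity updates a character's velocity each tiny time step, velocity updates its height, and the loop runs until the character lands.

```python
GRAVITY = -9.8   # acceleration due to gravity, m/s^2
DT = 0.01        # simulation time step, seconds

def simulate_jump(initial_velocity):
    """Step a character's height and velocity forward until it lands."""
    height, velocity, t = 0.0, initial_velocity, 0.0
    while True:
        velocity += GRAVITY * DT   # gravity changes velocity...
        height += velocity * DT    # ...and velocity changes position
        t += DT
        if height <= 0.0:          # back on the ground
            return t

# A jump at 5 m/s should land after roughly 2 * 5 / 9.8, about one second.
airtime = simulate_jump(5.0)
```

Swapping this jump equation for gravity acting between millions of particles is, as Kegerreis says, the same principle with more careful equations.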

Of course, what are to Kegerreis slightly more involved equations are, to the rest of us, mind-boggling magnitudes of complexity. When the researchers working on the project created their model planets, they represented them as millions of particles, each pulling on one another under gravity and pushing with material pressure. The model takes into account painstakingly accurate real-life details such as how planetary materials like rock and iron actually behave at different temperatures and densities, how gravity and pressure impact the particles, and how these particles interact according to the equations of hydrodynamics.
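A toy version of that particle picture, stripped down to three particles and Newtonian gravity alone (the real models add material pressure, equations of state and hydrodynamics, and use millions of particles), might look like this; all masses and positions below are invented for illustration and have nothing to do with the team's SWIFT code:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(particles, dt):
    """Advance a list of [mass, x, y, vx, vy] particles by one time step."""
    for i, (mi, xi, yi, vxi, vyi) in enumerate(particles):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(particles):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            a = G * mj / (r * r)       # acceleration toward particle j
            ax += a * dx / r
            ay += a * dy / r
        particles[i][3] += ax * dt     # update velocities first...
        particles[i][4] += ay * dt
    for p in particles:                # ...then move everything
        p[1] += p[3] * dt
        p[2] += p[4] * dt

# Two heavy bodies on a collision course, plus a light bystander.
bodies = [[1e24, 0.0, 0.0, 0.0, 0.0],
          [1e24, 1e7, 0.0, 0.0, 0.0],
          [1e3, 5e6, 1e7, 0.0, 0.0]]
for _ in range(100):
    step(bodies, 1.0)
```

After 100 simulated seconds the heavy bodies have begun falling toward each other and the bystander is being dragged in; scale the particle count up by six orders of magnitude and it becomes clear why "a daunting number of calculations" demands a supercomputer.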

"We need a supercomputer because we require many millions of particles to resolve the details of what happens in these messy collisions, especially with low-density atmospheres," he said. "This means a daunting number of calculations to do very many times in order to see how the system evolves throughout the impact."

The simulation in the team's most recent study potentially sheds some light on the creation of the moon. Today's most widely accepted theory is that the moon was formed as the result of a collision between Earth and another planet about the size of Mars. It is hypothesized that the debris from this impact became trapped in Earth orbit and eventually coagulated into the moon.

But although this much is broadly agreed upon, Kegerreis said that there are maybe five or six plausible ideas for the specific type of impact scenario. By modeling these, the team was able to simulate details about how much of Earth's atmosphere would have been lost in the most popular moon-forming scenarios. Numbers, he said, range from 10 to 60 percent of the atmosphere, depending on the precise angle, speed, and planet sizes.


"If we can understand the history of Earth's atmosphere well enough, then it might help us narrow down how erosive an impact the moon-forming collision should have been," he said. "Or at least to perhaps rule out scenarios that remove far too much or far too little atmosphere to fit the observations."

Research such as this could therefore help answer some fundamental questions about the reason the observable universe is the way it is. "[In this case,] we weren't sure whether it was really easy or really hard for a giant impact to remove all of an atmosphere, or whether it was possible to get middling erosion as opposed to all or nothing," Kegerreis said. "We also looked at the possibility of the impactor delivering atmosphere if it had some of its own to begin with."

While this project may be concluded, Kegerreis is excited about the future possibilities. He's also enthused at the development of the simulation code the team wrote to carry out their work, in association with a group of astronomers and computer scientists. Called SWIFT, it's an open-source hydrodynamics and gravity computer program that could be used by researchers anywhere in the world (so long as they have remote access to a supercomputer) to run simulations of astrophysical objects, including planets, galaxies, or even, conceivably, the whole universe.

"These same kinds of simulation, in terms of the physics that's going on underneath, can be used for loads of different things," Kegerreis said. "Here in Durham, the main thing that people actually use similar simulations to do is galaxy formation and much wider cosmology things, where you're evolving dark matter, stars, and galaxies, rather than smaller things like planets. We can use the same simulation code to do those different things just by putting in different variations of the specific equations that we're solving. But it's the same basic structure."

The lack of real-time graphics (think endless code running on a screen, rather than Civilization on a galactic scale) means this won't have the makings of a hit video game any time soon. However, it might just wind up helping reveal some of the secrets of the universe, from the Big Bang to the present day. As trade-offs go, that's not a bad one.

A paper describing the latest project, titled "Atmospheric Erosion by Giant Impacts onto Terrestrial Planets: A Scaling Law for any Speed, Angle, Mass, and Density," was recently published in the journal Astrophysical Journal Letters.


Supermicro Details Its Hardware for MN-3, the Most Efficient Supercomputer in the World – HPCwire

In June, HPCwire highlighted the new MN-3 supercomputer: a 1.6 Linpack petaflops system delivering 21.1 gigaflops per watt of power, making it the most energy-efficient supercomputer in the world, at least according to the latest Green500 list, the Top500's energy-conscious cousin. The system was built by Preferred Networks, a Japanese AI startup that used its in-house MN-Core accelerator to help deliver the MN-3's record-breaking efficiency. Collaborating with Preferred Networks was modular system manufacturer Supermicro, which detailed the hardware and processes behind the chart-topping green giant in a recent report.

As Supermicro tells it, Preferred Networks was facing challenges on two fronts: first, the need for a much more powerful system to solve its clients' deep learning problems; and second, the exorbitant operating costs of the system they were envisioning. "With increasing power costs, a large system of the size PFN was going to need, the operating costs of both the power and associated cooling would exceed the budget that was allocated," Supermicro wrote. "Therefore, the energy efficiency of the new solution would have to be designed into the system, and not become an afterthought."

Preferred Networks turned to partnerships to help resolve these problems. First, they worked with researchers at Kobe University to develop the MN-Core accelerator, specializing it for deep learning training processes and optimizing it for energy efficiency. After successfully benchmarking the MN-Core above one teraflop per watt in testing, the developers turned to the rest of the system, and that's where Supermicro entered the picture.

On a visit to Japan, Clay Chen, general manager of global business development at Supermicro, sat down with Preferred Networks to hear what they needed.

"At first I was asking them, you know, what type of GPU they are using," Chen said in an interview with HPCwire. "They say, oh, no, we're not using any type, we're going to develop our own GPU. And that was quite fascinating to me."

Preferred Networks selected Supermicro for the daunting task: fitting four MN-Core boards, two Intel Xeon Platinum CPUs, up to 6TB of DDR4 memory and Intel Optane persistent memory modules in a single box without sacrificing the energy efficiency of the system.

Supermicro based its design on one of its preexisting GPU server models that was designed to house multiple GPUs (or other accelerators) and high-speed interconnects. Working with Preferred Networks engineers, Supermicro ran simulations to determine the optimal chassis design and component arrangement to ensure that the MN-Core accelerators would be sufficiently cooled and efficiency could be retained.

Somewhat surprisingly, the custom server is entirely fan-cooled. "Our concept is: if we can design something with fan cooling, why would we want to use liquid cooling?" Chen said. "Because essentially, all the heat being pulled out from the liquid is going to go somewhere. When you take the heat outside the box, you still need to cool the liquid with a fan."

The end result, a customized Supermicro server just for Preferred Networks, is pictured below.

The server's four MN-Core boards are connected to PCIe x16 slots on a Supermicro motherboard and to the MN-Core Direct Connect board that enables high-speed communication between the MN-Core boards.

These custom servers, each 7U high, were then rack-mounted into what would become the MN-3 supercomputer: 48 servers, four interconnect nodes and five 100GbE switches. In total, the system's 2,080 CPU cores, delivering 1,621 Linpack teraflops of performance, required just 77 kW of power for the Top500 benchmarking run. That efficiency is only about 15 percent short of what would be needed to deliver an exaflops within the 40-megawatt limit targeted by planned exascale systems like Aurora, Frontier and El Capitan.
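Those figures can be sanity-checked with quick back-of-the-envelope arithmetic:

```python
# Checking the MN-3 efficiency figures reported above.
linpack_gflops = 1_621_000   # 1,621 Linpack teraflops, in gigaflops
power_watts = 77_000         # 77 kW for the benchmarking run

mn3_efficiency = linpack_gflops / power_watts   # ~21.1 gigaflops per watt

# An exaflops (10^9 gigaflops) delivered within a 40 MW envelope requires:
exascale_target = 1e9 / 40e6                    # 25 gigaflops per watt

# MN-3 falls short of that target by roughly 15 percent:
shortfall = 1 - mn3_efficiency / exascale_target
```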

"We are very pleased to have partnered with Supermicro, who worked with us very closely to build MN-3, which was recognized as the world's most energy-efficient supercomputer," said Yusuke Doi, VP of computing infrastructure at Preferred Networks. "We can deliver outstanding performance while using a fraction of the power that was previously required for such a large supercomputer."


I confess, I’m scared of the next generation of supercomputers – TechRadar

Earlier this year, a Japanese supercomputer built on Arm-based Fujitsu A64FX processors snatched the crown of world's fastest machine, blowing incumbent leader IBM Summit out of the water.

Fugaku, as the machine is known, achieved 415.5 petaFLOPS on the popular High Performance Linpack (HPL) benchmark, almost three times the score of the IBM machine (148.5 petaFLOPS).

It also topped the rankings for Graph 500, HPL-AI and HPCG workloads - a feat never before achieved in the world of high performance computing (HPC).

Modern supercomputers are now edging ever-closer to the landmark figure of one exaFLOPS (equal to 1,000 petaFLOPS), commonly described as the exascale barrier. In fact, Fugaku itself can already achieve one exaFLOPS, but only in lower precision modes.

The consensus among the experts we spoke to is that a single machine will breach the exascale barrier within the next 6 - 24 months, unlocking a wealth of possibilities in the fields of medical research, climate forecasting, cybersecurity and more.

But what is an exaFLOPS? And what will it mean to break the exascale milestone, pursued doggedly for more than a decade?

To understand what it means to achieve exascale computing, it's important to first understand what is meant by FLOPS, which stands for floating point operations per second.

A floating point operation is any mathematical calculation (i.e. addition, subtraction, multiplication or division) that involves a number containing a decimal (e.g. 3.0 - a floating point number), as opposed to a number without a decimal (e.g. 3 - an integer). Calculations involving decimals are typically more complex and therefore take longer to solve.

An exascale computer can perform 10^18 (one quintillion, or 1,000,000,000,000,000,000) of these mathematical calculations every second.

For context, to equal the number of calculations an exascale computer can process in a single second, an individual would have to perform one sum every second for 31,688,765,000 years.

The PC I'm using right now, meanwhile, is able to reach 147 billion FLOPS (or 0.00000014723 exaFLOPS), outperforming the fastest supercomputer of 1993, the Intel Paragon (143.4 billion FLOPS).
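The conversions in the last few paragraphs are easy to verify:

```python
# Checking the exascale arithmetic above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31,557,600

exa = 10**18                            # one exaFLOPS = 10^18 FLOPS

# One sum per second for 10^18 seconds, expressed in years
# (the article's "31,688,765,000 years", i.e. ~31.7 billion):
years = exa / SECONDS_PER_YEAR

# 147 billion FLOPS as a fraction of an exaFLOPS:
pc_exaflops = 147e9 / exa               # = 1.47e-7
```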

This both underscores how far computing has come in the last three decades and puts into perspective the extreme performance levels attained by the leading supercomputers today.

The key to building a machine capable of reaching one exaFLOPS is optimization at the processing, storage and software layers.

The hardware must be small and powerful enough to pack together and reach the necessary speeds, the storage capacious and fast enough to serve up the data and the software scalable and programmable enough to make full use of the hardware.

For example, there comes a point at which adding more processors to a supercomputer will no longer affect its speed, because the application is not sufficiently optimized. The only way governments and private businesses will realize a full return on HPC hardware investment is through an equivalent investment in software.
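That diminishing-returns point is captured by Amdahl's law (a standard scaling result, not something stated in the article): if only a fraction p of an application can run in parallel, no processor count can push the speedup past 1 / (1 - p). The figures below are illustrative.

```python
def amdahl_speedup(p, n):
    """Speedup of an application with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the application parallelized, a million processors
# deliver barely more speedup than a thousand: both sit just under the
# 1 / (1 - 0.95) = 20x ceiling.
s_1k = amdahl_speedup(0.95, 1_000)
s_1m = amdahl_speedup(0.95, 1_000_000)
```

This is why the article stresses that hardware investment must be matched by investment in software: raising the parallel fraction p moves the ceiling; adding processors alone does not.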

Organizations such as the Exascale Computing Project (ECP) and the ExCALIBUR programme are interested in solving precisely this problem. Those involved claim a renewed focus on algorithm and application development is required in order to harness the full power and scope of exascale.

Achieving this delicate balance between software and hardware in an energy-efficient manner, while avoiding an impractically low mean time between failures (MTBF) score (the time that elapses before a system breaks down under strain), is the challenge facing the HPC industry.

"15 years ago as we started the discussion on exascale, we hypothesized that it would need to be done in 20 megawatts (MW); later that was changed to 40 MW. With Fugaku, we see that we are about halfway to a 64-bit exaFLOPS at the 40 MW power envelope, which shows that an exaFLOPS is in reach today," explained Brent Gorda, Senior Director HPC at UK-based chip designer Arm.

"We could hit an exaFLOPS now with sufficient funding to build and run a system. [But] the size of the system is likely to be such that MTBF is measured in a single-digit number of days, based on today's technologies and the number of components necessary to reach these levels of performance."
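
The MTBF worry compounds with scale: assuming independent failures, a system's MTBF is roughly the component MTBF divided by the component count. A sketch with purely illustrative numbers (not figures from the article):

```python
def system_mtbf_days(component_mtbf_years: float, n_components: int) -> float:
    """System-level MTBF, assuming independent, identically reliable components."""
    return component_mtbf_years * 365.0 / n_components

# Illustrative only: 50-year parts, 5,000 of them -> a failure every few days.
print(round(system_mtbf_days(50, 5_000), 2), "days between failures")
```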

When it comes to building a machine capable of breaching the exascale barrier, there are a number of other factors at play, beyond technological feasibility. An exascale computer can only come into being once an equilibrium has been reached at the intersection of technology, economics and politics.

"One could in theory build an exascale system today by packing in enough CPUs, GPUs and DPUs. But what about economic viability?" said Gilad Shainer of NVIDIA Mellanox, the firm behind the InfiniBand technology (the fabric that links the various hardware components) found in seven of the ten fastest supercomputers.

"Improvements in computing technologies, silicon processing, more efficient use of power and so on all help to increase efficiency and make exascale computing an economic objective as opposed to a sort of sporting achievement."

According to Paul Calleja, who heads up computing research at the University of Cambridge and is working with Dell on the Open Exascale Lab, Fugaku is an excellent example of what is theoretically possible today, but is also impractical by virtually any other metric.

"If you look back at Japanese supercomputers, historically there's only ever been one of them made. They have beautifully exquisite architectures, but they're so stupidly expensive and proprietary that no one else could afford one," he told TechRadar Pro.

"[Japanese organizations] like these really large technology demonstrators, which are very useful in industry because it shows the direction of travel and pushes advancements, but those kinds of advancements are very expensive and not sustainable, scalable or replicable."

So, in this sense, there are two separate exascale landmarks: the theoretical barrier, which will likely be met first by a machine of Fugaku's ilk (a technology demonstrator), and the practical barrier, after which exascale computing will be deployed en masse.

Geopolitical factors will also play a role in how quickly the exascale barrier is breached. Researchers and engineers might focus exclusively on the technological feat, but the institutions and governments funding HPC research are likely motivated by different considerations.

"Exascale computing is not just about reaching theoretical targets, it is about creating the ability to tackle problems that have been previously intractable," said Andy Grant, Vice President of HPC & Big Data at IT services firm Atos, a company influential in the fields of HPC and quantum computing.

"Those that are developing exascale technologies are not doing it merely to have the fastest supercomputer in the world, but to maintain international competitiveness, security and defence."

"In Japan, their new machine is roughly 2.8x more powerful than the now-second-place system. In broad terms, that will enable Japanese researchers to address problems that are 2.8x more complex. In the context of international competitiveness, that creates a significant advantage."

In years gone by, rival nations fought it out in the trenches or competed to see who could place the first human on the moon. But computing may well become the frontier at which the next arms race takes place; supremacy in the field of HPC might prove just as politically important as military strength.

Once exascale computers become an established resource - available for businesses, scientists and academics to draw upon - a wealth of possibilities will be unlocked across a wide variety of sectors.

HPC could prove revelatory in the fields of clinical medicine and genomics, for example, which require vast amounts of compute power to conduct molecular modelling, simulate interactions between compounds and sequence genomes.

In fact, IBM Summit and a host of other modern supercomputers are being used to identify chemical compounds that could contribute to the fight against coronavirus. The Covid-19 High Performance Computing Consortium assembled 16 supercomputers, accounting for an aggregate of 330 petaFLOPS - but imagine how much more quickly research could be conducted using a fleet of machines capable of reaching 1,000 petaFLOPS on their own.

Artificial intelligence, meanwhile, is another cross-disciplinary domain that will be transformed with the arrival of exascale computing. The ability to analyze ever-larger datasets will improve the ability of AI models to make accurate forecasts (contingent on the quality of data fed into the system) that could be applied to virtually any industry, from cybersecurity to e-commerce, manufacturing, logistics, banking, education and many more.

As explained by Rashid Mansoor, CTO at UK supercomputing startup Hadean, the value of supercomputing lies in the ability to make an accurate projection (of any variety).

"The primary purpose of a supercomputer is to compute some real-world phenomenon to provide a prediction. The prediction could be the way proteins interact, the way a disease spreads through the population, how air moves over an aerofoil or how electromagnetic fields interact with a spacecraft during re-entry," he told TechRadar Pro.

"Raw performance, such as the HPL benchmark, simply indicates that we can model bigger and more complex systems to a greater degree of accuracy. One thing that the history of computing has shown us is that the demand for computing power is insatiable."

Other commonly cited areas that will benefit significantly from the arrival of exascale include brain mapping, weather and climate forecasting, product design and astronomy, but it's also likely that brand new use cases will emerge as well.

"The desired workloads and the technology to perform them form a virtuous circle. The faster and more performant the computers, the more complex problems we can solve and the faster the discovery of new problems," explained Shainer.

"What we can be sure of is that we will see continuous, ever-growing demands for more performance capabilities in order to solve the unsolvable. Once this is solved, we will find the new unsolvable."

By all accounts, the exascale barrier will likely fall within the next two years, but the HPC industry will then turn its attention to the next objective, because the work is never done.

Some might point to quantum computers, which approach problem solving in an entirely different way to classical machines (exploiting symmetries to speed up processing), allowing for far greater scale. However, there are also problems to which quantum computing cannot be applied.

"Mid-term (10-year) prospects for quantum computing are starting to shape up, as are other technologies. These will be more specialized, where a quantum computer will very likely show up as an application accelerator for problems that relate to logistics first. They won't completely replace the need for current architectures for IT/data processing," explained Gorda.

As Mansoor puts it, "on certain problems even a small quantum computer can be exponentially faster than all of the classical computing power on earth combined. Yet on other problems, a quantum computer could be slower than a pocket calculator."

The next logical landmark for traditional computing, then, would be one zettaFLOPS, equal to 1,000 exaFLOPS or 1,000,000 petaFLOPS.
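
These prefixes are plain powers of ten, which a few lines make concrete (the 147-gigaFLOPS desktop figure comes from earlier in the article):

```python
PETA, EXA, ZETTA = 1e15, 1e18, 1e21
assert ZETTA == 1_000 * EXA == 1_000_000 * PETA  # one zettaFLOPS, two ways

def to_exaflops(flops: float) -> float:
    """Convert a raw FLOPS figure to exaFLOPS."""
    return flops / EXA

print(to_exaflops(147e9))   # a ~147-gigaFLOPS desktop PC
print(to_exaflops(200e15))  # Summit at 200 petaFLOPS
print(to_exaflops(ZETTA))   # one zettaFLOPS = 1,000 exaFLOPS
```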

Chinese researchers predicted in 2018 that the first zettascale system would come online in 2035, paving the way for new computing paradigms. The paper itself reads like science fiction, at least to the layman:

"To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators are likely to co-exist to boost HPC in a joint effort. Enabled by new interconnect materials such as photonic crystal, fully optical interconnecting systems may come into use."

Assuming one exaFLOPS is reached by 2022, 14 years will have elapsed between the creation of the first petascale and first exascale systems. The first terascale machine, meanwhile, was constructed in 1996, 12 years before the petascale barrier was breached.
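
That pattern can be extrapolated naively (the 2022 exascale date is the article's assumption, and averaging the two historical gaps is pure back-of-envelope reasoning):

```python
terascale, petascale, exascale = 1996, 2008, 2022    # exascale year assumed

gaps = [petascale - terascale, exascale - petascale]  # 12 and 14 years
next_gap = sum(gaps) / len(gaps)                      # naive average: 13 years
predicted_zettascale = exascale + next_gap
print(predicted_zettascale)  # 2035.0, matching the Chinese researchers' estimate
```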

If this pattern were to continue, the Chinese researchers' estimate would look relatively sensible, but there are firm question marks over the validity of zettascale projections.

While experts are confident in their predicted exascale timelines, none would venture a guess at when zettascale might arrive without prefacing their estimate with a long list of caveats.

"Is that an interesting subject? Because to be honest with you, it's so not obtainable. To imagine how we could go 1,000x beyond [one exaFLOPS] is not a conversation anyone could have, unless they're just making it up," said Calleja, when asked about the concept of zettascale.

Others were more willing to theorize, but equally reticent to guess at a specific timeline. According to Grant, the way zettascale machines process information will be unlike any supercomputer in existence today.

"[Zettascale systems] will be data-centric, meaning components will move to the data rather than the other way around, as data volumes are likely to be so large that moving data will be too expensive. Regardless, predicting what they might look like is all guesswork for now," he said.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by the SETI@home project).

As noted by Saurabh Vij, CEO of distributed supercomputing firm Q Blocks, decentralized systems address a number of problems facing the HPC industry today, namely surrounding building and maintenance costs. They are also accessible to a much wider range of users and therefore democratize access to supercomputing resources in a way that is not otherwise possible.

"There are benefits to a centralized architecture, but the cost and maintenance barrier overshadows them. [Centralized systems] also alienate a large base of customer groups that could benefit," he said.

"We think a better way is to connect distributed nodes together in a reliable and secure manner. It wouldn't be too aggressive to say that, 5 years from now, your smartphone could be part of a giant distributed supercomputer, making money for you while you sleep by solving computational problems for industry," he added.

However, incentivizing network nodes to remain active for a long period is challenging and a high rate of turnover can lead to reliability issues. Network latency and capacity problems would also need to be addressed before distributed supercomputing can rise to prominence.

Ultimately, the difficulty in making firm predictions about zettascale lies in the massive chasm that separates present-day workloads and HPC architectures from those that might exist in the future. From a contemporary perspective, it's fruitless to imagine what might be made possible by a computer so powerful.

We might imagine zettascale machines will be used to process workloads similar to those tackled by modern supercomputers, only more quickly. But it's possible - even likely - that the arrival of zettascale computing will open doors that do not and cannot exist today, so extraordinary is the leap.

In a future in which computers are 2,000+ times as fast as the most powerful machine today, the philosophical and ethical debates surrounding the intelligence of man versus machine are bound to be played out in greater detail - and with greater consequence.

It is impossible to directly compare the workings of a human brain with those of a computer - i.e. to assign a FLOPS value to the human mind. However, it is not unreasonable to ask how many FLOPS must be achieved before a machine reaches a level of performance loosely comparable to the brain.

Back in 2013, scientists used the K supercomputer to conduct a neuronal network simulation using open source simulation software NEST. The team simulated a network made up of 1.73 billion nerve cells connected by 10.4 trillion synapses.

While ginormous, the simulation represented only 1% of the human brain's neuronal network and took 40 minutes to replicate one second's worth of neuronal network activity.

However, the K computer reached a maximum computational power of only 10 petaFLOPS. A basic extrapolation (ignoring inevitable complexities), then, would suggest Fugaku could simulate circa 40% of the human brain, while a zettascale computer would be capable of performing a full simulation many times over.
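
That extrapolation is easy to reproduce (linear scaling only, ignoring the inevitable complexities noted above; Fugaku's ~442 petaFLOPS figure is an assumption not given in this article):

```python
K_FLOPS = 10e15          # K computer: ~10 petaFLOPS
K_BRAIN_FRACTION = 0.01  # ...simulated ~1% of the brain's neuronal network

def brain_fraction(flops: float) -> float:
    """Naive linear extrapolation from the K computer result."""
    return K_BRAIN_FRACTION * flops / K_FLOPS

print(brain_fraction(442e15))  # Fugaku (assumed): ~0.44, i.e. circa 40%
print(brain_fraction(1e21))    # one zettaFLOPS: ~1,000 full-brain equivalents
```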

Digital neuromorphic hardware (supercomputers created specifically to simulate the human brain) like SpiNNaker 1 and 2 will also continue to develop in the post-exascale future. Instead of sending information from point A to B, these machines will be designed to replicate the parallel communication architecture of the brain, sending information simultaneously to many different locations.

Modern iterations are already used to help neuroscientists better understand the mysteries of the brain and future versions, aided by advances in artificial intelligence, will inevitably be used to construct a faithful and fully-functional replica.

The ethical debates that will arise with the arrival of such a machine - surrounding the perception of consciousness, the definition of thought and what an artificial uber-brain could or should be used for - are manifold and could take generations to unpick.

The inability to foresee what a zettascale computer might be capable of is also an inability to plan for the moral quandaries that might come hand-in-hand.

Whether a future supercomputer might be powerful enough to simulate human-like thought is not in question, but whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.

I confess, I'm scared of the next generation of supercomputers - TechRadar

Bradykinin Hypothesis of COVID-19 Offers Hope for Already-Approved Drugs – BioSpace

A group of researchers at Oak Ridge National Lab in Tennessee used the Summit supercomputer, the second-fastest in the world, to analyze data on more than 40,000 genes from 17,000 genetic samples related to COVID-19. The analysis took more than a week and analyzed 2.5 billion genetic combinations. And it came up with a new theory, dubbed the bradykinin hypothesis, on how COVID-19 affects the body.

Daniel Jacobson, a computational systems biologist at Oak Ridge, noted that the expression of genes for significant enzymes in the renin-angiotensin system (RAS), which is involved in blood pressure regulation and fluid balance, was abnormal. He then tracked the abnormal RAS in the lung fluid samples to the kinin cascade, which is an inflammatory pathway closely regulated by the RAS.

In the kinin system, bradykinin, which is a key peptide, causes blood vessels to leak, allowing fluid to accumulate in organs and tissue. And in COVID-19 patients, this system was unbalanced. People with the disease had increased gene expression for the bradykinin receptors and for enzymes known as kallikreins that activate the kinin pathway.

Jacobson and his team published the research in the journal eLife. They believe it explains many aspects of COVID-19 that were previously not understood, including why there is an abnormal accumulation of fluid in patients' lungs.

According to the research, SARS-CoV-2 infection typically starts when the virus enters the body via the ACE2 receptors in the nose, where they are common. The virus then moves through the body, entering cells that also have ACE2, including in the intestines, kidneys and heart. This is consistent with some of COVID-19's cardiac and gastrointestinal symptoms.

But the virus does not appear to stop there. Instead, it takes over the body's systems, upregulating ACE2 receptors in cells and tissues where they're not common, including the lungs. Or, as Thomas Smith writes in Medium, COVID-19 "is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don't just take your stuff - they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently."

The final result of all this is what is being called a "bradykinin storm." When the virus affects the RAS, the way the body regulates bradykinin runs amok: bradykinin receptors are resensitized, and the body stops breaking down bradykinin, which is typically degraded by ACE. The researchers believe it is this bradykinin storm that is responsible for many of COVID-19's deadliest symptoms.

The researchers wrote that the pathology of COVID-19 is likely the result of "bradykinin storms" rather than the cytokine storms that have been observed in COVID-19 patients, but that the two may be intricately linked.

Another researcher, Frank van de Veerdonk, an infectious disease researcher at the Radboud University Medical Center in the Netherlands, had made similar observations in mid-March. In April, he and his research team theorized that a dysregulated bradykinin system was causing leaky blood vessels in the lungs, a potential cause of the excess fluid accumulation.

Josef Penninger, director of the Life Sciences Institute at the University of British Columbia in Vancouver, who identified ACE2 as the essential in vivo receptor for SARS, told The Scientist that he believes bradykinin plays a role in COVID-19: "It does make a lot of sense." Jacobson's study supports the hypothesis, he added, but additional research is needed for confirmation: "Gene expression signatures don't tell us the whole story. I think it is very important to actually measure the proteins."

Another aspect of Jacobson's study is that, via another pathway, COVID-19 increases production of hyaluronic acid (HLA) in the lungs. HLA is common in soaps and lotions because it can absorb more than 1,000 times its weight in fluid. The combination of fluid leaking into the lungs and increased HLA creates a hydrogel in the lungs of some COVID-19 patients, which Jacobson describes as "like trying to breathe through Jell-O."

This provides a possible explanation for why ventilators have been less effective in severe COVID-19 than physicians originally expected. "It reaches a point," Jacobson says, "where regardless of how much oxygen you pump in, it doesn't matter, because the alveoli in the lungs are filled with this hydrogel. The lungs become like a water balloon."

The bradykinin hypothesis also explains why about 20% of COVID-19 patients have heart damage, because RAS controls aspects of cardiac contractions and blood pressure. It also accounts for COVID-19's neurological effects, such as dizziness, seizures, delirium and stroke, which are seen in as many as 50% of hospitalized patients. Research in France identified leaky blood vessels in the brains of COVID-19 patients, and at high doses, bradykinin can break down the blood-brain barrier.

On the positive side, their research suggests that drugs targeting components of RAS, already FDA-approved for other diseases, might be effective in treating COVID-19. Some, such as danazol (used to treat endometriosis, fibrocystic breast disease and hereditary angioedema), stanozolol (an anabolic steroid derived from testosterone) and ecallantide (marketed as Kalbitor for hereditary angioedema (HAE) and the prevention of blood loss in cardiothoracic surgery), decrease bradykinin production. Icatibant, also used to treat HAE and marketed as Firazyr, decreases bradykinin signaling and could minimize its effects once it is in the body. Vitamin D may also prove useful, because it is involved in the RAS system and may reduce levels of REN, another compound involved in the system.

The researchers note that the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.


Stranger than fiction? Why we need supercomputers – TechHQ

In 2001: A Space Odyssey, the main villain is a supercomputer named HAL 9000 that was responsible for the deaths of Discovery One's crew.

Need some help remembering Douglas Rain's chilling voice as the sentient computer?

Even though HAL 9000 met a slow, painful death by disconnection, it remains one of the most iconic supercomputers on screen and in fiction. The villainous system's display of humanity in its final moments, singing the lullaby "Daisy Bell," urges viewers to recognize the strong sense of self the machine possesses. Real-world supercomputers, however, are far less sentimental, though hardly less impressive in their data processing and problem-solving ability.

What truly separates supercomputers from your not-so-super computers is the way they process their workloads. Supercomputers fundamentally adopt a technique called parallel processing, using multiple compute resources to solve a computational problem. In contrast, our regular computers rely on serial computing, solving computational problems one at a time, in sequence.
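
The distinction can be sketched in a few lines (a toy example, not production HPC code): the same sum-of-squares job, computed serially by one worker and then split across several worker processes whose partial results are combined.

```python
from multiprocessing import Pool

def sum_squares(chunk):
    # Serial computing: one worker grinds through its slice in sequence.
    return sum(n * n for n in chunk)

def parallel_sum_squares(numbers, workers=4):
    # Parallel processing: split the problem, solve the pieces at once, combine.
    chunks = [numbers[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert parallel_sum_squares(data) == sum_squares(data)
    print("parallel and serial results agree")
```

Real supercomputers apply the same divide-and-combine idea across thousands of nodes rather than a handful of local processes.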

For a sense of just how powerful these systems are, supercomputers are frequently used for simulating reality, including astronomical events like two galaxies colliding or predicting how a nuclear attack would play out.


Now, scaling it down from the fate of the universe, supercomputers are also used for enterprise-wide applications.

Over the years, the power of supercomputers in simulating reality has given humankind a better ability to make predictions and improve product designs. In manufacturing, this ability lets users test out countless product designs to discern which prototypes are best suited to the real world. In this sense, supercomputing significantly slashes the physical testing resources required and helps organizations get products to market more quickly, allowing them to seize opportunities to lead in their respective markets and gain extra profit.

Jack Dongarra, a leading supercomputer expert, noted that the industrial use of supercomputers is widespread: "Industry gets it. They are investing in high-performance computers to be more competitive and to gain an edge on their competition. And they feel that money is well spent. They are investing in these things to help drive their products and innovation, their bottom line, their productivity, and their profitability," Dongarra said.

Supercomputers are also helping scientists and researchers develop new life-saving medicines. Presently, supercomputers all over the world are united around a singular goal: the research and development of a COVID-19 vaccine.

Equipped with the capabilities of supercomputers, researchers gain unique opportunities to explore the structure and behavior of the infamous virus at the molecular level. Since a supercomputer can simulate a myriad of interactions between the virus and human body cells, researchers are able to forecast the spread of the disease and seek promising treatments or vaccine materials.

Japan's Fugaku supercomputer, located at the RIKEN Center for Computational Science in Kobe, was recently crowned the world's fastest. Around 3,000 researchers use it to search for and model new drugs, study weather and natural disaster scenarios, and even probe the fundamental laws of physics and nature. Recently, researchers have been experimenting with using Fugaku for COVID-19 research into diagnostics, therapeutics, and simulations that replicate the spread patterns of the virus.

"Fugaku was developed based on the idea of achieving high performance on a variety of applications of great public interest [...] and we are very happy that it has shown itself to be outstanding on all the major supercomputer benchmarks," Satoshi Matsuoka, director of the RIKEN Center, said. "I hope that the leading-edge IT developed for it will contribute to major advances on difficult social challenges such as COVID-19."

In IBM's company blog, the Director of IBM Research, Dario Gil, writes: "The supercomputers will run myriad calculations in epidemiology, bioinformatics, and molecular modeling, in a bid to drastically cut the time of discovery of new molecules that could lead to a vaccine."

A supercomputer's parallel computing makes it uniquely suited to screening a deluge of data and, at its core, solving complex problems that require a lot of number-crunching. Erik Lindahl, a professor of biophysics, shared that, to date, supercomputers enable scientists to see how liquids diffuse around proteins, something no other experimental method is capable of.

"We could not do what we do without computers. The computers enable us to see things that we could never see in experiments otherwise."

While HAL's infamous line "I'm sorry, Dave. I'm afraid I can't do that" left viewers to debate whether HAL was truly evil or just obeying orders, perhaps it's time we bring this conversation back to life and focus on the extraordinary capabilities of real supercomputers.


Google Says It Just Ran The First-Ever Quantum Simulation of a Chemical Reaction – ScienceAlert

Of the many high expectations we have of quantum technology, one of the most exciting has to be the ability to simulate chemistry on an unprecedented level. Now we have our first glimpse of what that might look like.

Together with a team of collaborators, the Google AI Quantum team has used their 54 qubit quantum processor, Sycamore, to simulate changes in the configuration of a molecule called diazene.

As far as chemical reactions go, it's one of the simplest ones we know of. Diazene is little more than a couple of nitrogens linked in a double bond, each towing a hydrogen atom.

However, the quantum computer accurately described changes in the positions of hydrogen to form different diazene isomers. The team also used their system to arrive at an accurate description of the binding energy of hydrogen in increasingly bigger chains.

As straightforward as these two models may sound, there's a lot going on under the hood. Forget the formulaic chemical reactions from your school textbooks - at the level of quantum mechanics, chemistry is a complicated mix of possibilities.

In some ways, it's the difference between knowing a casino will always make a profit and predicting the outcomes of the individual games being played inside. For classical computers, restricted to predictable rules, representing the infinite combinations of dice rolls and royal flushes of quantum physics has simply been too hard.

Quantum computers, on the other hand, are constructed around these very same principles of quantum probability that govern chemistry on a fundamental level.

Logical units called qubits exist in a fuzzy state of 'either/or'. When combined with the 'maybe' states of other qubits in a system, they provide computer engineers with a unique way to carry out computations.

Algorithms specially formulated to take advantage of these quantum mechanics allow for shortcuts, reducing to minutes computations that would take a classical supercomputer thousands of years of grinding.

If we're to have a hope of modelling chemistry on a quantum level, we're going to need that kind of power, and then some.

Just calculating the sum of actions that determine the energy in a molecule of propane would hypothetically take a supercomputer more than a week. But there's a world of difference between a snapshot of a molecule's energy and calculating all the ways it might change.

The diazene simulation used 12 of the 54 qubits in the Sycamore processor to perform its calculations. This in itself was still twice the size of any previous attempts at chemistry simulations.

The team also pushed the limits of an algorithm that marries classical with quantum processes, designed to iron out the errors that arise all too easily in the delicate world of quantum computing.

It all adds up to possibilities of increasingly bigger simulations in the future, helping us design more robust materials, sift out more effective pharmaceuticals, and even unlock more secrets of our Universe's quantum casino.

Diazene's wandering hydrogens are just the start of the kinds of chemistry we might soon be able to model in a quantum landscape.

This research was published in Science.


This Equation Calculates the Chances We Live in a Computer Simulation – Discover Magazine

The Drake equation is one of the more famous reckonings in science. It calculates the likelihood that we are not alone in the universe by estimating the number of other intelligent civilizations in our galaxy that might exist now.

Some of the terms in this equation are well known or becoming better understood, such as the number of stars in our galaxy and the proportion that have planets in the habitable zone. But others are unknown, such as the proportion of planets that develop intelligent life; and some may never be known, such as the proportion that destroy themselves before they can be discovered.

Nevertheless, the Drake equation allows scientists to place important bounds on the numbers of intelligent civilizations that might be out there.

However, there is another sense in which humanity could be linked with an alien intelligence our world may just be a simulation inside a massively powerful supercomputer run by such a species. Indeed, various scientists, philosophers and visionaries have said that the probability of such a scenario could be close to one. In other words, we probably are living in a simulation.

The accuracy of these claims is somewhat controversial. So a better way to determine the probability that we live in a simulation would be much appreciated.

Enter Alexandre Bibeau-Delisle and Gilles Brassard at the University of Montreal in Canada. These researchers have derived a Drake-like equation that calculates the chances that we live in a computer simulation. And the results throw up some counterintuitive ideas that are likely to change the way we think about simulations, how we might determine whether we are in one and whether we could ever escape.

Bibeau-Delisle and Brassard begin with a fundamental estimate of the computing power available to create a simulation. They say, for example, that a kilogram of matter, fully exploited for computation, could perform 10^50 operations per second.

By comparison, the human brain, which is also kilogram-sized, performs up to 10^16 operations per second. "It may thus be possible for a single computer the mass of a human brain to simulate the real-time evolution of 1.4 × 10^25 virtual brains," they say.

In our society, a significant number of computers already simulate entire civilizations, in games such as Civilization VI, Hearts of Iron IV, Humankind and so on. So it may be reasonable to assume that in a sufficiently advanced civilization, individuals will be able to run games that simulate societies like ours, populated with sentient conscious beings.

So an interesting question is this: of all the sentient beings in existence, what fraction are likely to be simulations? To derive the answer, Bibeau-Delisle and Brassard start with the total number of real sentient beings, NRe; multiply that by the fraction with access to the necessary computing power, fCiv; multiply this by the fraction of that power that is devoted to simulating consciousness, fDed (because these beings are likely to be using their computers for other purposes, too); and then multiply this by the number of brains they could simulate, RCal.

The resulting equation, where fSim is the fraction of simulated brains, is:

fSim = (fCiv × fDed × RCal) / (fCiv × fDed × RCal + 1)

Here RCal is the huge number of brains that fully exploited matter should be able to simulate.

The sheer size of this number, ~10^25, pushes Bibeau-Delisle and Brassard toward an inescapable conclusion. "It is mathematically inescapable from [the above] equation and the colossal scale of RCal that fSim ≈ 1 unless fCiv · fDed ≈ 0," they say.
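The saturation the authors describe is easy to see numerically: because RCal is so large, fSim stays pinned near 1 even for tiny fractions fCiv and fDed. A minimal sketch (the specific fractions below are chosen purely for illustration):

```python
def f_sim(f_civ, f_ded, r_cal):
    """Fraction of sentient beings that are simulated: for every real being,
    f_civ * f_ded * r_cal simulated brains exist, so
    fSim = simulated / (simulated + real)."""
    x = f_civ * f_ded * r_cal
    return x / (1 + x)

R_CAL = 1e25  # brains simulable by fully exploited, brain-mass matter

# Even if only one civilization in a billion simulates minds, and devotes
# only a billionth of its compute to doing so, fSim is within 1e-7 of 1.
print(f_sim(1e-9, 1e-9, R_CAL))
```

Only when the product fCiv · fDed is driven essentially to zero does the fraction collapse, which is exactly the dichotomy the authors draw.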

So there are two possible outcomes. Either we live in a simulation or a vanishingly small proportion of advanced computing power is devoted to simulating brains.

It's not hard to imagine why the second option might be true. "A society of beings similar to us (but with a much greater technological development) could indeed decide it is not very ethical to simulate beings with enough precision to make them conscious while fooling them and keeping them cut off from the real world," say Bibeau-Delisle and Brassard.

Another possibility is that advanced civilizations never get to the stage where their technology is powerful enough to perform these kinds of computations. Perhaps they destroy themselves through war or disease or climate change long before then. There is no way of knowing.

But suppose we are in a simulation. Bibeau-Delisle and Brassard ask whether we might escape while somehow hiding our intentions from our overlords. They assume that the simulating technology will be quantum in nature. "If quantum phenomena are as difficult to compute on classical systems as we believe them to be, a simulation containing our world would most probably run on quantum computing power," they say.

This raises the possibility of detecting our alien overlords, since they cannot measure the quantum nature of our world without revealing their presence. Quantum cryptography rests on the same principle; indeed, Brassard is one of the pioneers of this technology.

That would seem to make it possible for us to make encrypted plans that are hidden from the overlords, such as secretly transferring ourselves into our own simulations.

However, the overlords have a way to foil this. All they need to do is rewire their simulation to make it look as if we are able to hide information, even though they are aware of it all the time. "If the simulators are particularly angry at our attempted escape, they could also send us to a simulated hell, in which case we would at least have the confirmation we were truly living inside a simulation and our paranoia was not unjustified...," conclude Bibeau-Delisle and Brassard, with their tongues firmly in their cheeks.

In that sense, we are the ultimate laboratory guinea pigs: forever trapped and forever fooled by the evil genius of our omnipotent masters.

Time for another game of Civilization VI.

Ref: Probability and Consequences of Living Inside a Computer Simulation. arxiv.org/abs/2008.09275

Read more:

This Equation Calculates the Chances We Live in a Computer Simulation - Discover Magazine

17 of the best computers and supercomputers to grace the planet – Pocket-lint

(Pocket-lint) - Supercomputers are the behemoths of the tech world: machines often put to a specific use, solving incredible problems mere mortals couldn't fathom alone.

From studying the decay of nuclear materials to predicting the path of our planet under global warming and everything in between, these machines do the processing and crunch the numbers, calculating in moments what it would take mere mortals decades or more to decipher.

Earth Simulator was the world's fastest supercomputer between 2002 and 2004. It was created in Japan, as part of the country's "Earth Simulator Project" which was intended to model the effects of global warming on our planet.

The original Earth Simulator supercomputer cost the government 60 billion yen but was a seriously impressive piece of technology for the time, with 5120 processors and 10 terabytes of memory.

It was later replaced by Earth Simulator 2 in 2009 and Earth Simulator 3 in 2015.

The original Earth Simulator supercomputer was surpassed in performance by IBM's Blue Gene/L prototype in 2004. Blue Gene was designed to reach petaFLOP operating speeds while maintaining low power consumption. As a result, the various Blue Gene systems have been ranked as some of the most powerful and most power-efficient supercomputers in the world.

The Blue Gene supercomputers were so named because they were designed to help analyse and understand protein folding and gene development. They were most well-known for power and performance though, reaching 596 TFLOPS peak performance. They were then outclassed by IBM's Cell-based Roadrunner system in 2008.

ENIAC was one of the very first supercomputers. It was originally designed by the US Army to calculate artillery firing tables and even to study the possibility of thermonuclear weapons. It was said to be able to calculate in just 30 seconds what it would take a person 20 hours to do.

This supercomputer cost around $500,000 to build (over $6 million in today's money).

Notably, the Electronic Numerical Integrator and Computer was later used to compute 2,037 digits of Pi and it was the first computer to do so. Even that computation took 70 hours to complete.

In 2018, the Chinese supercomputer known as Sunway TaihuLight was listed as the third-fastest supercomputer in the world. This system sported nearly 41,000 processors, each of which had 256 processing cores, meaning a total of over 10 million cores.

This supercomputer was also known to be able to carry out an eye-watering 93 quadrillion calculations per second. It was designed for all sorts of research, from weather forecasting to industrial design, life sciences and everything in between.

The Difference Engine was designed by Charles Babbage in 1822. This was essentially the first computer, or at least one of the first. It could be used to calculate mathematical functions but unfortunately cost an astronomical amount for the time.

This machine was impressive for what it could do but also for the machines it inspired in the years and decades that followed.

IBM's Roadrunner supercomputer was a $100 million system built at the Los Alamos National Laboratory in New Mexico, USA.

In 2008, it managed to become one of the fastest supercomputers on the planet, reaching a top performance of 1.456 petaFLOPS.

Despite taking up 296 server racks and covering 6,000 square feet, Roadrunner still managed to be the fourth-most energy-efficient supercomputer at the time.

The system was used in order to analyse the decay of US nuclear weapons and examine whether the nuclear materials would be safe in the following years.

Summit is one of the most recent and most powerful supercomputers built by man. It is another incredible system built by IBM, this time installed at Oak Ridge National Laboratory and sponsored by the U.S. Department of Energy.

Between 2018 and June 2020, Summit (also known as OLCF-4) held the record of being the fastest supercomputer in the world, reaching benchmark scores of 148.6 petaFLOPS. Summit was also the first supercomputer to reach exaop speeds (a quintillion operations per second), achieved on certain mixed-precision calculations.

Summit boasts 9,216 22-core CPUs and 27,648 Nvidia Tesla V100 GPUs, which have been put to work in all manner of complex research, from earthquake simulation to extreme weather simulation, as well as predicting the lifetime of neutrinos in physics.

The Sierra is another supercomputer developed by IBM for the US Government. Like Summit, Sierra packs some serious power, with 1,572,480 processing cores and a peak performance of 125 petaFLOPS.

As with IBM Roadrunner, this supercomputer is used to manage the stockpile of US nuclear weapons to assure the safety of those weapons.

Tianhe-2 is another powerful supercomputer built by the Chinese. It's located at the National Supercomputer Center in Guangzhou, China and cost a staggering 2.4 billion Yuan (US$390 million) to build.

It took a team of 1,300 people to create, and their hard work paid off when Tianhe-2 was recognised as the world's fastest supercomputer between 2013 and 2015.

The system sports nearly five million processor cores and 1,375 TiB of memory, making it able to carry out over 33 quadrillion calculations per second.

The CDC 6600 was built in 1964 for $2,370,000. This machine is thought to be the world's first supercomputer, managing three megaFLOPS, three times the speed of the previous record holder.

At the time, this system was so successful that it became a "must-have" for those carrying out high-end research, and as a result over 100 of them were built.

The Cray-1 came almost a decade after the CDC 6600, but quickly became one of the most successful supercomputers of the time. This was thanks to its unique design that not only included an unusual shape but also the first implementation of a vector processor design.

This system sported a 64-bit processor running at 80 MHz with 8 megabytes of RAM, making it capable of a peak performance of 250 megaFLOPS: a significant move forward compared to the CDC 6600, which came a mere decade before.

The Frontera supercomputer is the fastest university supercomputer in the world. In 2019, it achieved 23.5 petaFLOPS, making it able to calculate in a mere second what it would take an average person a billion years to do manually.
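That comparison is straightforward to sanity-check, assuming one manual calculation per second for the "average person" (our assumption):

```python
FLOPS = 23.5e15                  # Frontera's 2019 benchmark: 23.5 petaFLOPS
HUMAN_RATE = 1.0                 # manual calculations per second (assumption)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Years a person would need to match one second of Frontera's output.
years = FLOPS / HUMAN_RATE / SECONDS_PER_YEAR
print(f"{years:.2e} years")      # on the order of a billion years
```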

The system was designed to help teams at the University of Texas to solve massively difficult problems including everything from molecular dynamics to climate simulations and cancer studies too.

Trinity is yet another supercomputer designed to analyse the effectiveness of nuclear weapons.

With 979,072 processing cores and 20.2 petaFLOPS of performance power, it's able to simulate all manner of data to ensure the country's stockpile of weapons is safe.

In 2019, IBM built Pangea III, a system purported to be the world's most powerful commercial supercomputer. It was designed for Total, a global energy company with operations worldwide.

Pangea III was an AI-optimised supercomputer with a high-performance structure but one that was said to be significantly more power-efficient than previous models.

The system was designed to support seismic data acquisition by geoscientists to establish the location of oil and gas resources. Pangea III has a computing power of 25 petaflops (roughly the same as 130,000 laptops) and ranked 11th in the leaderboards of the top supercomputers at the time.

The Connection Machine 5 is interesting for a number of reasons, not simply because it's a marvellous-looking supercomputer but also because it's likely the only system on our list to be featured in a Hollywood blockbuster. That's right, this supercomputer appeared on the set of Jurassic Park, where it masqueraded as the park's central control computer.

The Connection Machine 5 was announced in 1991 and later declared the fastest computer in the world in 1993. It ran 1,024 cores with a peak performance of 131.0 GFLOPS.

It's also said to have been used by the National Security Agency back in its early years.

HPC4 is an Italian supercomputer, operated by the energy company Eni, that's particularly well-known for being energy efficient while still sporting some serious processing power, including 253,600 processor cores and 304,320 GB of memory.

In 2020, the newer HPC5 supercomputer was combined with HPC4 to deliver 70 petaFLOPS of combined computational capacity. That means the system is capable of performing 70 million billion mathematical operations in a single second.

Selene is Nvidia's supercomputer, built on the DGX SuperPOD architecture. It sports 2,240 Nvidia A100 GPUs and 560 CPUs, and holds an impressive record as the second most power-efficient supercomputer around.

Selene is particularly impressive when you discover that it was built in just three weeks. We also like that it has its own robot attendant and is able to communicate with human operators via Slack.

Writing by Adrian Willings.


Supercomputer finds best way to air out classroom to ward off virus : The Asahi Shimbun – Asahi Shimbun

The world's fastest supercomputer has found that opening just one window and one door diagonally opposite each other is the best way to ventilate an air-conditioned classroom to prevent the novel coronavirus from spreading.

A team of researchers from the Riken Center for Computational Science and other institutions crunched the numbers using Japan's supercomputer Fugaku, which ranked No. 1 in the world in June for its calculation speed.

It ran various simulations to determine the best way to ventilate a classroom to prevent the coronavirus from spreading while also keeping the room temperature cool for students to ensure they do not get heatstroke in the hot summer months.

"People can let a certain amount of fresh air in a room while keeping the room temperature cool by opening windows diagonally opposite from each other," said Makoto Tsubokura, a professor of computational science at Kobe University who heads the team. "They can also take other measures, such as opening windows fully during breaks, at the same time to further lower the risk of infections."

For a classroom measuring about 8 meters on each side, with 40 students sitting at their desks, the team simulated various combinations of having the doors, the transom windows facing the corridor and the other windows open, to find the best way to efficiently ventilate the room while it is still being cooled by an air conditioner.

With a window in the back of the room and a door in the front of the room diagonally opposite each other left open by 20 centimeters each, the computer found it takes about 500 seconds for the air in the room to be completely replaced with fresh air.

Under two other configurations, it took roughly 100 seconds each time. One was with all the windows open 20 cm each, with transom windows facing the corridor open. The other was when all the windows were open by 20 cm each with doors at the front and back of the room open 40 cm each.

The first simulation required more time than the other two to ventilate the room because the open window area was smaller. But the amount of air replaced in the first setting was calculated at 1,190 cubic meters per hour.

According to the researchers, when that is converted into the amount of air ventilated per person in the room, it is equivalent to the ventilation standards for a common office under the law.
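The figures are easy to verify with simple arithmetic. The sketch below assumes an office ventilation standard of about 30 cubic meters per hour per person and an 8 m × 8 m room with a 3 m ceiling (both our assumptions, not figures from the article):

```python
AIRFLOW = 1190        # m^3 of air replaced per hour (first configuration)
STUDENTS = 40
OFFICE_STANDARD = 30  # m^3/h per person, assumed legal ventilation standard

per_person = AIRFLOW / STUDENTS
print(f"{per_person:.2f} m^3/h per person")  # 29.75, right at the ~30 standard

# Cross-check against the ~500 s exchange time, assuming 8 m x 8 m x 3 m.
room_volume = 8 * 8 * 3                      # m^3
exchange_s = room_volume / AIRFLOW * 3600
print(f"{exchange_s:.0f} s to replace the room's air")  # roughly 580 s
```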

The team concluded that a room can be adequately ventilated by opening windows diagonally opposite from each other when accounting for air conditioning efficiency in the summer and heating in the winter.


The Supercomputer Breaking Online Gaming Records and Modeling COVID-19 – BioSpace

Humanity is obsessed with making and breaking records in absolutely everything; just ask the good people at Guinness. In science, we don't exactly have a land-speed record for sequencing a genome or characterizing a protein, but we do know how long it takes to discover a therapeutic (typically 1 to 6 years) and get it to market (another decade, with all the tests and trials required). Even then, only about 10% get approved. We have gone from identifying a new virus to having multiple vaccine candidates in clinical testing within 6 months; that is Earth-shattering record breaking. This was unthinkable with SARS in the mid-2000s, but our rapidly advancing technology, and researchers dropping everything to work on SARS-CoV-2, have made the next-to-impossible a reality.

Scaled Up Computing for Record Breaking Games

A big part of this has been global advancements in computing and processing power, leveraging the power of the cloud. Hadean, a UK-based company, has developed a cloud-native supercomputing platform. The Hadean Platform, a distributed computing platform, streamlines running applications in the cloud by removing excessive middleware and helping to scale the process, a journey that has taken the company from the world of gaming to modeling a pandemic.

"Our cardinal application is Aether Engine, a spatial simulation engine, but we also have Mesh, the big data framework, and we have Muxer, which is a dynamic content delivery network for high-performance workloads," said Miriam Keshani, VP of Operations at Hadean.

They took Aether Engine to the biggest gaming conference around, the Game Developers Conference (GDC) in San Francisco, and were instantly drawn to massive online gaming, specifically EVE Online. The game's makers had demonstrated record-breaking massive-scale battles, but that often meant slowing the game down.

"Fast forward to GDC 2019. We were there with the makers of EVE Online, CCP Games, and together broke their world record for the most players in a single game, with 14,000 connected clients, a mixture of human and AI," says Keshani.

The company has continued to work with CCP Games as well as Microsoft's Minecraft. In parallel, Hadean also took Aether Engine to a whole new level: the molecular level.

Spatial Engines, Scale and Biology

Hadean and Dr. Paul Bates at the Francis Crick Institute in London partnered to investigate protein-protein interactions. The group is pioneering a new technique in the field called Cross-Docking, an approach for finding the best holo structures among the multiple structures available for a target protein.

"The formation of specific protein-protein interactions is often a key to understanding function. Proteins are flexible molecules, and as they interact with each other they change shape / flex in response to each other. These can be major structural changes, or relatively minor movements, but either way a significant challenge in the field is being able to a priori predict the extent of such conformational structure changes and the flexibility of each target," Bates said.

The method can be used to predict protein binding sites, useful for studying disease and for drug design; however, it requires a lot of processing power. This is where Aether Engine comes in.

"Despite promising results, this method's additional pre-processing steps (to choose the best input structures) make it practically difficult to do at scale," Bates said.

"Publicly available docking servers rely on shared cloud resources, so a full docking run of all 56 protein pairs investigated [at the Crick Institute] takes weeks to complete. We used Aether Engine to sample tens of thousands of possible conformations for 56 protein pairs, profiled by potential energy, and selected candidates for docking according to features in this energy space," Bates said. "This sophisticated sampling of inputs using Aether Engine led to a significant reduction in computation time and negated any additional burden brought on by this pre-processing step."

The research found a 10% uplift in quality compared to other approaches, and Aether Engine, run as a publicly available server, significantly reduced bottlenecks around pre-processing and docking.

Modeling Spread of A New Disease

One of the first things we learned about SARS-CoV-2 is how it gains entry to our cells. The spike protein on the viral envelope binds to ACE2, a receptor on the surface of epithelial cells. ACE2 effectively acts as a gateway for SARS-CoV-2 to enter our cells and begin replicating, spreading infection throughout the airway.

Buoyed by the success of their first study, the Bates Lab and Hadean renewed their partnership to focus on simulating COVID-19. Aether Engine simulates a model of the lungs, going down over twenty levels, called generations, at each of which the airway bifurcates.

"In the model, the virus is introduced at the top because we assume it was inhaled. There is a partial computational fluid dynamics element to it, as the virus travels down the airway according to a set diffusion rate. As it travels through the lungs there are elements, also known as agents in this type of model, that the virus agent is able to interact with," Keshani said.

The model relies on a number of parameters and can be used to measure the effect of treatments on viral replication in the lungs.

"How we tweak these parameters will depend on keeping track of the literature over time. If there is an interaction between these two agents, the virus will invade the cell and ultimately cause it to burst after replicating inside the cell. Some of these agents will go back into the airways and some into the interstitial lung space. But there's other elements at play: the immune system fights back, here shown by the antibody and T-cell response, and anti-viral drug interventions can be added to the mix," Keshani said.

The model does have its limitations. It simplifies the complexity of the human body and its interaction with disease, simulating the effect of what is happening rather than the actual events themselves.

"It's not always possible, or even necessary, to go into the level of detail that we'd love to see. It's about making trade-offs between what's useful and what's reality," Keshani added.

Supercomputing & Future of Drug Discovery

Drug discovery is a long and expensive process. In recent years, artificial intelligence platforms have been transforming the process, helping to screen drug candidates and shorten the time required to get to clinical trials. Remdesivir was identified by AI platforms scouring existing drugs for potential COVID-19 treatments. But machine and deep learning platforms require a lot of data to train on and make better predictions if they are going to break records in drug development outside of a global pandemic. Keshani thinks there is a role for supercomputing here as well.

"If you're able to create a simplification of a world that can model emergent behavior, which is the kind of simulation Aether Engine is able to scale massively, you can start building a picture of what could happen if you let different scenarios play out," Keshani said. "And if you run that same simulation with slightly different parameters 100,000 times or 200,000 times, it's building up a training set."
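The workflow Keshani describes, sweeping a simulation's parameters and collecting (parameters, outcome) pairs as training data, can be sketched generically. This is a toy stand-in, not Hadean's API; `run_simulation` and its parameters are invented for illustration:

```python
import random

def run_simulation(infectivity, immune_strength, seed):
    """Toy stand-in for one simulation run: returns a final viral-load metric."""
    rng = random.Random(seed)
    viral_load = 1.0
    for _ in range(50):  # 50 simulated time steps
        viral_load *= max(1.0 + infectivity - immune_strength * rng.random(), 0.0)
    return viral_load

# Sweep a small grid of parameters; each row pairs inputs with the outcome,
# building up a training set for a downstream machine-learning model.
grid = [(i / 10, s / 10) for i in range(1, 4) for s in range(1, 4)]
training_set = [((inf, imm), run_simulation(inf, imm, seed=k))
                for k, (inf, imm) in enumerate(grid)]
print(len(training_set))  # 9 rows
```

In production the grid would hold hundreds of thousands of runs, with each row feeding the kind of training set Keshani describes.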
