HORIZON BLOG: Research and innovation in the new seven-year budget – Science Business

The European Commission today published an independent report detailing the gaps in EU support for the European venture capital (VC) ecosystem, which may inform the work of the start-up funding programme, the European Innovation Council (EIC), now on track to become one of the biggest VC investors in Europe.

The report evaluated the effectiveness of different EU programmes supporting the venture market, such as InnovFin Equity and VentureEU, finding that while they follow a clear intention to develop the VC ecosystem, growth has been modest and the market remains significantly smaller than in the US and China.

To help VCs take full advantage of the services, the report recommends streamlining the application process for EU support programmes, reducing the administrative burden so the programmes are easier to run, and supporting the creation of large later-stage pan-European equity funds, which would be able to back bigger companies that currently find it easier to raise such funding in the US and China. The report also suggests the EU should allow its programmes to take higher risks when investing, to better achieve its policy priorities and crowd in private investors.

Yet EU support cannot plug all the holes. Growth takes time, and the development of risk capital markets often depends on a number of national-level policies that are beyond the scope of the evaluated EU programmes, according to the report.

Read the original here:

HORIZON BLOG: Research and innovation in the new seven-year budget - Science Business

Supercomputer Market to Exhibit Impressive Growth of CAGR during the period 202 – Business-newsupdate.com

The core objective of the report on the Supercomputer market is to determine the industry's performance over the forecast duration so as to help stakeholders make sound decisions and action plans that will guarantee success in the long run. The document sheds light on all factors that are promoting the growth of this vertical, followed by counter-approaches for the major challenges faced by businesses. Additionally, it covers the changes in this vertical due to the Covid-19 pandemic and highlights the top opportunities going forward.

Key pointers from Covid-19 impact analysis:

An overview of the regional landscape:

Request Sample Copy of this Report @ https://www.business-newsupdate.com/request-sample/14960

Other important takeaways from the Supercomputer market report:

Reasons to access this Report:

The key questions answered in this report:

Significant Point Mentioned in the Research report:

Table of Contents for market shares by application, research objectives, market sections by type and forecast years considered:

Supercomputer Market Share by Key Players: Here, capital, revenue, and price analysis by the business are included, along with other sections such as development plans, areas served, products offered by key players, alliances and acquisitions, and headquarters distribution.

Global Growth Trends: Industry trends, the growth rate of major producers, and production analysis are the segments included in this chapter.

Market Size by Application: This segment includes Supercomputer market consumption analysis by application.

Supercomputer market Size by Type: It includes analysis of value, product utility, market percentage, and production market share by type.

Profiles of Manufacturers: Here, commanding players of the global Supercomputer market are studied based on sales area, key products, gross margin, revenue, price, and production.

Supercomputer Market Value Chain and Sales Channel Analysis: It includes customer, distributor, market value chain, and sales channel analysis.

Market Forecast: This section is focused on production and production value forecast, key producers forecast by type, application, and regions.

Request Customization on This Report @ https://www.business-newsupdate.com/request-for-customization/14960

Excerpt from:

Supercomputer Market to Exhibit Impressive Growth of CAGR during the period 202 - Business-newsupdate.com

Global Supercomputer Market Is Expected To Show Significant Growth over the Forecast Period 2020-2027 The Courier – The Courier

According to the latest market research report "Supercomputer Market by Operating System (Windows, Linux, UNIX, Mixed), Processor Type (Intel, AMD, IBM, Others), End-user (Commercial Industries, Scientific Research & Academic Institutions, Government Entities), Region, Global Industry Analysis, Market Size, Share, Growth, Trends, and Forecast 2020 to 2027" published by Fior Markets, the global supercomputer market is expected to grow from USD 6.3 billion in 2019 to USD 13.0 billion by 2027, at a CAGR of 9.5% during the forecast period 2020-2027.
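
Those headline numbers can be sanity-checked with the standard CAGR formula. The short Python sketch below assumes annual compounding over the eight years from the 2019 base value to the 2027 forecast and simply reproduces the quoted 9.5% figure.

```python
# Sanity check of the CAGR quoted above (a sketch; assumes annual compounding
# over the 8 years between the 2019 base value and the 2027 forecast).
start_value = 6.3    # USD billion, 2019
end_value = 13.0     # USD billion, 2027 forecast
years = 2027 - 2019

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints roughly 9.5%, matching the report
```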

The report explores the current outlook in global and key regions from the perspective of major players, countries, product types, and end industries. This report analyzes top players in the global market and divides the market into several parameters. It covers essential components such as the size of the market as well as its share, along with forecast trends, specifications, and applications. The report examines data regarding the global Supercomputer market utilizing diverse methodologies. Each section of this report is elaborated with all the required data to gain knowledge about the market. The report clarifies the summary of present innovations, specifications, parameters, and creation in a detailed manner.

NOTE: Our analysts monitoring the situation across the globe explain that the market will generate remunerative prospects for producers after the COVID-19 crisis. The report aims to provide an additional illustration of the latest scenario, economic slowdown, and COVID-19 impact on the overall industry.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report-detail/418092/request-sample

Global Supercomputer Industry: Segmentation:

The report is segregated into different well-defined sections to provide the reader with an easy and understandable informational document. The segmentation of the global Supercomputer market segregates the market based on different aspects such as product, applications, end-users, and major regions. Further, each segment is elaborated, providing all the vital details along with growth analysis for the forecast period. These segments provide accurate calculations and forecasts for sales in terms of volume and value. This analysis can help customers increase their business and make calculated decisions.

The anticipated market share of each segment with respect to revenue and sales is cited in the global Supercomputer market report. Evaluation of pricing patterns of each product segment is also offered. Major types covered are:

Projections regarding the consumption value and consumption volume of each application segment are documented. Market share held by the listed application segments is also included. Major end-user applications for the global Supercomputer market:

Global Supercomputer Market: Competitive Segmentation:

The competitive landscape provides information about key company overviews, global presence, sales and revenue generated, market share, prices, and strategies used. The leading market players are analyzed on the basis of production volume, gross margin, market value, and price structure. The competitive market scenario among global Supercomputer market players will help the industry competitors in planning their strategies.

The major players in the industry market are:

NVIDIA Corp., Fujitsu Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Dell Technologies Inc., International Business Machines Corp., Huawei Investment & Holding Co. Ltd., Dawning Information Industry Co. Ltd., NEC Technologies India Private Limited, Atos SE, and Cray Inc., among others.

Request for Customization: https://www.fiormarkets.com/enquiry/request-customization/418092

Global Supercomputer Market: Regional Insights:

Geographically, this report is segmented into several key regions, with production, consumption, revenue (million USD), market share, and growth rate of the market in each. The regional analysis segment is a highly comprehensive part of the report on the global Supercomputer market. This section offers information on sales growth in these regions at the country level. Detailed volume analysis and region-wise market size analysis of the market have been given. Market size & share, by regions and countries/sub-regions: North America, Europe, Asia Pacific, South America, and the Middle East and Africa.

Moreover, the global Supercomputer market report has evaluated key market features, including revenue, price, capacity, capacity utilization rate, gross, production, production rate, consumption, import/export, supply/demand, cost, market share, CAGR, and gross margin. Also, SWOT analysis for new projects and feasibility analysis for new investments are included.

ACCESS FULL REPORT: https://www.fiormarkets.com/report/supercomputer-market-by-operating-system-windows-linux-unix-418092.html

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team (sales@fiormarkets.com), who will ensure that you get a report that suits your needs.

Contact Us
Mark Stone
Phone: +1-201-465-4211
Email: sales@fiormarkets.com
Web: www.fiormarkets.com

Read more here:

Global Supercomputer Market Is Expected To Show Significant Growth over the Forecast Period 2020-2027 The Courier - The Courier

What is a supercomputer? – CNBC

The race for the world's fastest supercomputer is on.

China held the lead for the last 5 years, but the United States has surged ahead with Summit. It's a $200 million government-funded supercomputer built for Oak Ridge National Laboratory in partnership with IBM and Nvidia.

Today's supercomputers are made up of thousands of connected processors, and their speed has grown exponentially over the past few decades. The first supercomputer, released in 1964, was called the CDC 6600. It used a single processor to achieve 3 million calculations per second. While that may sound impressive, it is tens of thousands of times slower than an iPhone.

The Lab Director of Oak Ridge, Thomas Zacharia, says, "I've always thought of supercomputing as a time machine, in the sense that it allows you to do things that most other people will be able to do in the future." As he explains, smartphones today are more powerful than the supercomputers used in the 1990s to work on the Human Genome Project.

Summit consists of over 36,000 processors from IBM and Nvidia that can perform 200 quadrillion calculations per second. Zacharia says that what a typical computer can do in 30 years, Summit will be able to accomplish in just an hour.
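
That "30 years versus one hour" comparison is easy to check on the back of an envelope. The sketch below uses only the 200-quadrillion-calculations-per-second figure quoted above and treats "calculations" loosely as operations per second; the characterization of the implied "typical computer" is our reading, not CNBC's.

```python
# Back-of-the-envelope check of the 30-years-vs-one-hour claim above.
SECONDS_PER_HOUR = 3_600
SECONDS_PER_YEAR = 365.25 * 24 * 3_600

summit_rate = 200e15                               # 200 quadrillion calculations/second
work_in_one_hour = summit_rate * SECONDS_PER_HOUR  # total operations Summit does in an hour

# Rate a "typical computer" would need to match that work in 30 years:
typical_rate = work_in_one_hour / (30 * SECONDS_PER_YEAR)
print(f"{typical_rate:.1e} calculations per second")  # ~7.6e11, i.e. under a teraflop,
                                                      # roughly a fast desktop machine
```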

Summit takes up 5,600 square feet of floor space and has nearly 200 miles of cable. It uses 4,000 gallons of water per minute to stay cool and consumes enough power to run 8,000 homes.

Supercomputers are used for functions like forecasting weather and climate trends, simulating nuclear tests, performing pharmaceutical research and cracking encryption keys. Some initial projects on deck for Summit include researching possible genetic predispositions to cancer or opioid addiction.

By surpassing China, the U.S. has escalated the tech rivalry between the two countries.

As Nvidia CEO Jensen Huang told CNBC, "There's no question the race is on, but this is not the space race, this is the race to knowledge."

But faster supercomputers are already on the horizon. The European Union, Japan and China are all developing machines they say will outperform Summit. The next big frontier is exascale computing, that is, computers that can perform a billion times a billion calculations per second.

John Kelly, IBM Senior Vice President of Cognitive Solutions and Research, says, "Think about what you can do with a system that every billionth of a second it does a billion calculations. We can model and simulate systems that we can't model and simulate today, and we can discover from the world's data insights into major breakthroughs in the area of healthcare, science, materials, etc."
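
Both of those descriptions are the same rate; as a quick worked equation (treating "calculations" loosely as floating-point operations):

```latex
% "A billion times a billion calculations per second" is the same thing as
% "a billion calculations every billionth of a second":
10^{9} \times 10^{9}
  \;=\; \frac{10^{9}\ \text{calculations}}{10^{-9}\ \text{s}}
  \;=\; 10^{18}\ \text{calculations per second}
  \;\approx\; 1\ \text{exaFLOPS}
```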

See the article here:

What is a supercomputer? - CNBC

Supercomputer may give us COVID meds to join vaccines – al.com

An Alabama scientist's research may lead to medicines that can team up with vaccines as another weapon against COVID-19, according to findings released today.

The team of University of Alabama in Huntsville biologist Dr. Jerome Baudry has already won an award for their work so far, and Baudry said the widespread scientific and technical cooperation to fight COVID reminds him of the space exploration of the '60s.

"No competitors, only collaborators, and a unique feeling of purpose," Baudry said.

Baudry's laboratory at UAH used a supercomputer to screen 50,000 natural compounds that might affect COVID. The computer found 125 candidates. Now, after testing at the University of Tennessee, 35 of those are being studied for possible medication ingredients.
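
The article does not describe the screening software itself, but the funnel it describes (50,000 compounds in, 125 candidates out, 35 advancing to lab testing) is the classic virtual-screening pattern. Below is a minimal, hypothetical sketch of that kind of filter; the function names, scoring convention and threshold are illustrative assumptions, not the UAH group's actual pipeline.

```python
# Hypothetical sketch of a virtual-screening filter like the workflow described
# above. The scoring function, names and threshold are illustrative only; the
# real work involves docking each compound against a viral protein target on a
# supercomputer.
def screen_compounds(compounds, score_fn, threshold=-8.0):
    """Return compounds whose predicted binding score beats the threshold.

    By convention, more negative docking scores suggest tighter predicted binding.
    """
    hits = []
    for name, structure in compounds.items():
        score = score_fn(structure)       # e.g. a docking score in kcal/mol
        if score <= threshold:
            hits.append((name, score))
    return sorted(hits, key=lambda item: item[1])

# Usage sketch: tens of thousands of natural products go in, a short ranked list
# of candidates comes out for live-virus testing in the lab.
# candidates = screen_compounds(natural_products, dock_against_viral_protein)
```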

"There is very good news on vaccine developments, and it is great," Baudry said today, "but it is important that we continue working on other pharmaceuticals. It's a bit like for the flu, where there are vaccines and there are pharmaceuticals, and they work together, not against each other. And what we learned here will be priceless to respond to other similar crises, if and when they show up in the future."

The Oak Ridge National Laboratory in Tennessee is leading the international effort to find medications to fight COVID.

"We used some of (their) data, and we basically added value to it," Baudry said. "Although it is unique in many ways (our focus on natural products, for instance), it is important to note that this project of ours is still integrated into the national COVID-19 research effort."

The first of the 35 compounds still in play is now being tested in a biosafe Memphis laboratory directed by Dr. Colleen Jonsson. "They use live virus infections of living cells grown in the equivalent of Petri dishes," Baudry said. "The chemicals that will have a good profile can then be tested in animal models using mice."

The Baudry lab's work has already won one of five Hyperion HPC Innovation Excellence Awards, UAH said. The awards recognize achievements by users of high-performance computers. "Hyperion, the award sponsor, is the most respected group of industry experts in (high-performance computing)," Baudry said. "I was very surprised about the award because I didn't even know that we had been under consideration. I was both very happy and very humbled."

Baudry has performed and led scientific research for 25 years and now holds the Mrs. Pei-Ling Chan Chair in the UAH Department of Biological Sciences. He said COVID has brought incredible cooperation and effort among scientists.

"There are no competitors, only collaborators, and a unique feeling of purpose that is absolutely wonderful," Baudry said. "This may be the most important experience of my professional life. It reminded me of what I read happened during the space exploration of the '60s. There is nothing we cannot do when we work together."

View original post here:

Supercomputer may give us COVID meds to join vaccines - al.com

Singapore Researchers Plug in to World’s Fastest Supercomputer – HPCwire

SINGAPORE, Nov. 30, 2020 A partnership between Singapore's national supercomputing resource, NSCC, Japan's RIKEN and RIST allows Singapore-based researchers to directly access the vast supercomputing resources of the world's fastest supercomputer, Fugaku. At 442 Petaflops (PFLOPS) of computing power, Fugaku is nearly three times more powerful than its nearest competitor and is at the top of the latest November 2020 edition of the global TOP500 supercomputer listing. Singapore researchers will now be able to apply for Fugaku's huge computing resources through regular project calls and connect directly via dedicated high-speed, high-bandwidth research optical fibre links of up to 100 Gbps. The accessibility to Fugaku's computing resources is in addition to Singapore's petascale compute power that local researchers already have available at NSCC.

"Singapore researchers will have the honour of being one of the first in Asia to have access to the amazing compute power of Fugaku," said Associate Professor Tan Tin Wee, Chief Executive of NSCC. The broad spectrum of HPC cooperation between the two centres includes joint training, workshops and summer schools; talent exchange and student internship programmes; HPC support for research and talent capability building in areas like high-impact HPC-intensive national research projects and student competitions; and direct high-speed data transfer and storage linkages with both RIKEN and RIST. The Fugaku access, in addition to the supercomputer resources already available at NSCC, will give local researchers the opportunity to think beyond the conventional and to perform research at much more complex and larger scales.

NSCC's national supercomputer is already functioning at more than 90% capacity, with users from Singapore's research institutes, institutes of higher learning (IHLs) and industry leveraging the resources for research, education and industry-based HPC projects. The demand for HPC is expected to increase exponentially in Singapore's drive towards a smart nation. The government announced a S$200 million upgrade of the current supercomputer resources at the SupercomputingAsia 2019 (SCA19) conference in March 2019.

"Singapore's national supercomputing resources are already stretched thin and the HPC upgrades will ensure local researchers and organisations are better enabled, equipped and prepared for a much more digitalised future," added A/Prof Tan Tin Wee, who said that the current 1 PFLOPS system will be enhanced to a 10-15 PFLOPS system over the next few years. "In the meantime, local researchers can be assured of additional seamless, continued access to HPC resources in Singapore and through our partnership with RIKEN and RIST."

"Even before being fully commissioned, Fugaku has already made strides in providing solutions for the COVID-19 pandemic by speeding up the identification of potential drug candidates and developing simulations that demonstrate the spread of coronavirus in indoor settings and on trains," said Prof Satoshi Matsuoka, Director of R-CCS and one of the architects of the Fugaku supercomputer. "We hope that by sharing such examples and Fugaku's resources we can inspire more of our researchers, and colleagues from other countries, to leverage the power of HPC in their own research work. This partnership between the top tier national HPC centres of Japan and Singapore is a significant step in that direction."

"RIST has been collaborating with NSCC by exchanging information on promotion of shared use of supercomputers since 2016. Project calls for supercomputer Fugaku have started this year, and NSCC and RIST have been exploring cooperation on supercomputer Fugaku. I believe that the new establishment of the partnership between NSCC and RIKEN will promote the collaboration between Singapore and Japan and we can work together to produce amazing outcomes on Fugaku," said Dr Hideyuki Takatsu, Managing Director of RIST.

Supercomputers have been instrumental in most of the world's major scientific advancements. These include enabling complex computational and data-intensive tasks to be completed much more quickly in fields as diverse as advanced scientific modelling & simulations, artificial intelligence, weather forecasting, climate research, oil and gas exploration, chemical and biomolecular modelling, and quantum computing. The research has led to modern scientific achievements like deciphering the human genome, enhanced air travel, space exploration, biomedicine, unravelling the secrets of the universe and even research on solutions for pandemics like COVID-19.

An MoU was endorsed on 16th September 2020 between R-CCS and NSCC, complementing an existing MoU with RIST. The collaboration with RIKEN covers access and data sharing for Fugaku, while RIST will work with NSCC on promoting HPC research utilisation by cooperating on HPC project research calls and shared supercomputing use. Singapore researchers who are interested in applying for HPC resources from Japan can do so at https://www.nscc.sg/open-calls-hpc-resources-from-japan/.

About the National Supercomputing Centre (NSCC) Singapore

The National Supercomputing Centre (NSCC) Singapore was established in 2015 and manages Singapore's first national petascale facility with available high performance computing (HPC) resources. As a National Research Infrastructure funded by the National Research Foundation (NRF), we support the HPC research needs of the public and private sectors, including research institutes, institutes of higher learning, government agencies and companies. With the support of its stakeholders, including the Agency for Science, Technology and Research (A*STAR), Nanyang Technological University (NTU), National University of Singapore (NUS), Singapore University of Technology and Design (SUTD), National Environment Agency (NEA) and Technology Centre for Offshore and Marine, Singapore (TCOMS), NSCC catalyses national research and development initiatives, attracts industrial research collaborations and enhances Singapore's research capabilities. For more information, please visit https://www.nscc.sg/.

About RIKEN Center for Computational Science (R-CCS)

As the leadership center of high-performance computing, we explore the science of computing, the science by computing, and the science for computing. We at the RIKEN Center for Computational Science (R-CCS) will carry out the following mission: Develop and operate the supercomputer Fugaku efficiently and effectively to serve as a core of high performance computing research, and further expand the number of users, improve the ease of use, and promote educational activities. Facilitate leading-edge infrastructures for research based on K and Fugaku, and moreover conduct translational research to elevate the operational technologies for large-scale computing facilities to world-leading levels. Conduct cutting-edge research on high performance computing, and promote the results through open-source software, allowing our deliverables to further incubate new values in the world's technological developments based on high-performance computing.

About Research Organization for Information Science and Technology (RIST)

Research Organization for Information Science and Technology (RIST) is a general incorporated foundation which has been carrying out usage promotion services for the Japanese flagship computers (first the K computer, then the successor supercomputer Fugaku since 2020), commissioned by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), since 2014. Our scope includes project selection, user support, and helping to spread the research results of projects. In addition, since 2017, RIST has also taken on the role of operation office for the innovative High Performance Computing Infrastructure (HPCI). Within the framework of HPCI, we are in charge of managing computational resources and promoting usage.

Source: NSCC

Read the original post:

Singapore Researchers Plug in to World's Fastest Supercomputer - HPCwire

GENCI Supercomputer Simulation Illuminates the Dark Universe – HPCwire

What we can see and touch are, in the scheme of the universe, relatively minor components, with visible matter and tangible mass constituting just 16 percent of the universe's mass and 30 percent of its energy, respectively. The remainder consists of dark matter and dark energy, which are invisible and intangible, making supercomputer simulations an integral part of the investigative workflow for understanding these cosmic forces. Now, a team of researchers from 16 different institutions across five countries has announced the results of the Extreme-Horizon simulation, a massive, supercomputer-powered simulation of the formation of galaxies that tested assumptions about the nature of dark energy and dark matter.

The Extreme-Horizon simulation ran on Joliot-Curie, a supercomputer owned by GENCI and hosted by CEA at TGCC. Among Joliot-Curie's four partitions (one Intel Skylake partition, one Intel Knights Landing partition, one AMD Epyc Rome partition and one Intel Cascade Lake partition), the AMD partition is the most powerful (7.0 Linpack petaflops), placing 38th on the most recent Top500 list of the world's most powerful publicly ranked supercomputers.

Using Joliot-Curie, the research team simulated how cosmic structures have evolved from the Big Bang through the present day, crunching over three terabytes of data at multiple points throughout the simulation.

Extreme-Horizon yielded some important results for astrophysicists. First, the generally higher resolution meant that Extreme-Horizon was able to paint a picture of how cold gases pooled in galaxies in low-density regions of space and how new galaxies formed in the early days of the post-Big Bang universe.

Second, the simulation produced a correction factor for black holes that obscure our view of intergalactic hydrogen clouds here on Earth. With that correction factor in hand, astrophysicists will be better able to characterize those clouds and the trends in the distribution of matter in the universe.

Extreme-Horizon is one of the Grand Challenges undertaken by GENCI, France's high-performance computing center, to test the abilities of its supercomputing systems. "These Grand Challenges represent a unique opportunity for selected scientists to gain access to the supercomputer's resources, enabling them to make major advances, or even achieve world firsts," GENCI wrote in a press release.

About the research

The research discussed in this article was published as a letter to the editor titled "Formation of compact galaxies in the Extreme-Horizon simulation" in the November 2020 issue of the journal Astronomy & Astrophysics. Twenty-one authors across 16 institutions in five countries contributed to the letter, which can be read in full at this link.

See original here:

GENCI Supercomputer Simulation Illuminates the Dark Universe - HPCwire

Pawsey’s Galaxy Supercomputer Aids Telescope in Creating New Atlas of the Universe – HPCwire

Dec. 2, 2020 The Australian Square Kilometre Array Pathfinder (ASKAP), developed and operated by Australia's national science agency, CSIRO, mapped approximately three million galaxies in just 300 hours.

The Rapid ASKAP Continuum Survey is like a Google map of the universe where most of the millions of star-like points on the map are distant galaxies, about a million of which we've never seen before.

CSIRO Chief Executive Dr. Larry Marshall said ASKAP brought together world-class infrastructure with scientific and engineering expertise to unlock the deepest secrets of the universe.

"ASKAP is applying the very latest in science and technology to age-old questions about the mysteries of the universe and equipping astronomers around the world with new breakthroughs to solve their challenges," Dr. Marshall said.

It's all enabled by innovative receivers developed by CSIRO that feature phased array feed technology, which see ASKAP generate more raw data at a faster rate than Australia's entire internet traffic.

In a time when we have access to more data than ever before, ASKAP and the supercomputers that support it are delivering unparalleled insights and wielding the tools that will underpin our data-driven future to make life better for everybody.

Minister for Industry, Science and Technology, Karen Andrews said ASKAP is another outstanding example of Australia's world-leading radio astronomy capability.

"ASKAP is a major technological development that puts our scientists, engineers and industry in the driver's seat to lead deep space discovery for the next generation."

"This new survey proves that we are ready to make a giant leap forward in the field of radio astronomy," Minister Andrews said.

The telescope's key feature is its wide field of view, generated by new CSIRO-designed receivers, enabling ASKAP to take panoramic pictures of the sky in amazing detail.

Using ASKAP at CSIRO's Murchison Radio-astronomy Observatory (MRO) in outback Western Australia, the survey team observed 83 percent of the entire sky.

The initial results are published in the Publications of the Astronomical Society of Australia.

This record-breaking result proves that an all-sky survey can be done in weeks rather than years, opening new opportunities for discovery.

The new data will enable astronomers to undertake statistical analyses of large populations of galaxies, in the same way social researchers use information from a national census.

"This census of the universe will be used by astronomers around the world to explore the unknown and study everything from star formation to how galaxies and their super-massive black holes evolve and interact," lead author and CSIRO astronomer Dr. David McConnell said.

With ASKAP's advanced receivers the RACS team only needed to combine 903 images to form the full map of the sky, significantly less than the tens of thousands of images needed for earlier all-sky radio surveys conducted by major world telescopes.

For the first time ASKAP has flexed its full muscles, building a map of the universe in greater detail than ever before, and at record speed.

"We expect to find tens of millions of new galaxies in future surveys," Dr. McConnell said.

The 13.5 exabytes of raw data generated by ASKAP were processed using hardware and software custom-built by CSIRO.

The Pawsey Supercomputing Centre's Galaxy supercomputer converted the data into 2-D radio images containing a total of 70 billion pixels.

The final 903 images and supporting information amount to 26 terabytes of data.

Pawsey Executive Director Mark Stickells said the supercomputing capability was a key part of ASKAP's design.

"The Pawsey Supercomputing Centre has worked closely with CSIRO and the ASKAP team since our inception and we are proud to provide essential infrastructure that is supporting science delivering great impact," Mr Stickells said.

The images and catalogs from the survey will be made publicly available through the CSIRO Data Access Portal and hosted at Pawsey.

Source: Annabelle Young, CSIRO

Read more from the original source:

Pawsey's Galaxy Supercomputer Aids Telescope in Creating New Atlas of the Universe - HPCwire

Cerebras CS-1 supercomputer uses the world's largest chip – Inceptive Mind

On the occasion of the SC20 conference, Cerebras Systems, in collaboration with researchers at the National Energy Technology Laboratory (NETL), showed that its latest single wafer-scale system, the Cerebras CS-1, could outperform one of the fastest supercomputers in the U.S. by more than 200 times.

The Cerebras CS-1 is the world's first wafer-scale computer system. It is 26 inches tall, fits in a standard data center rack, and is powered by a single Cerebras Wafer Scale Engine (WSE) chip. It is the world's largest chip, measuring 72 square inches (462 cm²) and the largest square that can be cut from a 300 mm wafer. All processing, memory, and core-to-core communication occur on the wafer. In total, there are 1.2 trillion transistors in an area of 72 square inches.

The wafer holds almost 400,000 individual processor cores, each with its private memory and a network router. The cores form a square mesh. Each router connects to the routers of the four nearest cores in the mesh. The cores share nothing; they communicate via messages sent through the mesh.

Cerebras CS-1 will be used especially for scientific research and science-related projects. The machine can solve a large, sparse, structured system of linear equations of the sort that arises in modeling physical phenomena like fluid dynamics using a finite-volume method on a regular three-dimensional mesh. Solving these equations is fundamental to such efforts as forecasting the weather; finding the best shape for an airplane's wing; predicting the temperatures and the radiation levels in a nuclear power plant; modeling combustion in a coal-burning power plant; and making pictures of the layers of sedimentary rock in places likely to contain oil and gas.
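
For readers unfamiliar with the workload, the paragraph above describes a sparse, structured linear system: a finite-volume discretisation on a regular 3-D mesh couples every cell only to its six nearest neighbours, which maps naturally onto the CS-1's mesh of cores. The sketch below is a generic 7-point-stencil Jacobi solve in NumPy, not Cerebras' code, just an illustration of the neighbour-exchange structure involved.

```python
# Generic sketch (not Cerebras' implementation) of the kind of sparse, structured
# solve described above: Jacobi iterations for a 7-point finite-volume stencil on
# a regular 3-D mesh, with zero values held on the boundary.
import numpy as np

def jacobi_3d(rhs, iterations=200):
    """Approximately solve A x = rhs, where A is the standard 7-point Laplacian."""
    x = np.zeros_like(rhs)
    for _ in range(iterations):
        new = x.copy()
        # Each interior cell is updated from its six mesh neighbours plus the
        # source term, mirroring the nearest-neighbour message passing of the
        # CS-1's grid of cores.
        new[1:-1, 1:-1, 1:-1] = (
            x[:-2, 1:-1, 1:-1] + x[2:, 1:-1, 1:-1] +
            x[1:-1, :-2, 1:-1] + x[1:-1, 2:, 1:-1] +
            x[1:-1, 1:-1, :-2] + x[1:-1, 1:-1, 2:] +
            rhs[1:-1, 1:-1, 1:-1]
        ) / 6.0
        x = new
    return x

# Usage sketch on a tiny mesh; a real CFD-scale problem has millions of cells and
# uses far better solvers (e.g. conjugate gradients, multigrid) than plain Jacobi.
field = jacobi_3d(np.random.rand(32, 32, 32))
```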

To achieve such results, Cerebras says there are three factors that enable the computer's speed: the CS-1's memory performance, the high bandwidth and low latency of the on-wafer communication fabric, and a processor architecture optimized for high-bandwidth computing.

In return, of course, you have a chip about 60 times the size of a large conventional chip like a CPU or GPU. It was built to provide a much-needed breakthrough in computer performance for deep learning.

The researchers used the CS-1 to do sparse linear algebra, typically used in computational physics and other scientific applications. Using the wafer, they achieved a performance more than 200 times faster than that of NETL's Joule 2.0 supercomputer. NETL's Joule is the 24th fastest supercomputer in the U.S. and 82nd fastest on a list of the world's top 500 supercomputers. It uses Intel Xeon chips with 20 cores per chip for a total of 16,000 cores.

Excerpt from:

Cerebras CS-1 supercomputer uses the world's largest chip - Inceptive Mind

New IBM encryption tools head off quantum computing threats – TechTarget

The messages surrounding quantum computers have almost exclusively focused on the sunny side of how these machines will solve infinitely complex problems today's supercomputers can't begin to address. But rarely, if ever, have the masters of hype focused on the dark side of what these powerful machines might be capable of.

For all the good they promise, quantum systems, specifically fault-tolerant quantum systems, are able to crumble the security that guards sensitive information on government servers and those of the largest Fortune 500 companies.

Quantum computers are capable of processing a vast number of numerical calculations simultaneously. Classical computers deal in ones and zeros, while a quantum computer can use ones and zeros as well as achieve a "superposition" of both ones and zeros.
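
In the standard textbook notation (nothing specific to IBM's machines), that superposition is written as a weighted combination of the two classical basis states:

```latex
% A single qubit's state as a superposition of the classical 0 and 1 states,
% with the squared amplitudes giving the probabilities of measuring each outcome:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
```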

Earlier this year, Google achieved quantum supremacy with its quantum system by solving a problem thought to be impossible to solve with classical computing. The system was able to complete a computation in 200 seconds that would take a supercomputer about 10,000 years to finish -- literally 1 billion times faster than any available supercomputer, company officials boasted.
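
The "1 billion times faster" figure follows directly from the two numbers quoted in that claim; the quick check below uses only those quoted figures, not any independent measurement.

```python
# Rough check of the "1 billion times faster" comparison quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3_600

classical_time = 10_000 * SECONDS_PER_YEAR   # ~10,000 years on a classical supercomputer
quantum_time = 200                           # 200 seconds on the quantum system

print(f"Speed-up factor: {classical_time / quantum_time:.1e}")  # ~1.6e9, on the
                                                                # order of a billion
```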

Quantum computers' refrigeration requirements and the cost of the system itself, which has not been revealed publicly, make it unlikely that IBM or other quantum makers could sell these machines the way they sell supercomputing systems. But quantum power is available through cloud services.

Faced with this upcoming superior compute power, IBM has introduced a collection of improved cloud services to strengthen users' cryptographic key protection as well as defend against threats expected to come from quantum computers.

Building on current standards used to transmit data between an enterprise and the IBM cloud, the new services secure data using a "quantum-safe" algorithm. Though quantum computers are years away from broad use, it's important to identify the potential risk that fault-tolerant quantum computers pose, including the ability to quickly break encryption algorithms to get sensitive data, IBM said.

Part of IBM's new strategic agenda includes the research, development and standardization of core quantum-safe cryptography algorithms as open source tools such as CRYSTALS and Open Quantum Safe grow in popularity.

The agenda also includes the standardization of governance tools and accompanying technologies to support corporate users as they begin integrating quantum systems alongside existing classical systems.

Some analysts applaud IBM for extending support for the new cloud services beyond the security needs of existing hybrid cloud users to quantum computers as a way of future-proofing the new offerings.

"With emerging technologies like quantum computing, users can't accurately predict how long it will be before they need services like this," said Judith Hurwitz, president of Hurwitz & Associates. "But prices [of quantum systems] could come down and the technology mature quicker than you anticipate, so you may need services like this to work across platforms. It could also be IBM just wanting to show how far ahead of everyone else they are."

While fault-tolerant quantum computers are a long way from reality for the vast majority of hackers, some analysts point out that adversarial governments could access such systems sooner rather than later to break the security schemes of the U.S. military and other federal government agencies.

"There could be legitimate concern about some well-organized and funded nation-states using quantum computers to crack algorithms to get at sensitive information, but there is little chance cybercriminals can get access to a quantum system anytime soon," said Doug Cahill, vice president and group director of cybersecurity with Enterprise Strategy Group. "But the short-term benefit here is future-proofing for mission critical workloads."

The need for data privacy is more critical as users become increasingly dependent on data, said Hillery Hunter, vice president and CTO of IBM Cloud, in a prepared statement. Security and compliance remain central to IBM's Confidential Computing initiative, Hunter said, as it is for corporate users in highly regulated industries where it's critical to keep proprietary data secure.

IBM also delivered an improved version of its Key Protect offering, designed for lifecycle management of encryption keys used in IBM Cloud services or in applications built by users. The new version can use quantum-safe cryptography-enabled Transport Layer Security (TLS) connections, which helps protect data during key lifecycle management.

The company also unveiled quantum-safe cryptography support features for application transactions. For instance, when cloud-native containerized applications run on Red Hat's OpenShift or IBM Cloud Kubernetes Service, secured TLS connections contribute quantum-safe cryptography support to application transactions, protecting data in transit against breaches.

IBM's Cloud Hyper Protect Crypto Service provides users with Keep Your Own Key features. The offering is built on FIPS-140-2 Level 4-certified hardware, which gives users exclusive key control and authority over data and workloads that are protected by the keys.

"What I like about this is you get to keep your own [encryption] keys for cloud data encryption, which is unique," said Frank Dzubeck, president of Communications Network Architects. "No one but you -- not even cloud administrators -- can access your data."

The product is primarily meant for application transactions where there is a more essential need for advanced cryptography. Users are allowed to keep their private keys secured within the cloud hardware security module and, at the same time, offload TLS to the IBM Cloud Hyper Protect Crypto Services, thereby creating a more secure connection to the web server. Users can also gain application-level encryption of sensitive data, including credit card numbers, before it gets stored in a database system.
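
As a concrete illustration of that last point (application-level encryption of a sensitive field before it reaches the database), here is a minimal sketch using the open-source Python cryptography package. This is not IBM's Hyper Protect Crypto Services API; in the Keep Your Own Key model described above, the key material would stay inside the cloud hardware security module rather than sitting in application memory.

```python
# Minimal sketch of application-level encryption before database storage.
# Illustrative only: uses the open-source `cryptography` package, not IBM's
# Hyper Protect Crypto Services; in the KYOK model the key never leaves the HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, obtained from / held by the key manager
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"         # illustrative test value
ciphertext = cipher.encrypt(card_number)      # only this ciphertext is stored in the DB

# Later, an authorised application with access to the key decrypts the field.
assert cipher.decrypt(ciphertext) == card_number
```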

Originally posted here:

New IBM encryption tools head off quantum computing threats - TechTarget

Supercomputer Market Overview with Qualitative analysis, Competitive landscape & Forecast by 2027 – The Market Feed

The Global Supercomputer Market report by Reports and Data is an all-encompassing study of the global Supercomputer market. The report serves as a prototype of the highly functional Supercomputer industry. Our panel of market researchers has performed quantitative and qualitative assessments of the global Supercomputer market dynamics in a bid to forecast the global market growth over the forecast period. Reports and Data have taken into consideration several factors, such as market penetration, pricing structure, product portfolios, end-user industries, and the key market growth drivers and constraints, to endow the readers with a sound understanding of the market. The report provides the reader with a panoramic view of the Supercomputer market, supported by key statistical data and industry-verified facts. Hence, it examines the size, share, and volume of the Supercomputer industry in the historical period to forecast the same valuations for the forecast period.

Request a sample copy of this report @ https://www.reportsanddata.com/sample-enquiry-form/2921

The Supercomputer market research report is broadly bifurcated in terms of product type, application spectrum, end-user landscape, and competitive backdrop, which would help readers gain more impactful insights into the different aspects of the market. Under the competitive outlook, the report's authors have analyzed the financial standing of the leading companies operating across this industry. The gross profits, revenue shares, sales volume, manufacturing costs, and the individual growth rates of these companies have also been ascertained in this section. Our team has accurately predicted the future market scope of the new entrants and established competitors using several analytical tools, such as Porter's Five Forces Analysis, SWOT analysis, and investment assessment.

Market segments by Top Manufacturers:

IBM Corporation, Cray Inc., Lenovo Inc., Sugon, Inspur, Dell EMC, Hewlett Packard Enterprise, Atos SE, FUJITSU, and Penguin Computing, among others.

Market split by Type, can be divided into:

Market split by Application, can be divided into:

!!! Limited Time DISCOUNT Available!!! Get Your Copy at a Discounted Price @ https://www.reportsanddata.com/discount-enquiry-form/2921

The latest report is furnished with a detailed examination of the Supercomputer market and the global economic landscape ravaged by the ongoing COVID-19 pandemic. The pandemic has significantly affected millions of people's lives. Besides, it has turned the global economy upside down, which has adversely impacted the Supercomputer business sphere. Thus, the report encompasses the severe effects of the coronavirus pandemic on the Supercomputer market and its key segments.

Geographical Scenario:

The global Supercomputer market report comprehensively studies the present growth prospects and challenges for the key regions of the Supercomputer market. The report continues to evaluate the revenue shares of these regions over the forecast timeline. It further scrutinizes the year-on-year growth rate of these regions over the projected years. The leading regions encompassed in this report:

Browse the full report description, along with the ToCs and List of Facts and Figures @ https://www.reportsanddata.com/report-detail/supercomputer-market

Key Coverage of the Report:

The report considers the following timeline for market estimation:

To get a customized sample of the report, click on the link mentioned alongside @ https://www.reportsanddata.com/sample-enquiry-form/2921

Thank you for reading our report. In case of further queries regarding the report or inquiry about its customization, please connect with us. We will ensure your report is well-suited to your requirements.

View post:

Supercomputer Market Overview with Qualitative analysis, Competitive landscape & Forecast by 2027 - The Market Feed

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research – TechCrunch

As Nvidia continues to work through its deal to acquire Arm from SoftBank for $40 billion, the computing giant is making another big move to lay out its commitment to investing in U.K. technology. Today the company announced plans to develop Cambridge-1, a new £40 million AI supercomputer that will be used for research in the health industry in the country, the first supercomputer built by Nvidia specifically for external research access, it said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy's and St Thomas' NHS Foundation Trust, King's College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company's second supercomputer in the country. The first is already in development at the company's AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will be playing in their work. On the other, none of them (nor their institutions) have the resources to meet that demand on their own. That's driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the U.K.: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in drug and vaccine discovery, an especially timely piece of news given that we are in a global health pandemic and all drug makers and researchers are on the hunt to understand more about, and build vaccines for, COVID-19.

The news is coinciding with Nvidia's industry event, the GPU Technology Conference.

"Tackling the world's most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI," said Jensen Huang, founder and CEO of Nvidia, in his keynote at the event. "The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."

The company plans to dedicate Cambridge-1 resources to four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; university-granted compute time; health-focused AI startups; and education for future AI practitioners. It's already building specific applications in areas, like the drug discovery work it's doing with GSK, that will be run on the machine.

The Cambridge-1 will be built on Nvidia's DGX SuperPOD system, which can deliver 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th fastest supercomputer in the world.

Number 29 doesn't sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market, while still not a mass-market enterprise, is increasingly developing more focus around specific areas of research and industries. In this case, it underscores how health research has become more complex, and how applications of artificial intelligence have both spurred that complexity and, in the case of building stronger computing power, provided a better route (some might say one of the only viable routes in the most complex of cases) to medical breakthroughs and discoveries.

It's also notable that the effort is being forged in the U.K. Nvidia's deal to buy Arm has seen some resistance in the market, with one group leading a campaign to stop the sale and take Arm independent, but this latest announcement underscores that the company is already involved pretty deeply in the U.K. market, bolstering Nvidia's case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

"AI and machine learning are like a new microscope that will help scientists to see things that they couldn't see otherwise," said Dr. Hal Barron, chief scientific officer and president, R&D, GSK, in a statement. "NVIDIA's investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry's greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK's new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.'s outstanding scientists."

"The use of big data, supercomputing and artificial intelligence have the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines," added James Weatherall, PhD, head of Data Science and AI, AstraZeneca, in his statement.

"Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding," said Sebastien Ourselin, head, School of Biomedical Engineering & Imaging Sciences at King's College London. "These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research; it will be truly transformational for patient health and treatment pathways."

Dr. Ian Abbs, chief executive & chief medical director of Guy's and St Thomas' NHS Foundation Trust, said: "If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare; more advanced AI means better care for our patients."

"Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic data sets," added Gordon Sanghera, CEO, Oxford Nanopore Technologies. "These complementary innovations in data analysis support a wealth of impactful science in the U.K., and critically, support our goal of bringing genomic analysis to anyone, anywhere."

More here:

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research - TechCrunch

With Crossroads Supercomputer, HPE Notches Another DOE Win – The Next Platform

When you come to the crossroads and make a big decision about selling your soul to the devil to get what you want, it is supposed to be a dramatic event, the stuff that legends are made of.

In this case, with the announcement of a $105 million deal for Hewlett Packard Enterprise to build the Crossroads supercomputer for the so-called Tri-Labs of the US Department of Energy (that would be Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, which are, among other things, responsible for managing the nuclear weapons stockpile for the US government), the drama has more to do with the changes in the Intel Xeon and Xeon Phi processor and Omni-Path interconnect roadmaps than anything else.

The DOE supercomputer architects always have to hedge their compute, networking, and storage bets across the Tri-Labs and so do their peers at Oak Ridge National Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory, and a few other national labs. You want commonality because this drives down costs and complexity, but you want distinction so you can push the limits on several technologies all at the same time because that is what HPC centers are actually for and it helps balance out risks when roadmaps get crumpled.

The Advanced Simulation and Computing (ASC) program at the DOE has funded the development of so many different computing architectures over many years that it is hard to keep track of them all. There are several streams of systems that are part of the National Nuclear Security Administration, which runs simulations relating to the nuclear stockpile, and there is an interleaving of machines such that Lawrence Livermore gets the biggest one in what is called the Advanced Technology Systems, or ATS, program and then either Los Alamos or Sandia gets the next one and they share it. Like this:

In the chart above, the ATS 2 system at Lawrence Livermore is the existing Sierra system, based on IBM Power9 processors and Nvidia V100 GPU accelerators hooked with Mellanox (now Nvidia) 100 Gb/sec InfiniBand interconnects. The ATS 4 system is Crossroads, the award for which is being announced now, and the ATS 5 machine, which is not shown on this old roadmap, is for the future El Capitan machine, built from future AMD Epyc CPUs and Radeon Instinct GPUs interlinked by a future rev of HPE's Cray Slingshot interconnect, which we detailed back in March and which is expected to surpass 2 exaflops in peak 64-bit floating point processing capacity when it is installed in 2022. The Commodity Technology Systems are less about creating thoroughbred supercomputers than regular workhorses that can pull the HPC plows for less money than the Advanced Technology Systems, which are first-of-a-kind systems that push the technology.

The current Trinity machine, shared by Los Alamos and Sandia and installed at Los Alamos, is the one that is coming to the end of its life and the one that Crossroads will replace (ATS 1 in the chart above). The plan was to have the Crossroads replacement ready to go sometime about now, when Trinity would be sunsetted, but processor roadmaps at Intel have been problematic during the 10 nanometer generations of Xeon SP and Xeon Phi processors, to the point where Intel killed off the Xeon Phi line in July 2018 and deprecated its 200 Gb/sec Omni-Path interconnect a year later. The future Xeon SP, Xeon Phi, and Omni-Path technologies were the obvious and easiest choices for the compute and networking in the Crossroads system, given the Trinity all-CPU design and Intel's desire to position Omni-Path as the successor to the Cray Aries XC interconnect used in the Trinity system. As it turns out, Intel has just this week spun out the Omni-Path business into a new company, called Cornelis Networks, founded by some InfiniBand innovators, and Omni-Path will live on in some form independent of Intel. So the technology used in Trinity will evolve and presumably be used in HPC and AI systems of the future. But that spinout did not come in time to keep Tri-Labs from choosing HPE's Cray Slingshot interconnect, a variant of Ethernet with HPC extensions and congestion control and adaptive routing that makes a hyperscaler envious, for Crossroads. For whatever reason, InfiniBand from Mellanox was not in the running.

The existing Trinity system is an all-CPU design, and Tri-Labs has been sticking with all-CPU systems for the machines that go into Los Alamos and Sandia under the ATS program; obviously, Lawrence Livermore chose a hybrid CPU-GPU machine for the Sierra and El Capitan systems. Again, this is about hedging technology bets as well as pushing the price/performance curves on a number of architectural fronts among the DOE labs. Trinity was built in stages, and that was not an architectural choice so much as a necessary one, as Jim Lujan, HPC program manager at Los Alamos, told us back in July 2016 when the Knights Landing Xeon Phi parallel processors were first shipping. Trinity was supposed to be entirely composed of these Xeon Phi processors, but they were delayed and the base machine was built from over 9,486 two-socket nodes using the 16-core Haswell Xeon E5-2698 v3 processors and another 9,934 nodes based on the 68-core Xeon Phi 7250 processors, for a total of 19,240 nodes with 979,072 cores with a peak performance of 41.5 petaflops at double precision. Trinity had 2.07 PB of main memory, and implemented a burst buffer on flash storage in the nodes that had a combined 3.7 PB of capacity and 3.3 TB/sec of sustained bandwidth. The parallel file system attached to Trinity had 78 PB of capacity and 1.45 TB/sec of sustained bandwidth.

Importantly, the Trinity system had about 8X the performance on the ASC workloads compared to the prior Cielo system that predated it, which was an all-CPU Cray XE6 system installed in 2013 based on AMD Opteron 6136 processors and the Cray Gemini 3D torus interconnect.

As it turns out, Crossroads is running about two years late, and that is both a good thing and a bad thing. The bad thing is that this means Trinity has to have its life extended to cover between now and when Crossroads is up and running in 2022. The good news is that the processor and interconnect technology will be all that much better when Crossroads is fired up. (We realize that this is a big presumption.)

The DOE and HPE are not saying much about Crossroads at the moment, but we do know that it will be based on the future Sapphire Rapids Xeon SP processor from Intel and that it will, like Trinity, be an all-CPU design. Sapphire Rapids, you will recall, is the CPU motor that is going into the Aurora A21 system at Argonne National Laboratory, but in that case the vast majority of the flops in the system will come from the six Xe discrete GPU accelerators attached to each pair of Sapphire Rapids processors, as we talked about in November last year. Crossroads will be based on the HPE Cray EX system design, formerly known as Shasta by Cray, and will use the Slingshot interconnect, as we said, as well as liquid cooling for the compute cabinets, which allows it to run faster than it otherwise might and to cool more efficiently. The system will run the Cray Programming Environment, including Cray's implementation of Linux and its compiler stack.

The Cori supercomputer at Lawrence Berkeley and the Trinity supercomputer at Tri-Labs were based on a mix of Xeon and Xeon Phi processors and the Aries interconnect. So it was reasonable to expect, given past history, that the Crossroads machine shared by Los Alamos and Sandia would use similar technology to the Perlmutter NERSC-9 system at Lawrence Berkeley, since these labs have tended to move to technologies at the same time over recent years. But that has not happened this time.

The Perlmutter and Crossroads machines are both based on the Shasta (now HPE Cray EX) system design and both use the Slingshot interconnect, but the resemblance ends there. Perlmutter is using a future Milan Epyc processor from AMD plus Ampere A100 GPU accelerators from Nvidia. The Perlmutter machine will be installed in phases, with the GPU-accelerated nodes (more than 6,000 GPUs) coming in late 2020 and the more than 3,000 CPU-only nodes coming in the middle of 2021. Crossroads has no accelerators and is using Sapphire Rapids Xeon SPs alone for compute, albeit ones that will support Intel's Advanced Matrix Extensions for boosting the performance of machine learning applications.

HPE says that on applications, Crossroads will have about 4X the oomph of Trinity, which should mean somewhere around 165 petaflops at peak double precision. This is quite a bit of oomph for an all-CPU system, but then again, it is dwarfed by the 537 petaflops that the Fugaku supercomputer at RIKEN laboratory in Japan, built by Fujitsu using its A64FX Arm chip with wonking vector engines, delivers at double precision.

Original post:

With Crossroads Supercomputer, HPE Notches Another DOE Win - The Next Platform

What happens when two planets crash together? This supercomputer has the answer – Digital Trends

"Obviously, we have no idea what really happens when planets collide, because we can't build planets in the lab and smash them together," said Jacob Kegerreis, a postdoctoral researcher in a specialist lab at the U.K.'s Durham University called the Institute for Computational Cosmology.

So Kegerreis and his colleagues did the next best thing: They booked time on a supercomputer and used it to run hundreds of simulations of planets crashing into one another, a demolition derby for astrophysics geniuses.

"It's all about doing calculations," he told Digital Trends. "There's no reason you couldn't do it by hand, it would just take forever. It's really exactly how video games work. If you've got a character, even a 2D one like Mario, and you need them to jump and fall back down under gravity, that means the program has an equation for gravity, and it basically does a very, very simple simulation to work out how quickly that character falls. It's really the same principle. We just try and use slightly more careful equations to do these more physics-based things."
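
In code, the kind of very, very simple simulation Kegerreis describes really is just a handful of repeated updates. Here is a minimal sketch in Python, using made-up game-style numbers rather than anything from the study:

# A Mario-style jump: gravity changes velocity, velocity changes position.
GRAVITY = -20.0   # arbitrary units per second squared
DT = 0.02         # time step in seconds

y, vy, t = 0.0, 8.0, 0.0   # start on the ground with an upward kick
while True:
    vy += GRAVITY * DT     # the "equation for gravity"
    y += vy * DT           # move the character
    t += DT
    if y <= 0.0:           # back on the ground
        break
print(f"airborne for about {t:.2f} seconds")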

Of course, what are to Kegerreis slightly more involved equations are, to the rest of us, mind-boggling magnitudes of complexity. When the researchers working on the project created their model planets, they represented them as millions of particles, each pulling on the others under gravity and pushing back with material pressure. The model takes into account painstakingly accurate real-life details, such as how planetary materials like rock and iron actually behave at different temperatures and densities, how gravity and pressure impact the particles, and how these particles interact according to the equations of hydrodynamics.
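
The heart of each time step in such a particle model is the mutual pull of gravity, which scales the Mario example up to every particle attracting every other. A toy version of that inner loop might look like the sketch below, with a handful of particles in made-up units; the team's real SWIFT solver also adds material pressure, equations of state and far cleverer algorithms:

import numpy as np

G, DT, N = 1.0, 0.001, 100          # made-up gravitational constant, time step, particle count
rng = np.random.default_rng(0)
pos = rng.normal(size=(N, 3))       # particle positions
vel = np.zeros((N, 3))              # particle velocities
mass = np.ones(N)

def step(pos, vel, mass):
    """Advance the particles by one time step under mutual gravity."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos - pos[i]                                   # vectors to every other particle
        dist3 = (np.sum(diff ** 2, axis=1) + 1e-6) ** 1.5     # softened to avoid dividing by zero
        acc[i] = G * np.sum(mass[:, None] * diff / dist3[:, None], axis=0)
    vel = vel + acc * DT
    pos = pos + vel * DT
    return pos, vel

pos, vel = step(pos, vel, mass)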

"We need a supercomputer because we require many millions of particles to resolve the details of what happens in these messy collisions, especially with low-density atmospheres," he said. "This means a daunting number of calculations to do very many times in order to see how the system evolves throughout the impact."

The simulation in the team's most recent study potentially sheds some light on the creation of the moon. Today's most widely accepted theory is that the moon was formed as the result of a collision between Earth and another planet about the size of Mars. It is hypothesized that the debris from this impact became trapped in orbit around Earth and eventually coalesced into the moon.

But although this much is broadly agreed upon, Kegerreis said that there are "maybe five or six plausible ideas" for the specific type of impact scenario. By modeling these, the team was able to simulate how much of Earth's atmosphere would have been lost in the most popular moon-forming scenarios. The numbers, he said, range from 10 to 60 percent of the atmosphere, depending on the precise angle, speed, and planet sizes.

"If we can understand the history of Earth's atmosphere well enough, then it might help us narrow down how erosive an impact the moon-forming collision should have been," he said. "Or at least to perhaps rule out scenarios that remove far too much or far too little atmosphere to fit the observations."

Research such as this could therefore help answer some fundamental questions about why the observable universe is the way it is. "[In this case,] we weren't sure whether it was really easy or really hard for a giant impact to remove all of an atmosphere, or whether it was possible to get middling erosion as opposed to all or nothing," Kegerreis said. "We also looked at the possibility of the impactor delivering atmosphere if it had some of its own to begin with."

While this project may be concluded, Kegerreis is excited about the future possibilities. He's also enthused by the development of the simulation code the team wrote to carry out its work, in association with a group of astronomers and computer scientists. Called SWIFT, it's an open-source hydrodynamics and gravity program that can be used by researchers anywhere in the world (so long as they have remote access to a supercomputer) to run simulations of astrophysical objects, including planets, galaxies, or even, conceivably, the whole universe.

"These same kinds of simulation, in terms of the physics that's going on underneath, can be used for loads of different things," Kegerreis said. "Here in Durham, the main thing that people actually use similar simulations for is galaxy formation and much wider cosmology things, where you're evolving dark matter, stars, and galaxies, rather than smaller things like planets. We can use the same simulation code to do those different things just by putting in different variations of the specific equations that we're solving. But it's the same basic structure."

The lack of real-time graphics (think endless code running on a screen, rather than Civilization on a galactic scale) means this won't have the makings of a hit video game any time soon. However, it might just wind up helping reveal some of the secrets of the universe, from the Big Bang to the present day. As trade-offs go, that's not a bad one.

A paper describing the latest project, titled "Atmospheric Erosion by Giant Impacts onto Terrestrial Planets: A Scaling Law for any Speed, Angle, Mass, and Density," was recently published in The Astrophysical Journal Letters.

Read this article:

What happens when two planets crash together? This supercomputer has the answer - Digital Trends

Supermicro Details Its Hardware for MN-3, the Most Efficient Supercomputer in the World – HPCwire

In June, HPCwire highlighted the new MN-3 supercomputer: a 1.6 Linpack petaflops system delivering 21.1 gigaflops per watt, making it the most energy-efficient supercomputer in the world, at least according to the latest Green500 list, the Top500's energy-conscious cousin. The system was built by Preferred Networks, a Japanese AI startup that used its in-house MN-Core accelerator to help deliver MN-3's record-breaking efficiency. Collaborating with Preferred Networks was modular system manufacturer Supermicro, which detailed the hardware and processes behind the chart-topping green giant in a recent report.

As Supermicro tells it, Preferred Networks was facing challenges on two fronts: first, the need for a much more powerful system to solve its clients' deep learning problems; and second, the exorbitant operating costs of the system it was envisioning. "With increasing power costs, a large system of the size PFN was going to need, the operating costs of both the power and associated cooling would exceed the budget that was allocated," Supermicro wrote. "Therefore, the energy efficiency of the new solution would have to be designed into the system, and not become an afterthought."

Preferred Networks turned to partnerships to help resolve these problems. First, it worked with researchers at Kobe University to develop the MN-Core accelerator, specializing it for deep learning training and optimizing it for energy efficiency. After successfully benchmarking the MN-Core above one teraflop per watt in testing, the developers turned to the rest of the system, and that's where Supermicro entered the picture.

On a visit to Japan, Clay Chen, general manager of global business development at Supermicro, sat down with Preferred Networks to hear what they needed.

"At first I was asking them, you know, what type of GPU they are using," Chen said in an interview with HPCwire. "They say: oh, no, we're not using any type, we're going to develop our own GPU. And that was quite fascinating to me."

Preferred Networks selected Supermicro for the daunting task: fitting four MN-Core boards, two Intel Xeon Platinum CPUs, up to 6TB of DDR4 memory and Intel Optane persistent memory modules in a single box without sacrificing the energy efficiency of the system.

Supermicro based its design on one of its preexisting GPU server models that was designed to house multiple GPUs (or other accelerators) and high-speed interconnects. Working with Preferred Networks engineers, Supermicro ran simulations to determine the optimal chassis design and component arrangement to ensure that the MN-Core accelerators would be sufficiently cooled and efficiency could be retained.

Somewhat surprisingly, the custom server is entirely fan-cooled. "Our concept is: if we can design something with fan cooling, why would we want to use liquid cooling?" Chen said. "Because essentially, all the heat being pulled out from the liquid is going to cool somewhere. When you take the heat outside the box, you still need to cool the liquid with a fan."

The end result, a customized Supermicro server just for Preferred Networks, is pictured below.

The server's four MN-Core boards are connected to PCIe x16 slots on a Supermicro motherboard and to the MN-Core Direct Connect board, which enables high-speed communication between the MN-Core boards.

These custom servers, each 7U high, were then rack-mounted into what would become the MN-3 supercomputer: 48 servers, four interconnect nodes and five 100GbE switches. In total, the system's 2,080 CPU cores, delivering 1,621 Linpack teraflops of performance, required just 77 kW of power for the Top500 benchmarking run. That efficiency is only about 15 percent short of what would be needed to deliver an exaflops within the 40-megawatt power envelope targeted by planned exascale systems like Aurora, Frontier and El Capitan.
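
Those figures are easy to sanity-check. A quick back-of-the-envelope calculation, using only the numbers quoted above, recovers both the Green500 efficiency and the comparison to a 40 MW exascale target:

# Back-of-the-envelope check of MN-3's efficiency and the exascale comparison.
linpack_teraflops = 1_621        # MN-3's Top500 run
power_kw = 77                    # power drawn during that run

gflops_per_watt = linpack_teraflops * 1e3 / (power_kw * 1e3)
print(f"{gflops_per_watt:.1f} gigaflops per watt")            # ~21.1

needed = 1e18 / 40e6 / 1e9       # GF/W needed for 1 exaFLOPS in a 40 MW envelope
print(f"{needed:.1f} GF/W needed; MN-3 is {gflops_per_watt / needed:.0%} of the way there")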

"We are very pleased to have partnered with Supermicro, who worked with us very closely to build MN-3, which was recognized as the world's most energy-efficient supercomputer," said Yusuke Doi, VP of computing infrastructure at Preferred Networks. "We can deliver outstanding performance while using a fraction of the power that was previously required for such a large supercomputer."

Go here to read the rest:

Supermicro Details Its Hardware for MN-3, the Most Efficient Supercomputer in the World - HPCwire

I confess, I’m scared of the next generation of supercomputers – TechRadar

Earlier this year, a Japanese supercomputer built on Arm-based Fujitsu A64FX processors snatched the crown of the world's fastest machine, blowing incumbent leader IBM Summit out of the water.

Fugaku, as the machine is known, achieved 415.5 petaFLOPS as measured by the popular High Performance Linpack (HPL) benchmark, almost three times the score of the IBM machine (148.5 petaFLOPS).

It also topped the rankings for the Graph500, HPL-AI and HPCG workloads - a feat never before achieved in the world of high performance computing (HPC).

Modern supercomputers are now edging ever-closer to the landmark figure of one exaFLOPS (equal to 1,000 petaFLOPS), commonly described as the exascale barrier. In fact, Fugaku itself can already achieve one exaFLOPS, but only in lower precision modes.

The consensus among the experts we spoke to is that a single machine will breach the exascale barrier within the next 6 - 24 months, unlocking a wealth of possibilities in the fields of medical research, climate forecasting, cybersecurity and more.

But what is an exaFLOPS? And what will it mean to break the exascale milestone, pursued doggedly for more than a decade?

To understand what it means to achieve exascale computing, it's important to first understand what is meant by FLOPS, which stands for floating point operations per second.

A floating point operation is any mathematical calculation (i.e. addition, subtraction, multiplication or division) that involves a number containing a decimal (e.g. 3.0 - a floating point number), as opposed to a number without a decimal (e.g. 3 - a binary integer). Calculations involving decimals are typically more complex and therefore take longer to solve.

An exascale computer can perform 10^18 (one quintillion, or 1,000,000,000,000,000,000) of these mathematical calculations every second.

For context, to equal the number of calculations an exascale computer can process in a single second, an individual would have to perform one sum every second for 31,688,765,000 years.

The PC I'm using right now, meanwhile, is able to reach 147 billion FLOPS (or 0.00000014723 exaFLOPS), outperforming the fastest supercomputer of 1993, the Intel Paragon (143.4 billion FLOPS).
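
Those conversions are easy to verify with a couple of lines of arithmetic (the 147-gigaFLOPS desktop figure is simply the one quoted above):

EXA = 1e18
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# One hand calculation per second, forever, to match one second of exascale work:
print(EXA / SECONDS_PER_YEAR / 1e9, "billion years")   # roughly 31.7

# A 147-gigaFLOPS desktop expressed in exaFLOPS:
print(147e9 / EXA, "exaFLOPS")                          # 1.47e-07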

This both underscores how far computing has come in the last three decades and puts into perspective the extreme performance levels attained by the leading supercomputers today.

The key to building a machine capable of reaching one exaFLOPS is optimization at the processing, storage and software layers.

The hardware must be small and powerful enough to pack together and reach the necessary speeds, the storage capacious and fast enough to serve up the data and the software scalable and programmable enough to make full use of the hardware.

For example, there comes a point at which adding more processors to a supercomputer will no longer affect its speed, because the application is not sufficiently optimized. The only way governments and private businesses will realize a full return on HPC hardware investment is through an equivalent investment in software.
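
One standard way to see that ceiling is Amdahl's law, which is not named in the article but captures the same point: if even a small fraction of an application cannot be parallelized, the achievable speedup flattens out no matter how many processors are added. A small illustration, with an arbitrary 95-percent-parallel workload:

def amdahl_speedup(parallel_fraction, processors):
    """Best-case speedup when only part of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

for n in (10, 100, 1_000, 100_000):
    print(f"{n:>7} processors -> {amdahl_speedup(0.95, n):5.1f}x speedup")   # tops out near 20x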

Organizations such as the Exascale Computing Project (ECP) and the ExCALIBUR programme are interested in solving precisely this problem. Those involved claim a renewed focus on algorithm and application development is required in order to harness the full power and scope of exascale.

Achieving the delicate balance between software and hardware, in an energy-efficient manner and without an impractically low mean time between failures (MTBF) score (the time that elapses before a system breaks down under strain), is the challenge facing the HPC industry.

"15 years ago, as we started the discussion on exascale, we hypothesized that it would need to be done in 20 megawatts (MW); later that was changed to 40 MW. With Fugaku, we see that we are about halfway to a 64-bit exaFLOPS at the 40 MW power envelope, which shows that an exaFLOPS is in reach today," explained Brent Gorda, senior director of HPC at UK-based chip designer Arm.

"We could hit an exaFLOPS now with sufficient funding to build and run a system. [But] the size of the system is likely to be such that MTBF is measured in a single-digit number of days, based on today's technologies and the number of components necessary to reach these levels of performance."

When it comes to building a machine capable of breaching the exascale barrier, there are a number of other factors at play, beyond technological feasibility. An exascale computer can only come into being once an equilibrium has been reached at the intersection of technology, economics and politics.

"One could in theory build an exascale system today by packing in enough CPUs, GPUs and DPUs. But what about economic viability?" said Gilad Shainer of NVIDIA Mellanox, the firm behind the InfiniBand technology (the fabric that links the various hardware components) found in seven of the ten fastest supercomputers.

"Improvements in computing technologies, silicon processing, more efficient use of power and so on all help to increase efficiency and make exascale computing an economic objective, as opposed to a sort of sporting achievement."

According to Paul Calleja, who heads up computing research at the University of Cambridge and is working with Dell on the Open Exascale Lab, Fugaku is an excellent example of what is theoretically possible today, but it is also impractical by virtually any other metric.

"If you look back at Japanese supercomputers, historically there's only ever been one of them made. They have beautifully exquisite architectures, but they're so stupidly expensive and proprietary that no one else could afford one," he told TechRadar Pro.

"[Japanese organizations] like these really large technology demonstrators, which are very useful in industry because it shows the direction of travel and pushes advancements, but those kinds of advancements are very expensive and not sustainable, scalable or replicable."

So, in this sense, there are two separate exascale landmarks: the theoretical barrier, which will likely be met first by a machine of Fugaku's ilk (a technology demonstrator), and the practical barrier, which will see exascale computing deployed en masse.

Geopolitical factors will also play a role in how quickly the exascale barrier is breached. Researchers and engineers might focus exclusively on the technological feat, but the institutions and governments funding HPC research are likely motivated by different considerations.

"Exascale computing is not just about reaching theoretical targets, it is about creating the ability to tackle problems that have been previously intractable," said Andy Grant, vice president of HPC & Big Data at IT services firm Atos, which is influential in the fields of HPC and quantum computing.

"Those that are developing exascale technologies are not doing it merely to have the fastest supercomputer in the world, but to maintain international competitiveness, security and defence."

"In Japan, their new machine is roughly 2.8x more powerful than the now-second-place system. In broad terms, that will enable Japanese researchers to address problems that are 2.8x more complex. In the context of international competitiveness, that creates a significant advantage."

In years gone by, rival nations fought it out in the trenches or competed to see who could place the first human on the moon. But computing may well become the frontier at which the next arms race takes place; supremacy in the field of HPC might prove just as politically important as military strength.

Once exascale computers become an established resource - available for businesses, scientists and academics to draw upon - a wealth of possibilities will be unlocked across a wide variety of sectors.

HPC could prove revelatory in the fields of clinical medicine and genomics, for example, which require vast amounts of compute power to conduct molecular modelling, simulate interactions between compounds and sequence genomes.

In fact, IBM Summit and a host of other modern supercomputers are being used to identify chemical compounds that could contribute to the fight against coronavirus. The Covid-19 High Performance Computing Consortium assembled 16 supercomputers, accounting for an aggregate of 330 petaFLOPS - but imagine how much more quickly research could be conducted using a fleet of machines capable of reaching 1,000 petaFLOPS on their own.

Artificial intelligence, meanwhile, is another cross-disciplinary domain that will be transformed with the arrival of exascale computing. The ability to analyze ever-larger datasets will improve the ability of AI models to make accurate forecasts (contingent on the quality of data fed into the system) that could be applied to virtually any industry, from cybersecurity to e-commerce, manufacturing, logistics, banking, education and many more.

As explained by Rashid Mansoor, CTO at UK supercomputing startup Hadean, the value of supercomputing lies in the ability to make an accurate projection (of any variety).

"The primary purpose of a supercomputer is to compute some real-world phenomenon to provide a prediction. The prediction could be the way proteins interact, the way a disease spreads through the population, how air moves over an aerofoil or how electromagnetic fields interact with a spacecraft during re-entry," he told TechRadar Pro.

"Raw performance, such as the HPL benchmark, simply indicates that we can model bigger and more complex systems to a greater degree of accuracy. One thing that the history of computing has shown us is that the demand for computing power is insatiable."

Other commonly cited areas that will benefit significantly from the arrival of exascale include brain mapping, weather and climate forecasting, product design and astronomy, but it's also likely that brand new use cases will emerge as well.

"The desired workloads and the technology to perform them form a virtuous circle. The faster and more performant the computers, the more complex problems we can solve and the faster the discovery of new problems," explained Shainer.

"What we can be sure of is that we will see the continuous needs, or ever-growing demands, for more performance capabilities in order to solve the unsolvable. Once this is solved, we will find the new unsolvable."

By all accounts, the exascale barrier will likely fall within the next two years, but the HPC industry will then turn its attention to the next objective, because the work is never done.

Some might point to quantum computers, which approach problem solving in an entirely different way to classical machines (exploiting symmetries to speed up processing), allowing for far greater scale. However, there are also problems to which quantum computing cannot be applied.

"Mid-term (10-year) prospects for quantum computing are starting to shape up, as are other technologies. These will be more specialized, where a quantum computer will very likely show up as an application accelerator for problems that relate to logistics first. They won't completely replace the need for current architectures for IT/data processing," explained Gorda.

As Mansoor puts it, on certain problems "even a small quantum computer can be exponentially faster than all of the classical computing power on Earth combined. Yet on other problems, a quantum computer could be slower than a pocket calculator."

The next logical landmark for traditional computing, then, would be one zettaFLOPS, equal to 1,000 exaFLOPS or 1,000,000 petaFLOPS.

Chinese researchers predicted in 2018 that the first zettascale system will come online in 2035, paving the way for new computing paradigms. The paper itself reads like science fiction, at least for the layman:

"To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators are likely to co-exist to boost HPC in a joint effort. Enabled by new interconnect materials such as photonic crystal, fully optical interconnecting systems may come into use."

Assuming one exaFLOPS is reached by 2022, 14 years will have elapsed between the creation of the first petascale and first exascale systems. The first terascale machine, meanwhile, was constructed in 1996, 12 years before the petascale barrier was breached.

If this pattern were to continue, the Chinese researchers' estimate would look relatively sensible, but there are firm question marks over the validity of zettascale projections.

While experts are confident in their predicted exascale timelines, none would venture a guess at when zettascale might arrive without prefacing their estimate with a long list of caveats.

"Is that an interesting subject? Because to be honest with you, it's so not obtainable. To imagine how we could go 1,000x beyond [one exaFLOPS] is not a conversation anyone could have, unless they're just making it up," said Calleja when asked about the concept of zettascale.

Others were more willing to theorize, but equally reticent to guess at a specific timeline. According to Grant, the way zettascale machines process information will be unlike any supercomputer in existence today.

"[Zettascale systems] will be data-centric, meaning components will move to the data rather than the other way around, as data volumes are likely to be so large that moving data will be too expensive. Regardless, predicting what they might look like is all guesswork for now," he said.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by the SETI Institute).

As noted by Saurabh Vij, CEO of distributed supercomputing firm Q Blocks, decentralized systems address a number of problems facing the HPC industry today, namely surrounding building and maintenance costs. They are also accessible to a much wider range of users and therefore democratize access to supercomputing resources in a way that is not otherwise possible.

"There are benefits to a centralized architecture, but the cost and maintenance barrier overshadows them. [Centralized systems] also alienate a large base of customer groups that could benefit," he said.

"We think a better way is to connect distributed nodes together in a reliable and secure manner. It wouldn't be too aggressive to say that, five years from now, your smartphone could be part of a giant distributed supercomputer, making money for you while you sleep by solving computational problems for industry," he added.

However, incentivizing network nodes to remain active for a long period is challenging and a high rate of turnover can lead to reliability issues. Network latency and capacity problems would also need to be addressed before distributed supercomputing can rise to prominence.

Ultimately, the difficulty in making firm predictions about zettascale lies in the massive chasm that separates present-day workloads and HPC architectures from those that might exist in the future. From a contemporary perspective, it's fruitless to imagine what might be made possible by a computer so powerful.

We might imagine zettascale machines will be used to process workloads similar to those tackled by modern supercomputers, only more quickly. But it's possible, even likely, that the arrival of zettascale computing will open doors that do not and cannot exist today, so extraordinary is the leap.

In a future in which computers are 2,000+ times as fast as the most powerful machine today, philosophical and ethical debates surrounding the intelligence of man versus machine are bound to be played out in greater detail - and with greater consequence.

It is impossible to directly compare the workings of a human brain with that of a computer - i.e. to assign a FLOPS value to the human mind. However, it is not insensible to ask how many FLOPS must be achieved before a machine reaches a level of performance that might be loosely comparable to the brain.

Back in 2013, scientists used the K supercomputer to conduct a neuronal network simulation using open source simulation software NEST. The team simulated a network made up of 1.73 billion nerve cells connected by 10.4 trillion synapses.

While ginormous, the simulation represented only 1% of the human brain's neuronal network and took 40 minutes to replicate 1 second's worth of neuronal network activity.

However, the K computer reached a maximum computational power of only 10 petaFLOPS. A basic extrapolation (ignoring inevitable complexities), then, would suggest Fugaku could simulate circa 40% of the human brain, while a zettascale computer would be capable of performing a full simulation many times over.
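
That extrapolation is nothing more than proportional scaling from the K computer run, as the rough sketch below shows; it deliberately ignores memory, communication and every biological complexity the article alludes to:

K_PETAFLOPS = 10          # the K computer, as used in the 2013 NEST run
K_BRAIN_FRACTION = 0.01   # that run covered about 1% of the brain's network

def brain_fraction(petaflops):
    """Naive linear scaling of simulated brain fraction with machine speed."""
    return K_BRAIN_FRACTION * petaflops / K_PETAFLOPS

print(brain_fraction(415.5))   # Fugaku's HPL score -> ~0.42, roughly 40% of a brain
print(brain_fraction(1e6))     # a zettascale machine (1,000,000 petaFLOPS) -> ~1,000 brains' worth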

Digital neuromorphic hardware (supercomputers created specifically to simulate the human brain) like SpiNNaker 1 and 2 will also continue to develop in the post-exascale future. Instead of sending information from point A to B, these machines will be designed to replicate the parallel communication architecture of the brain, sending information simultaneously to many different locations.

Modern iterations are already used to help neuroscientists better understand the mysteries of the brain and future versions, aided by advances in artificial intelligence, will inevitably be used to construct a faithful and fully-functional replica.

The ethical debates that will arise with the arrival of such a machine - surrounding the perception of consciousness, the definition of thought and what an artificial uber-brain could or should be used for - are manifold and could take generations to unpick.

The inability to foresee what a zettascale computer might be capable of is also an inability to plan for the moral quandaries that might come hand-in-hand.

Whether a future supercomputer might be powerful enough to simulate human-like thought is not in question, but whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.

Continued here:

I confess, I'm scared of the next generation of supercomputers - TechRadar

Bradykinin Hypothesis of COVID-19 Offers Hope for Already-Approved Drugs – BioSpace

A group of researchers at Oak Ridge National Lab in Tennessee used the Summit supercomputer, the second-fastest in the world, to analyze data on more than 40,000 genes from 17,000 genetic samples related to COVID-19. The analysis took more than a week and analyzed 2.5 billion genetic combinations. And it came up with a new theory, dubbed the bradykinin hypothesis, on how COVID-19 affects the body.

Daniel Jacobson, a computational systems biologist at Oak Ridge, noted that the expression of genes for significant enzymes in the renin-angiotensin system (RAS), which is involved in blood pressure regulation and fluid balance, was abnormal. He then tracked the abnormal RAS in the lung fluid samples to the kinin cascade, which is an inflammatory pathway closely regulated by the RAS.

In the kinin system, bradykinin, which is a key peptide, causes blood vessels to leak, allowing fluid to accumulate in organs and tissue. And in COVID-19 patients, this system was unbalanced. People with the disease had increased gene expression for the bradykinin receptors and for enzymes known as kallikreins that activate the kinin pathway.

Jacobson and his team published the research in the journal eLife. They believe that it explains many aspects of COVID-19 that were previously not understood, including why there is an abnormal accumulation of fluid in patients' lungs.

According to the research, SARS-CoV-2 infection typically starts when the virus enters the body via ACE2 receptors in the nose, where they are common. The virus then moves through the body, infiltrating cells that also carry ACE2, including in the intestines, kidneys and heart. This is consistent with some of COVID-19's cardiac and gastrointestinal symptoms.

But the virus does not appear to stop there. Instead, it takes over the body's systems, upregulating ACE2 receptors in cells and tissues where they're not normally common, including the lungs. Or, as Thomas Smith writes in Medium, "COVID-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don't just take your stuff; they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently."

The final result of all this is what is being called a bradykinin storm. When the virus affects the RAS, the way the body regulates bradykinin runs amok: bradykinin receptors are resensitized and the body stops breaking down bradykinin, which is normally degraded by ACE. The researchers believe it is this bradykinin storm that is responsible for many of COVID-19's deadliest symptoms.

The researchers wrote that the pathology of COVID-19 "is likely the result of Bradykinin Storms rather than cytokine storms," which have been observed in COVID-19 patients, but that the two may be intricately linked.

Another researcher, Frank van de Veerdonk, an infectious disease researcher at the Radboud University Medical Center in the Netherlands, had made similar observations in mid-March. In April, he and his research team theorized that a dysregulated bradykinin system was causing leaky blood vessels in the lungs, a potential cause of the excess fluid accumulation.

Josef Penninger, director of the Life Sciences Institute at the University of British Columbia in Vancouver, who identified ACE2 as the essential in vivo receptor for SARS, told The Scientist that he believes bradykinin plays a role in COVID-19: "It does make a lot of sense." Jacobson's study supports the hypothesis, he added, but additional research is needed for confirmation. "Gene expression signatures don't tell us the whole story. I think it is very important to actually measure the proteins."

Another aspect of Jacobson's study is that, via another pathway, COVID-19 increases production of hyaluronic acid (HA) in the lungs. HA is common in soaps and lotions because it can absorb more than 1,000 times its weight in fluid. Combine the fluid leaking into the lungs with the increased HA and the result is a hydrogel in the lungs of some COVID-19 patients, which Jacobson describes as "like trying to breathe through Jell-O."

This provides a possible explanation for why ventilators have been less effective in severe COVID-19 than physicians originally expected. "It reaches a point," Jacobson says, "where regardless of how much oxygen you pump in, it doesn't matter, because the alveoli in the lungs are filled with this hydrogel. The lungs become like a water balloon."

The bradykinin hypothesis also explains why about 20% of COVID-19 patients have heart damage, because the RAS controls aspects of cardiac contractions and blood pressure. It also fits with COVID-19's neurological effects, such as dizziness, seizures, delirium and stroke, which are seen in as many as 50% of hospitalized patients. Research from France has identified leaky blood vessels in the brains of COVID-19 patients, and at high doses bradykinin can break down the blood-brain barrier.

On the positive side, the research suggests that drugs targeting components of the RAS that are already FDA-approved for other diseases might be effective in treating COVID-19. Some, such as danazol (used to treat endometriosis, fibrocystic breast disease and hereditary angioedema), stanozolol (an anabolic steroid derived from testosterone) and ecallantide (marketed as Kalbitor for hereditary angioedema (HAE) and the prevention of blood loss in cardiothoracic surgery), decrease bradykinin production. Icatibant, also used to treat HAE and marketed as Firazyr, decreases bradykinin signaling and could blunt the peptide's effects once it is in the body. Vitamin D may also be useful, because it is involved in the RAS and may reduce levels of REN (renin), another component of the system.

The researchers note that the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.

More here:

Bradykinin Hypothesis of COVID-19 Offers Hope for Already-Approved Drugs - BioSpace

Stranger than fiction? Why we need supercomputers – TechHQ

In 2001: A Space Odyssey, the main villain is a supercomputer named HAL 9000 that was responsible for the death of Discovery One's crew.

Need some help remembering Douglas Rain's chilling voice as the sentient computer?

Even though HAL 9000 met with a slow, painful death by disconnection, it remains one of the most iconic supercomputers on screen and in fiction. The villainous system's display of humanity in its last moments, singing the lullaby "Daisy Bell," urges viewers to recognize the strong sense of self the machine possesses. In the real world, however, supercomputers are far less sentimental, even if they are not far off in terms of data processing and problem-solving ability.

What truly separates supercomputers from your not-so-super computers is the way they process a workload. Supercomputers fundamentally adopt a technique called parallel processing, which uses multiple compute resources at once to solve a computational problem. In contrast, our regular computers rely on serial computing, which works through a computational problem one step at a time, in sequence.
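
A toy example makes the distinction concrete. The Python sketch below runs the same arbitrary, CPU-bound chunk of work first one task at a time and then spread across worker processes; the workload itself is a stand-in, not anything a real supercomputer would run:

from concurrent.futures import ProcessPoolExecutor
import math, time

def work(n):
    # Stand-in compute kernel: a pile of floating point operations
    return sum(math.sqrt(i) for i in range(n))

tasks = [2_000_000] * 8

if __name__ == "__main__":
    t0 = time.perf_counter()
    serial = [work(n) for n in tasks]              # serial: one task after another
    t1 = time.perf_counter()
    with ProcessPoolExecutor() as pool:            # parallel: tasks spread over CPU cores
        parallel = list(pool.map(work, tasks))
    t2 = time.perf_counter()
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")

Supercomputers push the same idea to hundreds of thousands of nodes, with very fast interconnects handling the coordination.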

For a sense of just how powerful these systems are, supercomputers are frequently used for simulating reality, including astronomical events like two galaxies colliding or predicting how a nuclear attack would play out.

Now, scaling it down from the fate of the universe, supercomputers are also used for enterprise-wide applications.

Over the years, the power of supercomputers in simulating reality has given humankind a better ability to make predictions and to improve product designs. In manufacturing, this ability means users can test countless product designs to discern which prototypes are best suited to the real world. In this sense, supercomputing significantly slashes the amount of physical testing needed and helps organizations get products to market quicker, allowing them to seize opportunities to lead in their respective markets and gain extra profit.

Jack Dongarra, a leading supercomputer expert, noted that the industrial use of supercomputers is widespread. "Industry gets it. They are investing in high-performance computers to be more competitive and to gain an edge on their competition. And they feel that money is well spent. They are investing in these things to help drive their products and innovation, their bottom line, their productivity, and their profitability," Dongarra said.

Supercomputers are also helping scientists and researchers develop new life-saving medicines. Presently, supercomputers all over the world are united around a singular goal: the research and development of a COVID-19 vaccine.

Equipped with the capabilities of supercomputers, researchers gain unique opportunities to explore the structure and behavior of the infamous virus at the molecular level. Since a supercomputer can simulate a myriad of interactions between the virus and human cells, researchers are able to forecast the spread of the disease and search for promising treatments or vaccine candidates.

Japan's Fugaku supercomputer, located at the RIKEN Center for Computational Science in Kobe, was recently crowned the world's fastest. Around 3,000 researchers use it to search for and model new drugs, to study weather and natural disaster scenarios, and even to probe the fundamental laws of physics and nature. Recently, researchers have been experimenting with using Fugaku for COVID-19 research into diagnostics, therapeutics, and simulations that replicate the spread patterns of the virus.

"Fugaku was developed based on the idea of achieving high performance on a variety of applications of great public interest [...] and we are very happy that it has shown itself to be outstanding on all the major supercomputer benchmarks," Satoshi Matsuoka, director of the RIKEN Center, said. "I hope that the leading-edge IT developed for it will contribute to major advances on difficult social challenges such as COVID-19."

In IBM's company blog, the director of IBM Research, Dario Gil, writes: "The supercomputers will run myriad calculations in epidemiology, bioinformatics, and molecular modeling, in a bid to drastically cut the time of discovery of new molecules that could lead to a vaccine."

A supercomputer's parallel computing makes it uniquely suited to screening a deluge of data and, at its core, to solving complex problems that require a lot of number-crunching. Erik Lindahl, a professor of biophysics, shared that, to date, supercomputers have enabled scientists to see how liquids diffuse around proteins, something no other experimental method is capable of.

"We could not do what we do without computers. The computers enable us to see things that we could never see in experiments otherwise."

While HAL's infamous line "I'm sorry Dave, I'm afraid I can't do that" left viewers to debate whether HAL was truly evil or just obeying orders, perhaps it's time we bring this conversation back to life and focus on the extraordinary capabilities of real supercomputers.

View post:

Stranger than fiction? Why we need supercomputers - TechHQ

Google Says It Just Ran The First-Ever Quantum Simulation of a Chemical Reaction – ScienceAlert

Of the many high expectations we have of quantum technology, one of the most exciting has to be the ability to simulate chemistry on an unprecedented level. Now we have our first glimpse of what that might look like.

Together with a team of collaborators, the Google AI Quantum team has used its 54-qubit quantum processor, Sycamore, to simulate changes in the configuration of a molecule called diazene.

As far as chemical reactions go, it's one of the simplest ones we know of. Diazene is little more than a couple of nitrogens linked in a double bond, each towing a hydrogen atom.

However, the quantum computer accurately described changes in the positions of the hydrogens to form different diazene isomers. The team also used the system to arrive at an accurate description of the binding energy of hydrogen in increasingly long chains.

As straightforward as these two models may sound, there's a lot going on under the hood. Forget the formulaic chemical reactions from your school textbooks - on the level of quantum mechanics, chemistry is a complicated mix of possibilities.

In some ways, it's the difference between knowing a casino will always make a profit and predicting the outcomes of the individual games being played inside. For classical computers, restricted to predictable rules, representing the infinite combinations of dice rolls and royal flushes of quantum physics has been just too hard.

Quantum computers, on the other hand, are constructed around these very same principles of quantum probability that govern chemistry on a fundamental level.

Logical units called qubits exist in a fuzzy state of 'either/or'. When combined with the 'maybe' states of other qubits in a system, this gives computer engineers a unique way to carry out computations.

Algorithms specially formulated to take advantage of these quantum mechanics allow for shortcuts, reducing to minutes calculations that would take a classical supercomputer thousands of years of grinding.

If we're to have a hope of modelling chemistry on a quantum level, we're going to need that kind of power, and some.

Just calculating the sum of actions that determine the energy in a molecule of propane would hypothetically take a supercomputer more than a week. But there's a world of difference between a snapshot of a molecule's energy and calculating all the ways it might change.

The diazene simulation used 12 of the 54 qubits in the Sycamore processor to perform its calculations. This in itself was still twice the size of any previous attempts at chemistry simulations.

The team also pushed the limits of an algorithm that marries classical with quantum processes, one designed to iron out the errors that arise all too easily in the delicate world of quantum computing.
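
To give a flavour of that classical-plus-quantum loop, here is a generic variational-eigensolver-style toy, not Google's actual circuits or error-mitigation scheme, with a tiny made-up Hamiltonian: a classical optimizer repeatedly asks for the energy of a parameterized trial state, which on real hardware would be estimated by sampling qubits, and nudges the parameters toward the minimum.

import numpy as np
from scipy.optimize import minimize

# A made-up single-qubit "molecular" Hamiltonian, H = 0.5*Z + 0.3*X
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X

def trial_state(theta):
    # Parameterized ansatz: a rotation applied to |0>
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(params):
    # On a quantum processor this expectation value would come from repeated measurements
    psi = trial_state(params[0])
    return float(psi @ H @ psi)

result = minimize(energy, x0=[0.1], method="Nelder-Mead")   # the classical half of the loop
print(f"variational energy {result.fun:.6f} vs exact {np.linalg.eigvalsh(H)[0]:.6f}")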

It all adds up to possibilities of increasingly bigger simulations in the future, helping us design more robust materials, sift out more effective pharmaceuticals, and even unlock more secrets of our Universe's quantum casino.

Diazene's wandering hydrogens are just the start of the kinds of chemistry we might soon be able to model in a quantum landscape.

This research was published in Science.

Go here to see the original:

Google Says It Just Ran The First-Ever Quantum Simulation of a Chemical Reaction - ScienceAlert

This Equation Calculates the Chances We Live in a Computer Simulation – Discover Magazine

The Drake equation is one of the more famous reckonings in science. It calculates the likelihood that we are not alone in the universe by estimating the number of other intelligent civilizations in our galaxy that might exist now.

Some of the terms in this equation are well known or becoming better understood, such as the number of stars in our galaxy and the proportion that have planets in the habitable zone. But others are unknown, such as the proportion of planets that develop intelligent life; and some may never be known, such as the proportion that destroy themselves before they can be discovered.

Nevertheless, the Drake equation allows scientists to place important bounds on the numbers of intelligent civilizations that might be out there.
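
For reference, the Drake equation is just a product of seven factors, N = R* x fp x ne x fl x fi x fc x L, and evaluating it is trivial once values are chosen. In the sketch below every input is a placeholder chosen for illustration, not an estimate:

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Placeholder inputs: star formation rate, fraction of stars with planets, habitable planets
# per system, then the deeply uncertain biological and social fractions and lifetime in years.
print(drake(1.5, 1.0, 2.0, 0.1, 0.01, 0.1, 10_000))   # -> 3.0 civilizations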

However, there is another sense in which humanity could be linked with an alien intelligence: our world may just be a simulation inside a massively powerful supercomputer run by such a species. Indeed, various scientists, philosophers and visionaries have said that the probability of such a scenario could be close to one. In other words, we probably are living in a simulation.

The accuracy of these claims is somewhat controversial. So a better way to determine the probability that we live in a simulation would be much appreciated.

Enter Alexandre Bibeau-Delisle and Gilles Brassard at the University of Montreal in Canada. These researchers have derived a Drake-like equation that calculates the chances that we live in a computer simulation. And the results throw up some counterintuitive ideas that are likely to change the way we think about simulations, how we might determine whether we are in one and whether we could ever escape.

Bibeau-Delisle and Brassard begin with a fundamental estimate of the computing power available to create a simulation. They say, for example, that a kilogram of matter, fully exploited for computation, could perform 10^50 operations per second.

By comparison, the human brain, which is also kilogram-sized, performs up to 10^16 operations per second. "It may thus be possible for a single computer the mass of a human brain to simulate the real-time evolution of 1.4 x 10^25 virtual brains," they say.

In our society, a significant number of computers already simulate entire civilizations, in games such as Civilization VI, Hearts of Iron IV, Humankind and so on. So it may be reasonable to assume that in a sufficiently advanced civilization, individuals will be able to run games that simulate societies like ours, populated with sentient conscious beings.

So an interesting question is this: of all the sentient beings in existence, what fraction are likely to be simulations? To derive the answer, Bibeau-Delisle and Brassard start with the total number of real sentient beings, NRe; multiply that by the fraction with access to the necessary computing power, fCiv; multiply this by the fraction of that power that is devoted to simulating consciousness, fDed (because these beings are likely to be using their computers for other purposes, too); and then multiply this by the number of brains each such computer could simulate, RCal.

The resulting equation is this, where fSim is the fraction of simulated brains:
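
Spelled out from the definitions above (this is a reconstruction of the formula, obtained by dividing the simulated brains by the simulated and real brains combined):

fSim = (NRe x fCiv x fDed x RCal) / (NRe x fCiv x fDed x RCal + NRe) = (fCiv x fDed x RCal) / (fCiv x fDed x RCal + 1)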

Here RCal is the huge number of brains that fully exploited matter should be able to simulate.

The sheer size of this number, ~10^25, pushes Bibeau-Delisle and Brassard toward an inescapable conclusion. "It is mathematically inescapable from [the above] equation and the colossal scale of RCal that fSim ≈ 1 unless fCiv x fDed ≈ 0," they say.
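
Plugging numbers in shows how lopsided the equation is; unless the product fCiv x fDed is almost exactly zero, fSim sits essentially at 1. The sample values below are arbitrary illustrations:

R_CAL = 1.4e25   # simulable brains per brain-sized computer, from the paper's estimate

def f_sim(f_civ, f_ded, r_cal=R_CAL):
    simulated_per_real = f_civ * f_ded * r_cal
    return simulated_per_real / (simulated_per_real + 1.0)

for f_civ, f_ded in [(0.1, 0.01), (1e-10, 1e-10), (1e-26, 1.0)]:
    print(f"fCiv={f_civ:g}, fDed={f_ded:g} -> fSim={f_sim(f_civ, f_ded):.6f}")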

So there are two possible outcomes. Either we live in a simulation or a vanishingly small proportion of advanced computing power is devoted to simulating brains.

It's not hard to imagine why the second option might be true. "A society of beings similar to us (but with a much greater technological development) could indeed decide it is not very ethical to simulate beings with enough precision to make them conscious while fooling them and keeping them cut off from the real world," say Bibeau-Delisle and Brassard.

Another possibility is that advanced civilizations never get to the stage where their technology is powerful enough to perform these kinds of computations. Perhaps they destroy themselves through war or disease or climate change long before then. There is no way of knowing.

But suppose we are in a simulation. Bibeau-Delisle and Brassard ask whether we might escape while somehow hiding our intentions from our overlords. They assume that the simulating technology would be quantum in nature. "If quantum phenomena are as difficult to compute on classical systems as we believe them to be, a simulation containing our world would most probably run on quantum computing power," they say.

This raises the possibility that we could detect our alien overlords, since they cannot measure the quantum nature of our world without revealing their presence. Quantum cryptography relies on the same principle; indeed, Brassard is one of the pioneers of that technology.

That would seem to make it possible for us to make encrypted plans that are hidden from the overlords, such as secretly transferring ourselves into our own simulations.

However, the overlords have a way to foil this. All they need to do is rewire their simulation to make it look as if we are able to hide information, even though they are aware of it all the time. "If the simulators are particularly angry at our attempted escape, they could also send us to a simulated hell, in which case we would at least have the confirmation we were truly living inside a simulation and our paranoia was not unjustified...," conclude Bibeau-Delisle and Brassard, with their tongues firmly in their cheeks.

In that sense, we are the ultimate laboratory guinea pigs: forever trapped and forever fooled by the evil genius of our omnipotent masters.

Time for another game of Civilization VI.

Ref: Probability and Consequences of Living Inside a Computer Simulation. arxiv.org/abs/2008.09275

Read more:

This Equation Calculates the Chances We Live in a Computer Simulation - Discover Magazine