

Supermicro Details Its Hardware for MN-3, the Most Efficient Supercomputer in the World – HPCwire

In June, HPCwire highlighted the new MN-3 supercomputer: a 1.6 Linpack petaflops system delivering 21.1 gigaflops per watt of power, making it the most energy-efficient supercomputer in the world, at least according to the latest Green500 list, the Top500's energy-conscious cousin. The system was built by Preferred Networks, a Japanese AI startup that used its in-house MN-Core accelerator to help deliver the MN-3's record-breaking efficiency. Collaborating with Preferred Networks was modular system manufacturer Supermicro, which detailed the hardware and processes behind the chart-topping green giant in a recent report.

As Supermicro tells it, Preferred Networks was facing challenges on two fronts: first, the need for a much more powerful system to solve its clients' deep learning problems; and second, the exorbitant operating costs of the system they were envisioning. "With increasing power costs, a large system of the size PFN was going to need, the operating costs of both the power and associated cooling would exceed the budget that was allocated," Supermicro wrote. "Therefore, the energy efficiency of the new solution would have to be designed into the system, and not become an afterthought."

Preferred Networks turned to partnerships to help resolve these problems. First, they worked with researchers at Kobe University to develop the MN-Core accelerator, specializing it for deep learning training processes and optimizing it for energy efficiency. After successfully benchmarking the MN-Core above one teraflop per watt in testing, the developers turned to the rest of the system, and that's where Supermicro entered the picture.

On a visit to Japan, Clay Chen, general manager of global business development at Supermicro, sat down with Preferred Networks to hear what they needed.

"At first I was asking them, you know, what type of GPU they are using," Chen said in an interview with HPCwire. "They say, oh, no, we're not using any type; we're going to develop our own GPU. And that was quite fascinating to me."

Preferred Networks selected Supermicro for the daunting task: fitting four MN-Core boards, two Intel Xeon Platinum CPUs, up to 6TB of DDR4 memory and Intel Optane persistent memory modules in a single box without sacrificing the energy efficiency of the system.

Supermicro based its design on one of its preexisting GPU server models that was designed to house multiple GPUs (or other accelerators) and high-speed interconnects. Working with Preferred Networks engineers, Supermicro ran simulations to determine the optimal chassis design and component arrangement to ensure that the MN-Core accelerators would be sufficiently cooled and efficiency could be retained.

Somewhat surprisingly, the custom server is entirely fan-cooled. "Our concept is: if we can design something with fan cooling, why would we want to use liquid cooling?" Chen said. "Because essentially, all the heat being pulled out from the liquid is going to cool somewhere. When you take the heat outside the box, you still need to cool the liquid with a fan."

The end result, a customized Supermicro server just for Preferred Networks, is pictured below.

The server's four MN-Core boards are connected to PCIe x16 slots on a Supermicro motherboard and to the MN-Core Direct Connect board that enables high-speed communication between the MN-Core boards.

These custom servers, each 7U high, were then rack-mounted into what would become the MN-3 supercomputer: 48 servers, four interconnect nodes and five 100GbE switches. In total, the system's 2,080 CPU cores, delivering 1,621 Linpack teraflops of performance, required just 77 kW of power for the Top500 benchmarking run. That efficiency is only about 15 percent short of what would be needed to deliver an exaflops within the 40-megawatt power envelope targeted by planned exascale systems like Aurora, Frontier and El Capitan.
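
As a rough check of those figures (the arithmetic here is ours, not Supermicro's): 1,621 teraflops is 1,621,000 gigaflops, and 1,621,000 / 77,000 watts ≈ 21 gigaflops per watt. An exaflops delivered within a 40 MW envelope would require 1,000,000,000 gigaflops / 40,000,000 watts = 25 gigaflops per watt, and 21.1 / 25 ≈ 0.84, which is where the roughly 15 percent shortfall comes from.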

"We are very pleased to have partnered with Supermicro, who worked with us very closely to build MN-3, which was recognized as the world's most energy-efficient supercomputer," said Yusuke Doi, VP of computing infrastructure at Preferred Networks. "We can deliver outstanding performance while using a fraction of the power that was previously required for such a large supercomputer."


I confess, I’m scared of the next generation of supercomputers – TechRadar

Earlier this year, a Japanese supercomputer built on Arm-based Fujitsu A64FX processors snatched the crown of world's fastest machine, blowing incumbent leader IBM Summit out of the water.

Fugaku, as the machine is known, achieved 415.5 petaFLOPS by the popular High Performance Linpack (HPL) benchmark, which is almost three times the score of the IBM machine (148.5 petaFLOPS).

It also topped the rankings for Graph 500, HPL-AI and HPCG workloads - a feat never before achieved in the world of high performance computing (HPC).

Modern supercomputers are now edging ever-closer to the landmark figure of one exaFLOPS (equal to 1,000 petaFLOPS), commonly described as the exascale barrier. In fact, Fugaku itself can already achieve one exaFLOPS, but only in lower precision modes.

The consensus among the experts we spoke to is that a single machine will breach the exascale barrier within the next 6 - 24 months, unlocking a wealth of possibilities in the fields of medical research, climate forecasting, cybersecurity and more.

But what is an exaFLOPS? And what will it mean to break the exascale milestone, pursued doggedly for more than a decade?

To understand what it means to achieve exascale computing, it's important to first understand what is meant by FLOPS, which stands for floating point operations per second.

A floating point operation is any mathematical calculation (i.e. addition, subtraction, multiplication or division) that involves a number containing a decimal (e.g. 3.0 - a floating point number), as opposed to a number without a decimal (e.g. 3 - an integer). Calculations involving decimals are typically more complex and therefore take longer to solve.

An exascale computer can perform 10^18 (one quintillion, or 1,000,000,000,000,000,000) of these mathematical calculations every second.

For context, to equal the number of calculations an exascale computer can process in a single second, an individual would have to perform one sum every second for 31,688,765,000 years.
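
The arithmetic behind that figure (ours, not the article's): a year is roughly 3.156 x 10^7 seconds, so working through 10^18 sums at one per second takes 10^18 / (3.156 x 10^7) ≈ 3.17 x 10^10, or about 31.7 billion, years.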

The PC I'm using right now, meanwhile, is able to reach 147 billion FLOPS (or 0.00000014723 exaFLOPS), outperforming the fastest supercomputer of 1993, the Intel Paragon (143.4 billion FLOPS).
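
For readers who want to reproduce that kind of back-of-the-envelope figure on their own machine, here is a minimal Python sketch (our illustration, not something from the article) that times one dense matrix multiplication with NumPy and converts it into FLOPS; the 2 * n**3 operation count is the standard estimate for multiplying two n-by-n matrices.

# Minimal sketch: estimate the FLOPS this machine sustains on one
# dense matrix multiplication (assumes NumPy is installed).
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # roughly 2 * n**3 floating point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} gigaFLOPS ({flops / 1e18:.2e} exaFLOPS)")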

This both underscores how far computing has come in the last three decades and puts into perspective the extreme performance levels attained by the leading supercomputers today.

The key to building a machine capable of reaching one exaFLOPS is optimization at the processing, storage and software layers.

The hardware must be small and powerful enough to pack together and reach the necessary speeds, the storage capacious and fast enough to serve up the data and the software scalable and programmable enough to make full use of the hardware.

For example, there comes a point at which adding more processors to a supercomputer will no longer affect its speed, because the application is not sufficiently optimized. The only way governments and private businesses will realize a full return on HPC hardware investment is through an equivalent investment in software.

Organizations such as the Exascale Computing Project (ECP) and the ExCALIBUR programme are interested in solving precisely this problem. Those involved claim a renewed focus on algorithm and application development is required in order to harness the full power and scope of exascale.

Achieving the delicate balance between software and hardware, in an energy efficient manner and avoiding an impractically low mean time between failures (MTBF) score (the time that elapses before a system breaks down under strain) is the challenge facing the HPC industry.

"15 years ago as we started the discussion on exascale, we hypothesized that it would need to be done in 20 megawatts (MW); later that was changed to 40 MW. With Fugaku, we see that we are about halfway to a 64-bit exaFLOPS at the 40 MW power envelope, which shows that an exaFLOPS is in reach today," explained Brent Gorda, Senior Director HPC at UK-based chip designer Arm.

"We could hit an exaFLOPS now with sufficient funding to build and run a system. [But] the size of the system is likely to be such that MTBF is measured in single digit number-of-days based on today's technologies and the number of components necessary to reach these levels of performance."

When it comes to building a machine capable of breaching the exascale barrier, there are a number of other factors at play, beyond technological feasibility. An exascale computer can only come into being once an equilibrium has been reached at the intersection of technology, economics and politics.

"One could in theory build an exascale system today by packing in enough CPUs, GPUs and DPUs. But what about economic viability?" said Gilad Shainer of NVIDIA Mellanox, the firm behind the InfiniBand technology (the fabric that links the various hardware components) found in seven of the ten fastest supercomputers.

Improvements in computing technologies, silicon processing, more efficient use of power and so on all help to increase efficiency and make exascale computing an economic objective as opposed to a sort of sporting achievement.

According to Paul Calleja, who heads up computing research at the University of Cambridge and is working with Dell on the Open Exascale Lab, Fugaku is an excellent example of what is theoretically possible today, but is also impractical by virtually any other metric.

"If you look back at Japanese supercomputers, historically there's only ever been one of them made. They have beautifully exquisite architectures, but they're so stupidly expensive and proprietary that no one else could afford one," he told TechRadar Pro.

"[Japanese organizations] like these really large technology demonstrators, which are very useful in industry because it shows the direction of travel and pushes advancements, but those kinds of advancements are very expensive and not sustainable, scalable or replicable."

So, in this sense, there are two separate exascale landmarks: the theoretical barrier, which will likely be met first by a machine of Fugaku's ilk (a technological demonstrator), and the practical barrier, which will see exascale computing deployed en masse.

Geopolitical factors will also play a role in how quickly the exascale barrier is breached. Researchers and engineers might focus exclusively on the technological feat, but the institutions and governments funding HPC research are likely motivated by different considerations.

"Exascale computing is not just about reaching theoretical targets, it is about creating the ability to tackle problems that have been previously intractable," said Andy Grant, Vice President HPC & Big Data at IT services firm Atos, influential in the fields of HPC and quantum computing.

"Those that are developing exascale technologies are not doing it merely to have the fastest supercomputer in the world, but to maintain international competitiveness, security and defence."

"In Japan, their new machine is roughly 2.8x more powerful than the now-second place system. In broad terms, that will enable Japanese researchers to address problems that are 2.8x more complex. In the context of international competitiveness, that creates a significant advantage."

In years gone by, rival nations fought it out in the trenches or competed to see who could place the first human on the moon. But computing may well become the frontier at which the next arms race takes place; supremacy in the field of HPC might prove just as politically important as military strength.

Once exascale computers become an established resource - available for businesses, scientists and academics to draw upon - a wealth of possibilities will be unlocked across a wide variety of sectors.

HPC could prove revelatory in the fields of clinical medicine and genomics, for example, which require vast amounts of compute power to conduct molecular modelling, simulate interactions between compounds and sequence genomes.

In fact, IBM Summit and a host of other modern supercomputers are being used to identify chemical compounds that could contribute to the fight against coronavirus. The Covid-19 High Performance Computing Consortium assembled 16 supercomputers, accounting for an aggregate of 330 petaFLOPS - but imagine how much more quickly research could be conducted using a fleet of machines capable of reaching 1,000 petaFLOPS on their own.

Artificial intelligence, meanwhile, is another cross-disciplinary domain that will be transformed with the arrival of exascale computing. The ability to analyze ever-larger datasets will improve the ability of AI models to make accurate forecasts (contingent on the quality of data fed into the system) that could be applied to virtually any industry, from cybersecurity to e-commerce, manufacturing, logistics, banking, education and many more.

As explained by Rashid Mansoor, CTO at UK supercomputing startup Hadean, the value of supercomputing lies in the ability to make an accurate projection (of any variety).

"The primary purpose of a supercomputer is to compute some real-world phenomenon to provide a prediction. The prediction could be the way proteins interact, the way a disease spreads through the population, how air moves over an aerofoil or how electromagnetic fields interact with a spacecraft during re-entry," he told TechRadar Pro.

"Raw performance such as the HPL benchmark simply indicates that we can model bigger and more complex systems to a greater degree of accuracy. One thing that the history of computing has shown us is that the demand for computing power is insatiable."

Other commonly cited areas that will benefit significantly from the arrival of exascale include brain mapping, weather and climate forecasting, product design and astronomy, but it's also likely that brand new use cases will emerge as well.

"The desired workloads and the technology to perform them form a virtuous circle. The faster and more performant the computers, the more complex problems we can solve and the faster the discovery of new problems," explained Shainer.

"What we can be sure of is that we will see the continuous needs or ever growing demands for more performance capabilities in order to solve the unsolvable. Once this is solved, we will find the new unsolvable."

By all accounts, the exascale barrier will likely fall within the next two years, but the HPC industry will then turn its attention to the next objective, because the work is never done.

Some might point to quantum computers, which approach problem solving in an entirely different way to classical machines (exploiting symmetries to speed up processing), allowing for far greater scale. However, there are also problems to which quantum computing cannot be applied.

"Mid-term (10 year) prospects for quantum computing are starting to shape up, as are other technologies. These will be more specialized, where a quantum computer will very likely show up as an application accelerator for problems that relate to logistics first. They won't completely replace the need for current architectures for IT/data processing," explained Gorda.

As Mansoor puts it, "on certain problems even a small quantum computer can be exponentially faster than all of the classical computing power on earth combined. Yet on other problems, a quantum computer could be slower than a pocket calculator."

The next logical landmark for traditional computing, then, would be one zettaFLOPS, equal to 1,000 exaFLOPS or 1,000,000 petaFLOPS.

Chinese researchers predicted in 2018 that the first zettascale system will come online in 2035, paving the way for new computing paradigms. The paper itself reads like science fiction, at least for the layman:

"To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators are likely to co-exist to boost HPC in a joint effort. Enabled by new interconnect materials such as photonic crystal, fully optical interconnecting systems may come into use."

Assuming one exaFLOPS is reached by 2022, 14 years will have elapsed between the creation of the first petascale and first exascale systems. The first terascale machine, meanwhile, was constructed in 1996, 12 years before the petascale barrier was breached.

If this pattern were to continue, the Chinese researchers' estimate would look relatively sensible, but there are firm question marks over the validity of zettascale projections.

While experts are confident in their predicted exascale timelines, none would venture a guess at when zettascale might arrive without prefacing their estimate with a long list of caveats.

"Is that an interesting subject? Because to be honest with you, it's so not obtainable. To imagine how we could go 1000x beyond [one exaFLOPS] is not a conversation anyone could have, unless they're just making it up," said Calleja, asked about the concept of zettascale.

Others were more willing to theorize, but equally reticent to guess at a specific timeline. According to Grant, the way zettascale machines process information will be unlike any supercomputer in existence today.

"[Zettascale systems] will be data-centric, meaning components will move to the data rather than the other way around, as data volumes are likely to be so large that moving data will be too expensive. Regardless, predicting what they might look like is all guesswork for now," he said.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by the SETI Institute).

As noted by Saurabh Vij, CEO of distributed supercomputing firm Q Blocks, decentralized systems address a number of problems facing the HPC industry today, namely surrounding building and maintenance costs. They are also accessible to a much wider range of users and therefore democratize access to supercomputing resources in a way that is not otherwise possible.

"There are benefits to a centralized architecture, but the cost and maintenance barrier overshadows them. [Centralized systems] also alienate a large base of customer groups that could benefit," he said.

"We think a better way is to connect distributed nodes together in a reliable and secure manner. It wouldn't be too aggressive to say that, 5 years from now, your smartphone could be part of a giant distributed supercomputer, making money for you while you sleep by solving computational problems for industry," he added.

However, incentivizing network nodes to remain active for a long period is challenging and a high rate of turnover can lead to reliability issues. Network latency and capacity problems would also need to be addressed before distributed supercomputing can rise to prominence.

Ultimately, the difficulty in making firm predictions about zettascale lies in the massive chasm that separates present day workloads and HPC architectures from those that might exist in the future. From a contemporary perspective, it's fruitless to imagine what might be made possible by a computer so powerful.

We might imagine zettascale machines will be used to process workloads similar to those tackled by modern supercomputers, only more quickly. But it's possible - even likely - the arrival of zettascale computing will open doors that do not and cannot exist today, so extraordinary is the leap.

In a future in which computers are 2,000+ times as fast as the most powerful machine today, philosophical and ethical debate surrounding the intelligence of man versus machine are bound to be played out in greater detail - and with greater consequence.

It is impossible to directly compare the workings of a human brain with that of a computer - i.e. to assign a FLOPS value to the human mind. However, it is not unreasonable to ask how many FLOPS must be achieved before a machine reaches a level of performance that might be loosely comparable to the brain.

Back in 2013, scientists used the K supercomputer to conduct a neuronal network simulation using open source simulation software NEST. The team simulated a network made up of 1.73 billion nerve cells connected by 10.4 trillion synapses.

While ginormous, the simulation represented only 1% of the human brain's neuronal network and took 40 minutes to replicate 1 second's worth of neuronal network activity.

However, the K computer reached a maximum computational power of only 10 petaFLOPS. A basic extrapolation (ignoring inevitable complexities), then, would suggest Fugaku could simulate circa 40% of the human brain, while a zettascale computer would be capable of performing a full simulation many times over.
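
Spelled out, that naive extrapolation (our arithmetic, ignoring the same complexities) runs: Fugaku's 415.5 petaFLOPS is roughly 41 times the K computer's 10 petaFLOPS, and 41 x 1% ≈ 40% of the brain's neuronal network; a one-zettaFLOPS machine would be 100,000 times the K computer, or on the order of a thousand full-brain simulations by the same crude measure.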

Digital neuromorphic hardware (supercomputers created specifically to simulate the human brain) like SpiNNaker 1 and 2 will also continue to develop in the post-exascale future. Instead of sending information from point A to B, these machines will be designed to replicate the parallel communication architecture of the brain, sending information simultaneously to many different locations.

Modern iterations are already used to help neuroscientists better understand the mysteries of the brain and future versions, aided by advances in artificial intelligence, will inevitably be used to construct a faithful and fully-functional replica.

The ethical debates that will arise with the arrival of such a machine - surrounding the perception of consciousness, the definition of thought and what an artificial uber-brain could or should be used for - are manifold and could take generations to unpick.

The inability to foresee what a zettascale computer might be capable of is also an inability to plan for the moral quandaries that might come hand-in-hand.

Whether a future supercomputer might be powerful enough to simulate human-like thought is not in question, but whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.


Bradykinin Hypothesis of COVID-19 Offers Hope for Already-Approved Drugs – BioSpace

A group of researchers at Oak Ridge National Lab in Tennessee used the Summit supercomputer, the second-fastest in the world, to analyze data on more than 40,000 genes from 17,000 genetic samples related to COVID-19. The analysis took more than a week and analyzed 2.5 billion genetic combinations. And it came up with a new theory, dubbed the bradykinin hypothesis, on how COVID-19 affects the body.

Daniel Jacobson, a computational systems biologist at Oak Ridge, noted that the expression of genes for significant enzymes in the renin-angiotensin system (RAS), which is involved in blood pressure regulation and fluid balance, was abnormal. He then tracked the abnormal RAS in the lung fluid samples to the kinin cascade, which is an inflammatory pathway closely regulated by the RAS.

In the kinin system, bradykinin, which is a key peptide, causes blood vessels to leak, allowing fluid to accumulate in organs and tissue. And in COVID-19 patients, this system was unbalanced. People with the disease had increased gene expression for the bradykinin receptors and for enzymes known as kallikreins that activate the kinin pathway.

Jacobson and his team published the research in the journal eLife. They believe that this research explains many aspects of COVID-19 that were previously not understood, including why there is an abnormal accumulation of fluid in the patients' lungs.

From the research, SARS-CoV-2 infection typically starts when the virus enters the body via ACE2 receptors in the nose, where they are common. The virus then moves through the body, integrating into cells that also have ACE2, including in the intestines, kidneys and heart. This is consistent with some of COVID-19's cardiac and gastrointestinal symptoms.

But the virus does not appear to stop there. Instead, it takes over the body's systems, upregulating ACE2 receptors in cells and tissues where they're not common, including the lungs. Or as Thomas Smith writes in Medium, "COVID-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don't just take your stuff; they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently."

The final result of all this is what is being called a bradykinin storm. When the virus affects the RAS, the way the body regulates bradykinin runs amok, bradykinin receptors are resensitized, and the body stops breaking down bradykinin, which is typically degraded by ACE. They believe it is this bradykinin storm that is responsible for many of COVID-19's deadliest symptoms.

The researchers wrote that the pathology of COVID-19 is likely the result of Bradykinin Storms rather than cytokine storms, which have been observed in COVID-19 patients, but that the two may be intricately linked.

Another researcher, Frank van de Veerdonk, an infectious disease researcher at the Radboud University Medical Center in the Netherlands, had made similar observations in mid-March. In April, he and his research team theorized that a dysregulated bradykinin system was causing leaky blood vessels in the lungs, which was a potential cause of the excess fluid accumulation.

Josef Penninger, director of the Life Sciences Institute at the University of British Columbia in Vancouver, who identified that ACE2 is the essential in vivo receptor for SARS, told The Scientist that he believes bradykinin plays a role in COVID-19: "It does make a lot of sense." And Jacobson's study supports the hypothesis, but additional research is needed for confirmation. "Gene expression signatures don't tell us the whole story. I think it is very important to actually measure the proteins."

Another aspect of Jacobson's study is that, via another pathway, COVID-19 increases production of hyaluronic acid (HLA) in the lungs. HLA is common in soaps and lotions because it absorbs more than 1,000 times its weight in fluid. Taking into consideration fluid leaking into the lungs and increased HLA, it creates a hydrogel in the lungs of some COVID-19 patients, which Jacobson describes as "like trying to breathe through Jell-O."

This provides a possible explanation for why ventilators have been less effective in severe COVID-19 than physicians originally expected. "It reaches a point," Jacobson says, "where regardless of how much oxygen you pump in, it doesn't matter, because the alveoli in the lungs are filled with this hydrogel. The lungs become like a water balloon."

The bradykinin hypothesis also explains why about 20% of COVID-19 patients have heart damage, because RAS controls aspects of cardiac contractions and blood pressure. It also accounts for COVID-19's neurological effects, such as dizziness, seizures, delirium and stroke, which are seen in as many as 50% of hospitalized patients. French research has identified leaky blood vessels in the brains of COVID-19 patients. And at high doses, bradykinin can break down the blood-brain barrier.

On the positive side, their research suggests that drugs that target components of RAS are already FDA approved for other diseases and might be effective in treating COVID-19. Some, such as danazol (used to treat endometriosis, fibrocystic breast disease, and hereditary angioedema), stanozolol (an anabolic steroid derived from testosterone), and ecallantide (marketed as Kalbitor for hereditary angioedema (HAE) and the prevention of blood loss in cardiothoracic surgery), decrease bradykinin production. Icatibant, also used to treat HAE and marketed as Firazyr, decreases bradykinin signaling and could minimize its effects once it's in the body. Vitamin D may potentially be useful, because it is involved in the RAS system and may reduce levels of REN, another compound involved in the system.

The researchers note that the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.


Stranger than fiction? Why we need supercomputers – TechHQ

In 2001: A Space Odyssey, the main villain is a supercomputer named HAL-9000 that was responsible for the death of Discovery One's crew.

Need some help remembering Douglas Rain's chilling voice as the sentient computer?

Even though HAL-9000 met with a slow, painful death by disconnection, it remains one of the most iconic supercomputers on screen and in fiction. The villainous system's display of humanity in its last moments, singing the lullaby "Daisy Bell," urges viewers to recognize the strong sense of self that the machine possesses. However, in the real world, supercomputers are far less sentimental, if not far off in terms of their data processing and problem-solving ability.

What truly separates supercomputers from your not-so-super-computers is the way they process the workload. Supercomputers, fundamentally, adopt a technique called parallel processing that uses multiple compute resources to solve a computational problem. In contrast, our regular computers run on serial computing that solves computational problems one at a time, following a sequence.
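
As a toy illustration of the difference (our example, not TechHQ's), the Python sketch below computes the same chunks of work first serially and then in parallel across a pool of worker processes; supercomputers apply the same idea across many thousands of processors.

# Toy illustration: the same workload computed serially and in parallel.
import math
from multiprocessing import Pool

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]

    serial = sum(chunk_sum(c) for c in chunks)       # one chunk at a time

    with Pool() as pool:                             # chunks spread across cores
        parallel = sum(pool.map(chunk_sum, chunks))

    assert math.isclose(serial, parallel)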

For a sense of just how powerful these systems are, supercomputers are frequently used for simulating reality, including astronomical events like two galaxies colliding or predicting how a nuclear attack would play out.


Now, scaling it down from the fate of the universe, supercomputers are also used for enterprise-wide applications.

Over the years, the power of supercomputers in simulating reality has given humankind a better ability to make predictions or boost product designs. In manufacturing, this ability means users can test out countless product designs to discern which prototypes are best suited to the real world. In this sense, supercomputing significantly slashes the number of physical testing resources and helps organizations get products to market quicker, allowing them to seize opportunities to lead in their respective markets and gain extra profit.

Jack Dongarra, a leading supercomputer expert, noted that the industrial use of supercomputers is widespread: "Industry gets it. They are investing in high-performance computers to be more competitive and to gain an edge on their competition. And they feel that money is well spent. They are investing in these things to help drive their products and innovation, their bottom line, their productivity, and their profitability," Dongarra said.

Supercomputers are also helping scientists and researchers develop new life-saving medicines. Presently, supercomputers all over the world are united around the singular goal of researching and developing a COVID-19 vaccine.

Equipped with the capabilities of supercomputers, researchers gain unique opportunities to explore the structure and behavior of the infamous virus at the molecular level. Since a supercomputer can simulate a myriad of interactions between the virus and human body cells, researchers are able to forecast the spread of the disease and search for promising treatments or vaccine materials.

Japan's Fugaku supercomputer, located at the RIKEN Center for Computational Science in Kobe, was recently crowned the world's fastest. Around 3,000 researchers use it to search for and model new drugs, study weather and natural disaster scenarios, and even probe the fundamental laws of physics and nature. Recently, researchers have been experimenting with using Fugaku for COVID-19 research into diagnostics, therapeutics, and simulations that replicate the spread patterns of the virus.

"Fugaku was developed based on the idea of achieving high performance on a variety of applications of great public interest [...] and we are very happy that it has shown itself to be outstanding on all the major supercomputer benchmarks," Satoshi Matsuoka, director of the RIKEN Center, said. "I hope that the leading-edge IT developed for it will contribute to major advances on difficult social challenges such as COVID-19."

In IBM's company blog, the Director of IBM Research, Dario Gil, writes: "The supercomputers will run myriad calculations in epidemiology, bioinformatics, and molecular modeling, in a bid to drastically cut the time of discovery of new molecules that could lead to a vaccine."

A supercomputer's parallel computing makes it uniquely suited to screen through a deluge of data and, at its core, solve complex problems that require a lot of number-crunching. Erik Lindahl, a professor of biophysics, shared that, to date, supercomputers enable scientists to see how liquids diffuse around the proteins, and no other experimental method is capable of that.

"We could not do what we do without computers. The computers enable us to see things that we could never see in experiments otherwise."

While Hal's infamous line "I'm sorry Dave, I'm afraid I can't do that" left viewers to debate if Hal was truly evil or just obeying orders, perhaps it's time we bring this conversation back to life and focus on the extraordinary capabilities of these supercomputers.


Google Says It Just Ran The First-Ever Quantum Simulation of a Chemical Reaction – ScienceAlert

Of the many high expectations we have of quantum technology, one of the most exciting has to be the ability to simulate chemistry on an unprecedented level. Now we have our first glimpse of what that might look like.

Together with a team of collaborators, the Google AI Quantum team has used their 54-qubit quantum processor, Sycamore, to simulate changes in the configuration of a molecule called diazene.

As far as chemical reactions go, it's one of the simplest ones we know of. Diazene is little more than a couple of nitrogens linked in a double bond, each towing a hydrogen atom.

However, the quantum computer accurately described changes in the positions of hydrogen to form different diazene isomers. The team also used their system to arrive at an accurate description of the binding energy of hydrogen in increasingly bigger chains.

As straight-forward as these two models may sound, there's a lot going on under the hood. Forget the formulaic chemical reactions from your school textbooks - on a level of quantum mechanics, chemistry is a complicated mix of possibilities.

In some ways, it's the difference between knowing a casino will always make a profit, and predicting the outcomes of the individual games being played inside. Restricted to the predictable rules of classical computers, an ability to represent the infinite combinations of dice rolls and royal flushes of quantum physics has been just too hard.

Quantum computers, on the other hand, are constructed around these very same principles of quantum probability that govern chemistry on a fundamental level.

Logical units called qubits exist in a fuzzy state of 'either/or'. When combined with the 'maybe' states of other qubits in a system, it provides computer engineers with a unique way to carry out computations.

Algorithms specially formulated to take advantage of these quantum mechanics allow for shortcuts, reducing down to minutes what would take a classical supercomputer thousands of years of grinding.

If we're to have a hope of modelling chemistry on a quantum level, we're going to need that kind of power, and some.

Just calculating the sum of actions that determine the energy in a molecule of propane would hypothetically take a supercomputer more than a week. But there's a world of difference between a snapshot of a molecule's energy and calculating all the ways it might change.

The diazene simulation used 12 of the 54 qubits in the Sycamore processor to perform its calculations. This in itself was still twice the size of any previous attempts at chemistry simulations.

The team also pushed the limits on an algorithm designed to marry classical with quantum processes, one designed to iron out the errors that arise all too easily in the delicate world of quantum computing.

It all adds up to possibilities of increasingly bigger simulations in the future, helping us design more robust materials, sift out more effective pharmaceuticals, and even unlock more secrets of our Universe's quantum casino.

Diazene's wandering hydrogens are just the start of the kinds of chemistry we might soon be able to model in a quantum landscape.

This research was published in Science.


This Equation Calculates the Chances We Live in a Computer Simulation – Discover Magazine

The Drake equation is one of the more famous reckonings in science. It calculates the likelihood that we are not alone in the universe by estimating the number of other intelligent civilizations in our galaxy that might exist now.

Some of the terms in this equation are well known or becoming better understood, such as the number of stars in our galaxy and the proportion that have planets in the habitable zone. But others are unknown, such as the proportion of planets that develop intelligent life; and some may never be known, such as the proportion that destroy themselves before they can be discovered.

Nevertheless, the Drake equation allows scientists to place important bounds on the numbers of intelligent civilizations that might be out there.
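
For reference, the Drake equation is usually written as N = R* x fp x ne x fl x fi x fc x L, where R* is the rate of star formation in our galaxy, fp the fraction of stars with planets, ne the number of potentially habitable planets per such star, fl, fi and fc the fractions of those that go on to develop life, intelligence and detectable technology, and L the length of time such a civilization remains detectable.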

However, there is another sense in which humanity could be linked with an alien intelligence our world may just be a simulation inside a massively powerful supercomputer run by such a species. Indeed, various scientists, philosophers and visionaries have said that the probability of such a scenario could be close to one. In other words, we probably are living in a simulation.

The accuracy of these claims is somewhat controversial. So a better way to determine the probability that we live in a simulation would be much appreciated.

Enter Alexandre Bibeau-Delisle and Gilles Brassard at the University of Montreal in Canada. These researchers have derived a Drake-like equation that calculates the chances that we live in a computer simulation. And the results throw up some counterintuitive ideas that are likely to change the way we think about simulations, how we might determine whether we are in one and whether we could ever escape.

Bibeau-Delisle and Brassard begin with a fundamental estimate of the computing power available to create a simulation. They say, for example, that a kilogram of matter, fully exploited for computation, could perform 10^50 operations per second.

By comparison, the human brain, which is also kilogram-sized, performs up to 10^16 operations per second. "It may thus be possible for a single computer the mass of a human brain to simulate the real-time evolution of 1.4 x 10^25 virtual brains," they say.

In our society, a significant number of computers already simulate entire civilizations, in games such as Civilization VI, Hearts of Iron IV, Humankind and so on. So it may be reasonable to assume that in a sufficiently advanced civilization, individuals will be able to run games that simulate societies like ours, populated with sentient conscious beings.

So an interesting question is this: of all the sentient beings in existence, what fraction are likely to be simulations? To derive the answer, Bibeau-Delisle and Brassard start with the total number of real sentient beings, NRe; multiply that by the fraction with access to the necessary computing power, fCiv; multiply this by the fraction of that power that is devoted to simulating consciousness, fDed (because these beings are likely to be using their computers for other purposes, too); and then multiply this by the number of brains they could simulate, RCal.

The resulting equation is this, where fSim is the fraction of simulated brains:
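
Reconstructed from the description above (the published paper may group the terms slightly differently), that works out to:

fSim = (fCiv x fDed x RCal) / (fCiv x fDed x RCal + 1)

The NRe term appears in both the number of simulated brains and the total number of beings, so it cancels out.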

Here RCal is the huge number of brains that fully exploited matter should be able to simulate.

The sheer size of this number, ~10^25, pushes Bibeau-Delisle and Brassard toward an inescapable conclusion. "It is mathematically inescapable from [the above] equation and the colossal scale of RCal that fSim ≈ 1 unless fCiv fDed ≈ 0," they say.

So there are two possible outcomes. Either we live in a simulation or a vanishingly small proportion of advanced computing power is devoted to simulating brains.

It's not hard to imagine why the second option might be true. "A society of beings similar to us (but with a much greater technological development) could indeed decide it is not very ethical to simulate beings with enough precision to make them conscious while fooling them and keeping them cut off from the real world," say Bibeau-Delisle and Brassard.

Another possibility is that advanced civilizations never get to the stage where their technology is powerful enough to perform these kinds of computations. Perhaps they destroy themselves through war or disease or climate change long before then. There is no way of knowing.

But suppose we are in a simulation. Bibeau-Delisle and Brassard ask whether we might escape while somehow hiding our intentions from our overlords. They assume that the simulating technology will be quantum in nature. "If quantum phenomena are as difficult to compute on classical systems as we believe them to be, a simulation containing our world would most probably run on quantum computing power," they say.

This suggests it may be possible to detect our alien overlords, since they cannot measure the quantum nature of our world without revealing their presence. Quantum cryptography uses the same principle; indeed, Brassard is one of the pioneers of this technology.

That would seem to make it possible for us to make encrypted plans that are hidden from the overlords, such as secretly transferring ourselves into our own simulations.

However, the overlords have a way to foil this. All they need to do is to rewire their simulation to make it look as if we are able to hide information, even though they are aware of it all the time. "If the simulators are particularly angry at our attempted escape, they could also send us to a simulated hell, in which case we would at least have the confirmation we were truly living inside a simulation and our paranoia was not unjustified...," conclude Bibeau-Delisle and Brassard, with their tongues firmly in their cheeks.

In that sense, we are the ultimate laboratory guinea pigs: forever trapped and forever fooled by the evil genius of our omnipotent masters.

Time for another game of Civilization VI.

Ref: Probability and Consequences of Living Inside a Computer Simulation. arxiv.org/abs/2008.09275


17 of the best computers and supercomputers to grace the planet – Pocket-lint

(Pocket-lint) - Supercomputers, the behemoths of the tech world, are inventions often put to specific use to solve incredible problems mere mortals couldn't fathom alone.

From studying the decay of nuclear materials to predicting the path of our planet due to global warming and everything in between, these machines do the processing and crunch the numbers. Calculating in moments what it would take mere mortals decades or more to decipher.

Earth Simulator was the world's fastest supercomputer between 2002 and 2004. It was created in Japan, as part of the country's "Earth Simulator Project" which was intended to model the effects of global warming on our planet.

The original Earth Simulator supercomputer cost the government 60 billion yen but was a seriously impressive piece of technology for the time, with 5120 processors and 10 terabytes of memory.

It was later replaced by Earth Simulator 2 in 2009 and Earth Simulator 3 in 2015.

The original Earth Simulator supercomputer was surpassed in performance by IBM's Blue Gene/L prototype in 2004. Blue Gene was designed to reach petaFLOP operating speeds while maintaining low power consumption. As a result, the various Blue Gene systems have been ranked as some of the most powerful and most power-efficient supercomputers in the world.

The Blue Gene supercomputers were so named because they were designed to help analyse and understand protein folding and gene development. They were most well-known for power and performance though, reaching 596 TFLOPS peak performance. They were then outclassed by IBM's Cell-based Roadrunner system in 2008.

ENIAC was one of the very first supercomputers. It was originally designed by the US Army to calculate artillery firing tables and even to study the possibility of thermonuclear weapons. It was said to be able to calculate in just 30 seconds what it would take a person 20 hours to do.

This supercomputer cost around $500,000 to build (over $6 million in today's money).

Notably, the Electronic Numerical Integrator and Computer was later used to compute 2,037 digits of Pi and it was the first computer to do so. Even that computation took 70 hours to complete.

In 2018, the Chinese supercomputer known as Sunway TaihuLight was listed as the third-fastest supercomputer in the world. This system sported nearly 41,000 processors, each of which had 256 processing cores, meaning a total of over 10 million cores.

This supercomputer was also known to be able to carry out an eye-watering 93 quadrillion calculations per second. It was designed for all sorts of research, from weather forecasting to industrial design, life sciences and everything in between.

The Difference Engine was crafted by Charles Babbage in 1822. This was essentially the first computer or at least one of them. It could be used to calculate mathematical functions but unfortunately cost an astronomical amount for the time.

This machine was impressive for what it could do but also for the machines it inspired in the years and decades that followed.

IBM's Roadrunner supercomputer was a $100 million system built at the Los Alamos National Laboratory in New Mexico, USA.

In 2008, it managed to become one of the fastest supercomputers on the planet, reaching a top performance of 1.456 petaFLOPS.

Despite taking up 296 server racks and covering 6,000 square feet, Roadrunner still managed to be the fourth-most energy-efficient supercomputer at the time.

The system was used in order to analyse the decay of US nuclear weapons and examine whether the nuclear materials would be safe in the following years.

Summit is one of the most recent and most powerful supercomputers built by man. Another incredible system built by IBM, this time used at Oak Ridge National Laboratory and sponsored by the U.S. Department of Energy.

Between 2018 and June 2020, Summit (also known as OLCF-4) held the record of being the fastest supercomputer in the world, reaching benchmark scores of 148.6 petaFLOPS. Summit was also the first supercomputer to hit exaop (a quintillion operations per second) speeds, on a mixed-precision workload.

Summit boasts 9,216 22-core CPUs and 27,648 Nvidia Tesla V100 GPUs, which have been put to work in all manner of complex research, from earthquake simulation to extreme weather simulation, as well as predicting the lifetime of neutrinos in physics.

The Sierra is another supercomputer developed by IBM for the US Government. Like Summit, Sierra packs some serious power, with 1,572,480 processing cores and a peak performance of 125 petaFLOPS.

As with IBM Roadrunner, this supercomputer is used to manage the stockpile of US nuclear weapons to assure the safety of those weapons.

Tianhe-2 is another powerful supercomputer built by the Chinese. It's located at the National Supercomputer Center in Guangzhou, China and cost a staggering 2.4 billion Yuan (US$390 million) to build.

It took a team of 1,300 people to create and their hard work paid off when Tianhe-2 was recognised as the world's fastest supercomputer between 2013 and 2015.

The system sports nearly five million processor cores and 1,375 TiBs of memory, making it able to carry out over 33 quadrillion calculations per second.

The CDC 6600 was built in 1964 for $2,370,000. This machine is thought to be the world's first supercomputer, managing three megaFLOPS, three times the speed of the previous record holder.

At the time, this system was so successful that it became a "must-have" for those carrying out high-end research, and as a result over 100 of them were built.

The Cray-1 came almost a decade after the CDC 6600, but quickly became one of the most successful supercomputers of the time. This was thanks to its unique design that not only included an unusual shape but also the first implementation of a vector processor design.

This was a supercomputer system that sported a 64-bit processor running at 80 MHz with 8 megabytes of RAM, making it capable of a peak performance of 250 megaflops. A significant move forward compared to the CDC 6600, which came a mere decade before.

The Frontera supercomputer is the fastest university supercomputer in the world. In 2019, it achieved 23.5 PetaFLOPS making it able to calculate in a mere second what it would take an average person a billion years to do manually.

The system was designed to help teams at the University of Texas to solve massively difficult problems including everything from molecular dynamics to climate simulations and cancer studies too.

Trinity is yet another supercomputer designed to analyse the effectiveness of nuclear weapons.

With 979,072 processing cores and 20.2 petaFLOPS of performance power, it's able to simulate all manner of data to ensure the country's stockpile of weapons is safe.

In 2019, IBM built Pangea III, a system purported to be the world's most powerful commercial supercomputer. It was designed for Total, a global energy company with operations worldwide.

Pangea III was an AI-optimised supercomputer with a high-performance structure but one that was said to be significantly more power-efficient than previous models.

The system was designed to support seismic data acquisition by geoscientists to establish the location of oil and gas resources. Pangea III has a computing power of 25 petaflops (roughly the same as 130,000 laptops) and ranked 11th in the leaderboards of the top supercomputers at the time.

The Connection Machine 5 is interesting for a number of reasons, not simply because it's a marvellous looking supercomputer but also because it's likely the only system on our list to be featured in a Hollywood blockbuster. That's right, this supercomputer appeared on the set of Jurassic Park, where it masqueraded as the park's central control computer.

The Connection Machine 5 was announced in 1991 and later declared the fastest computer in the world in 1993. It ran 1024 cores with peak performance of 131.0 GFLOPS.

It's also said to have been used by the National Security Agency back in its early years.

HPC4 is a Spanish supercomputer that's particularly well-known for being energy efficient while still sporting some serious processing power that includes 253,600 processor cores and 304,320GB of memory.

In 2018, the updated HPC5 supercomputer was combined with HPC4 to result in 70 petaFlops of combined computational capacity. That means this system is capable of performing 70 million billion mathematical operations in a single second.

Selene is Nvidia's supercomputer built on the DGX SuperPOD architecture. This is an Nvidia-powered supercomputer sporting 2,240 NVIDIA A100 GPUs, 560 CPUs and an impressive record that includes being the second most power-efficient supercomputer around.

Selene is particularly impressive when you discover that it was built in just three weeks. We also like that it has its own robot attendant and is able to communicate with human operators via Slack.

Writing by Adrian Willings.


Supercomputer finds best way to air out classroom to ward off virus : The Asahi Shimbun – Asahi Shimbun

The world's fastest supercomputer has found opening just one window and one door diagonally opposite each other is the best way to ventilate an air-conditioned classroom to prevent the novel coronavirus from spreading.

A team of researchers from the Riken Center for Computational Science and other institutions crunched the numbers using Japan's supercomputer Fugaku, which ranked No. 1 in the world in June for its calculation speed.

It ran various simulations to determine the best way to ventilate a classroom to prevent the coronavirus from spreading while also keeping the room temperature cool for students to ensure they do not get heatstroke in the hot summer months.

"People can let a certain amount of fresh air in a room while keeping the room temperature cool by opening windows diagonally opposite from each other," said Makoto Tsubokura, a professor of computational science at Kobe University who heads the team. "They can also take other measures, such as opening windows fully during breaks, at the same time to further lower the risk of infections."

For a classroom measuring roughly 8 meters on each side, with 40 students sitting at their desks, the team simulated various combinations of having the doors, the transom windows facing the corridor and the other windows open, to find the best way to efficiently ventilate the room while it was still being cooled by an air conditioner.

With a window in the back of the room and a door in the front of the room diagonally opposite each other left open by 20 centimeters each, the computer found it takes about 500 seconds for the air in the room to be completely replaced with fresh air.

Under two other configurations, it took roughly 100 seconds each time. One was with all the windows open 20 cm each, with transom windows facing the corridor open. The other was when all the windows were open by 20 cm each with doors at the front and back of the room open 40 cm each.

The first simulation required more time than the other two to ventilate the room because the open window area was smaller. But the amount of air replaced in the first setting was calculated at 1,190 cubic meters per hour.

According to the researchers, when that is converted into the amount of air ventilated per person in the room, it is equivalent to the ventilation standards for a common office under the law.
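
The numbers hang together (the arithmetic here is ours, not the researchers'): 1,190 cubic meters per hour spread across 40 occupants works out to roughly 30 cubic meters per hour per person, and 1,190 m3/h sustained for the 500 seconds quoted above moves about 165 m3 of air, roughly the volume of a classroom of that size.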

The team concluded that a room can be adequately ventilated by opening windows diagonally opposite from each other when accounting for air conditioning efficiency in the summer and heating in the winter.


The Supercomputer Breaking Online Gaming Records and Modeling COVID-19 – BioSpace

Humanity is obsessed with making and breaking records in absolutely everything; just ask the good people at Guinness. In science, we don't exactly have a land-speed record for sequencing a genome or characterizing a protein, but we do know how long it takes to discover a therapeutic (typically 1 to 6 years) and get it to market (another decade, with all the tests and trials required). Even then, only about 10% get approved. We have gone from identifying a new virus to having multiple vaccine candidates in clinical testing within 6 months; that is Earth-shattering, record-breaking speed. This was unthinkable with SARS in the mid-2000s, but our rapidly advancing technology and researchers dropping everything to work on SARS-CoV-2 have made the next-to-impossible a reality.

Scaled Up Computing for Record Breaking Games

A big part of this has been global advancements in computing and processing power, leveraging the power of the cloud. Hadean, a UK-based company, has developed a cloud-native supercomputing platform. The Hadean Platform, a distributed computing platform, streamlines running applications via the cloud by removing excessive middleware and helping scale the process, a journey that has taken the company from the world of gaming to modeling a pandemic.

"Our cardinal application is Aether Engine, a spatial simulation engine, but we also have Mesh, the Big Data framework, and we have Muxer, which is a dynamic content delivery network for high performance workloads," said Miriam Keshani, VP of Operations at Hadean.

They took Aether Engine to the biggest gaming conference around, the Game Developers Conference in San Francisco, and were instantly drawn to massive online gaming, specifically EVE Online. Its makers had demonstrated record-breaking, massive-scale battles, but that often meant slowing the game down.

"Fast forward to GDC 2019. We were there with the makers of EVE Online, CCP Games, and together broke their world record for the most players in a single game, with 14,000 connected clients, a mixture of human and AI," says Keshani.

The company has continued to work with CCP Games as well as Microsoft's Minecraft. In parallel, Hadean also took Aether Engine to a whole new level: the molecular level.

Spatial Engines, Scale and Biology

Hadean and Dr. Paul Bates at the Francis Crick Institute in London partnered to investigate protein-protein interactions. The group is pioneering a new technique in the field called Cross-Docking, an approach to find the best holo structures among multiple structures available for a target protein.

"The formation of specific protein-protein interactions is often a key to understanding function. Proteins are flexible molecules, and as they interact with each other they change shape and flex in response to each other. These can be major structural changes or relatively minor movements, but either way, a significant challenge in the field is being able to predict a priori the extent of such conformational changes and the flexibility of each target," Bates said.

The method can be used to predict protein binding sites, which is useful for studying disease and for drug design; however, it requires a lot of processing power. This is where Aether Engine comes in.

"Despite promising results, this method's additional pre-processing steps (to choose the best input structures) make it practically difficult to do at scale," Bates said.

"Publicly available docking servers rely on shared cloud resources, so a full docking run of all 56 protein pairs investigated [at the Crick Institute] takes weeks to complete. We used Aether Engine to sample tens of thousands of possible conformations for 56 protein pairs, profiled by potential energy, and selected candidates for docking according to features in this energy space," Bates said. "This sophisticated sampling of inputs using Aether Engine led to a significant reduction in computation time and negated any additional burden brought on by this pre-processing step."
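
The quote describes the workflow only at a high level. A minimal sketch of the selection step it outlines might look like the following; the energy function, sample counts and cutoff here are placeholders for illustration, not Hadean's or the Crick Institute's actual code.

    import random

    def potential_energy(conformation):
        # Placeholder scoring function; a real pipeline would evaluate a force field.
        return sum(x * x for x in conformation)

    def sample_conformations(n, dims=3):
        # Stand-in for the large-scale conformational sampling described above.
        return [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(n)]

    def select_docking_candidates(protein_pairs, samples_per_pair=10_000, keep=50):
        # Profile sampled conformations by potential energy and keep the lowest-energy
        # candidates for each pair, mirroring the pre-processing step in the quote.
        candidates = {}
        for pair in protein_pairs:
            conformations = sample_conformations(samples_per_pair)
            conformations.sort(key=potential_energy)
            candidates[pair] = conformations[:keep]
        return candidates

    best = select_docking_candidates([f"pair_{i}" for i in range(56)])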

The research found a 10% uplift in quality compared with other approaches run as publicly available servers, and Aether Engine significantly reduced the bottlenecks around pre-processing and docking.

Modeling the Spread of a New Disease

One of the first things we learned about SARS-CoV-2 is how it gains entry to our cells. The spike protein on the viral envelope binds to ACE2, a receptor on the surface of endothelial cells. This binding effectively acts as a gateway for SARS-CoV-2 to enter our cells and begin replicating, spreading infection throughout the airway.

Buoyed by the success of their first study, the Bates Lab and Hadean renewed their partnership to focus on simulating COVID-19. Aether Engine simulates a model of the lungs, going down more than twenty levels, called generations, at each of which the airway bifurcates.

"In the model, the virus is introduced at the top because we assume it was inhaled. There is a partial computational fluid dynamics element to it, as the virus travels down the airway according to a set diffusion rate. As it travels through the lungs there are elements, also known as agents in this type of model, that the virus agent is able to interact with," Keshani said.

The model relies on a number of parameters and can be used to measure the effect of treatments on viral replication in the lungs.

"How we tweak these parameters will depend on keeping track of the literature over time. If there is an interaction between these two agents, the virus will invade the cell and ultimately cause it to burst after replicating inside the cell. Some of these agents will go back into the airways and some into the interstitial lung space. But there are other elements at play: the immune system fights back, here shown by the antibody and T-cell response, and antiviral drug interventions can be added to the mix," Keshani said.
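
A highly simplified sketch of that kind of agent-based model is shown below. The generation count comes from the article; the diffusion and infection probabilities are invented for illustration and are not the Bates Lab's or Hadean's actual parameters.

    import random

    GENERATIONS = 20          # bifurcating airway levels ("generations"), per the article
    P_MOVE_DEEPER = 0.6       # assumed chance a virion diffuses down to the next generation
    P_INFECT = 0.05           # assumed chance of infecting a cell agent at the current level

    def simulate_virions(n_virions=1000, steps=50, seed=1):
        # Toy agent-based run: virions drift down the airway tree and may infect cells.
        random.seed(seed)
        infections_per_generation = [0] * (GENERATIONS + 1)
        for _ in range(n_virions):
            depth = 0                              # virus is introduced at the top (inhaled)
            for _ in range(steps):
                if random.random() < P_INFECT:     # interaction with a cell agent
                    infections_per_generation[depth] += 1
                    break
                if depth < GENERATIONS and random.random() < P_MOVE_DEEPER:
                    depth += 1                     # crude stand-in for diffusion down the airway
        return infections_per_generation

    print(simulate_virions())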

The model does have its limitations. It relies on a number of parameters, and it simplifies the complexity of the human body and its interaction with the disease by simulating the effect of what is happening rather than the actual underlying events.

"It's not always possible, or even necessary, to go into the level of detail that we'd love to see. It's about making trade-offs between what's useful and what's reality," Keshani added.

Supercomputing & Future of Drug Discovery

Drug discovery is a long and expensive process. In recent years, artificial intelligence platforms have been transforming that process, helping screen drug candidates and shortening the time required to get to clinical trials. Remdesivir was identified by AI platforms scouring existing drugs for potential COVID treatments. But machine learning and deep learning platforms require a lot of data to train on and make better predictions if they are going to break records in drug development outside of a global pandemic. Keshani thinks there is a role for supercomputing here as well.

"If you're able to create a simplification of a world that can model emergent behavior, which is the kind of simulation Aether Engine is able to scale massively, you can start building a picture of what could happen if you let different scenarios play out," Keshani said. "And if you run that same simulation with slightly different parameters 100,000 times or 200,000 times, it's building up a training set."
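
Reduced to its core loop, that idea looks something like the sketch below; the simulate function and parameter ranges are hypothetical placeholders rather than anything from Hadean.

    import random

    def simulate(infection_rate, clearance_rate, seed):
        # Placeholder for a full simulation run; returns one summary outcome.
        rng = random.Random(seed)
        return infection_rate / (clearance_rate + rng.uniform(0.0, 0.1))

    def build_training_set(n_runs=1000):
        # Run the same toy simulation many times with perturbed parameters and
        # record (parameters, outcome) pairs as training data for a downstream model.
        rows = []
        for seed in range(n_runs):
            params = {
                "infection_rate": random.uniform(0.1, 0.9),
                "clearance_rate": random.uniform(0.1, 0.9),
            }
            outcome = simulate(**params, seed=seed)
            rows.append({**params, "outcome": outcome})
        return rows

    training_set = build_training_set()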

See the original post here:

The Supercomputer Breaking Online Gaming Records and Modeling COVID-19 - BioSpace

When it comes to hurricane models, which one is best? – KHOU.com

Is the American or European forecast model more accurate? Let's connect the dots!

When it comes to hurricanes, a lot of information can come at you fast. But when it comes to storm forecast models, is one really better than the other?

Let's connect the dots.

American model vs. European model

The two global models you hear about the most are the American and the European. The American is officially called the Global Forecast System model and is created and operated by the National Weather Service.

And it's no rinky-dink forecast! It uses a supercomputer considered one of the fastest in the world.

The European model is officially called the European Centre for Medium-Range Weather Forecasts model and is the result of a partnership of 34 different nations.

European model outperforms big supercomputer

Looking at last year's forecasts, the European model did do better, especially when we were one to two days out from the storm. That's according to the National Hurricane Center's forecast verification report.

According to the Washington Post, that's because the European model is considered computationally more powerful, thanks to raw supercomputer power and the math behind the model.

Meteorologists weigh in

No matter how much computing power you have, you still need humans to interpret these models, and that's where a skilled meteorologist comes in.

They look at all the models, weigh their strengths and weaknesses and consider the circumstances of each storm. That's how they let us know when to worry and when to stay calm.

Read the rest here:

When it comes to hurricane models, which one is best? - KHOU.com

Natural Radiation Including Cosmic Rays From Outer Space Can Wreak Havoc With Quantum Computers – SciTechDaily

Study shows the need to shield qubits from natural radiation, like cosmic rays from outer space.

A multi-disciplinary research team has shown that radiation from natural sources in the environment can limit the performance of superconducting quantum bits, known as qubits. The discovery, reported today in the journal Nature, has implications for the construction and operation of quantum computers, an advanced form of computing that has attracted billions of dollars in public and private investment globally.

The collaboration between teams at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) and the Massachusetts Institute of Technology (MIT) helps explain a mysterious source of interference limiting qubit performance.

"Our study is the first to show clearly that low-level ionizing radiation in the environment degrades the performance of superconducting qubits," said John Orrell, a PNNL research physicist, a senior author of the study and an expert in low-level radiation measurement. "These findings suggest that radiation shielding will be necessary to attain long-sought performance in quantum computers of this design."

Computer engineers have known for at least a decade that natural radiation emanating from materials like concrete and pulsing through our atmosphere in the form of cosmic rays can cause digital computers to malfunction. But digital computers aren't nearly as sensitive as a quantum computer.

"We found that practical quantum computing with these devices will not be possible unless we address the radiation issue," said PNNL physicist Brent VanDevender, a co-investigator on the study.

Natural radiation may interfere with both superconducting dark matter detectors (seen here) and superconducting qubits. Credit: Timothy Holland, PNNL

The researchers teamed up to solve a puzzle that has been vexing efforts to keep superconducting quantum computers working long enough to make them reliable and practical. A working quantum computer would be thousands of times faster than even the fastest supercomputer operating today, and it would be able to tackle computing challenges that today's digital computers are ill-equipped to take on. "But the immediate challenge is to have the qubits maintain their state, a feat called coherence," said Orrell. "This desirable quantum state is what gives quantum computers their power."

MIT physicist Will Oliver was working with superconducting qubits and became perplexed at a source of interference that helped push the qubits out of their prepared state, leading to decoherence, and making them non-functional. After ruling out a number of different possibilities, he considered the idea that natural radiation from sources like metals found in the soil and cosmic radiation from space might be pushing the qubits into decoherence.

A chance conversation between Oliver, VanDevender, and his long-time collaborator, MIT physicist Joe Formaggio, led to the current project.

To test the idea, the research team measured the performance of prototype superconducting qubits in two different experiments. The pair of experiments clearly demonstrated the inverse relationship between radiation levels and the length of time qubits remain in a coherent state.

Natural radiation in the form of X-rays, beta rays, cosmic rays and gamma rays can penetrate a superconducting qubit and interfere with quantum coherence. Credit: Michael Perkins, PNNL

"The radiation breaks apart matched pairs of electrons that typically carry electric current without resistance in a superconductor," said VanDevender. "The resistance of those unpaired electrons destroys the delicately prepared state of a qubit."

The findings have immediate implications for qubit design and construction, the researchers concluded. For example, the materials used to construct quantum computers should exclude material that emits radiation, the researchers said. In addition, it may be necessary to shield experimental quantum computers from radiation in the atmosphere. At PNNL, interest has turned to whether the Shallow Underground Laboratory, which reduces surface radiation exposure by 99%, could serve future quantum computer development. Indeed, a recent study by a European research team corroborates the improvement in qubit coherence when experiments are conducted underground.

A worker in the ultra-low radiation detection facility at the Shallow Underground Laboratory located at Pacific Northwest National Laboratory. Credit: Andrea Starr, PNNL

"Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing," said VanDevender.

The researchers emphasize that factors other than radiation exposure are bigger impediments to qubit stability for the moment. Things like microscopic defects or impurities in the materials used to construct qubits are thought to be primarily responsible for the current performance limit of about one-tenth of a millisecond. But once those limitations are overcome, radiation begins to assert itself as a limit and will eventually become a problem without adequate natural radiation shielding strategies, the researchers said.
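
Those two figures can be combined in the usual way for independent decoherence channels, where rates (inverse times) add. The short sketch below uses the numbers quoted in the article and treats the "few milliseconds" radiation ceiling as roughly 4 ms, which is an assumption made only for the arithmetic.

    # Independent loss channels add as rates: 1/T_total = 1/T_materials + 1/T_radiation.
    t_materials_ms = 0.1    # about one-tenth of a millisecond: today's materials-limited coherence
    t_radiation_ms = 4.0    # assumed value for the "few milliseconds" radiation-imposed ceiling

    t_total_ms = 1.0 / (1.0 / t_materials_ms + 1.0 / t_radiation_ms)
    print(f"combined coherence ~ {t_total_ms:.3f} ms")   # ~0.098 ms: radiation barely matters yet

    # If materials improved to, say, 10 ms, the same formula gives ~2.9 ms, so radiation
    # would then dominate -- which is the point the researchers make.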

In addition to helping explain a source of qubit instability, the research findings may also have implications for the global search for dark matter, which is thought to make up just under 85% of the matter in the universe but has so far escaped detection with existing instruments. One approach to detecting it relies on superconducting detectors of a similar design to qubits. Dark matter detectors also need to be shielded from external sources of radiation, because radiation can trigger false recordings that obscure the sought-after dark matter signals.

"Improving our understanding of this process may lead to improved designs for these superconducting sensors and lead to more sensitive dark matter searches," said Ben Loer, a PNNL research physicist who is working both in dark matter detection and on radiation effects on superconducting qubits. "We may also be able to use our experience with these particle physics sensors to improve future superconducting qubit designs."

For more on this research, read Quantum Computing Performance May Soon Hit a Wall, Due to Interference From Cosmic Rays.

Reference: "Impact of ionizing radiation on superconducting qubit coherence" by Antti P. Vepsäläinen, Amir H. Karamlou, John L. Orrell, Akshunna S. Dogra, Ben Loer, Francisca Vasconcelos, David K. Kim, Alexander J. Melville, Bethany M. Niedzielski, Jonilyn L. Yoder, Simon Gustavsson, Joseph A. Formaggio, Brent A. VanDevender and William D. Oliver, 26 August 2020, Nature. DOI: 10.1038/s41586-020-2619-8

The study was supported by the U.S. Department of Energy, Office of Science, the U.S. Army Research Office, the ARO Multi-University Research Initiative, the National Science Foundation and the MIT Lincoln Laboratory.

Read the rest here:

Natural Radiation Including Cosmic Rays From Outer Space Can Wreak Havoc With Quantum Computers - SciTechDaily

The Tech Field Failed a 25-Year Challenge to Achieve Gender Equality by 2020: Culture Change Is Key to Getting on Track – Nextgov

In 1995, pioneering computer scientist Anita Borg challenged the tech community to a moonshot: equal representation of women in tech by 2020. Twenty-five years later, we're still far from that goal. In 2018, fewer than 30% of the employees in tech's biggest companies and 20% of faculty in university computer science departments were women.

On Women's Equality Day in 2020, it's appropriate to revisit Borg's moonshot challenge. Today, awareness of the gender diversity problem in tech has increased, and professional development programs have improved women's skills and opportunities. But special programs and "fixing women" by improving their skills have not been enough. By and large, the tech field doesn't need to fix women; it needs to fix itself.

As the former head of a national supercomputer center and a data scientist, I know that cultural change is hard but not impossible. It requires organizations to prioritize and promote material, not symbolic, change. It requires sustained effort and shifts of power to include more diverse players. Intentional strategies to promote openness, ensure equity, diversify leadership and measure success can work. I've seen it happen.

Swimming Upstream

I loved math as a kid. I loved finding elegant solutions to abstract problems. I loved learning that Möbius strips have only one side and that there is more than one size of infinity. I was a math major in college and eventually found a home in computer science in graduate school.

But as a professional, I've seen that tech is skewed by currents that carry men to success and hold women back. In academic computer science departments, women are usually a small minority.

In most organizations I have dealt with, women rarely occupy the top job. From 2001 to 2009, I led a National Science Foundation supercomputer center. Ten years after moving on from that job, I'm still the only woman to have occupied that position.

Several years into my term, I discovered that I was paid one-third less than others with similar positions. Successfully lobbying for pay equity with my peers took almost a year and a sincere threat to step down from a job I loved. In the work world, money implies value, and no one wants to be paid less than their peers.

Changing Culture Takes Persistence

Culture impacts outcomes. During my term as a supercomputer center head, each center needed to procure the biggest, baddest machine in order to get the bragging rights and resources necessary to continue. Supercomputer culture in those days was hypercompetitive and focused on dominance of supercomputing's Top500 ranking.

In this environment, women in leadership were unusual, and there was more for women to prove, and quickly, if we wanted to get something done. The field's focus on dominance was reflected in organizational culture.

My team and I set out to change that. Our efforts to include a broader range of styles and skill sets ultimately changed the composition of our centers leadership and management. Improving the organizational culture also translated into a richer set of projects and collaborations. It helped us expand our focus to infrastructure and users and embrace the data revolution early on.

Setting the Stage for Cultural Diversity

Diverse leadership is a critical part of creating diverse cultures. Women are more likely to thrive in environments where they have not only stature, but responsibility, resources, influence, opportunity and power.

I've seen this firsthand as a co-founder of the Research Data Alliance (RDA), an international community organization of more than 10,000 members that has developed and deployed infrastructure to facilitate data sharing and data-driven research. From the beginning, gender balance has been a major priority for RDA and, as we grew, a reality in all leadership groups in the organization.

RDA's plenaries also provide a model for diverse organizational meetings, in which speaker lineups are expected to include both women and men, and all-male panels, nicknamed "manels," are strongly discouraged. Women both lead and thrive in this community.

Having women at the table makes a difference. As a board member of the Alfred P. Sloan Foundation, I've seen the organization improve the diversity of the annual classes of fellows in the highly prestigious Sloan Research Fellows program. To date, 50 Nobel Prize winners and many professional award winners are former Sloan Research Fellows.

Since 2013, the accomplished community members Sloan has chosen for its Fellowship Selection Committees have been half or more women. During that time, the diversity of Sloan's research fellowship applicant pool and awardees has increased, with no loss of quality.

Calming Cultural Currents

Culture change is a marathon, not a sprint, requiring constant vigilance, many small decisions, and often changes in who holds power. My experience as supercomputer center head, and with the Research Data Alliance, the Sloan Foundation and other groups has shown me that organizations can create positive and more diverse environments. Intentional strategies, prioritization and persistent commitment to cultural change can help turn the tide.

Some years ago, one of my best computer science students told me that she was not interested in a tech career because it was so hard for women to get ahead. Cultures that foster diversity can change perceptions of what jobs women can thrive in, and can attract, rather than repel, women to study and work in tech.

Calming the cultural currents that hold so many women back can move the tech field closer to Borg's goal of equal representation in the future. It's much better to be late than never.

The Sloan Foundation has provided funding to The Conversation US.

Francine Berman is a Hamilton Distinguished Professor of Computer Science at Rensselaer Polytechnic Institute.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More:

The Tech Field Failed a 25-Year Challenge to Achieve Gender Equality by 2020: Culture Change Is Key to Getting on Track - Nextgov

Cerebras Systems Expands Global Footprint with Toronto Office Opening – HPCwire

TORONTO and LOS ALTOS, Calif., Aug. 26, 2020 Cerebras Systems announced its international expansion in Canada with the opening of its Toronto office. The regional office, which will be focused on accelerating the companys R&D efforts and establishing an AI center of excellence, will be led by local technology industry veteran Nish Sinnadurai. With more than fifteen engineers currently employed, Cerebras plans to triple its Toronto engineering team in the coming year.

"Canada is a hotbed of technology innovation, and we look forward to driving AI compute excellence throughout the province of Ontario," said Andrew Feldman, CEO and Co-Founder of Cerebras. "We are excited to grow our presence in the region and to attract, hire and develop top local talent in high-performance computing and AI."

"I am pleased that Cerebras has chosen to open a Toronto office to take advantage of the local technology and engineering talent and regional growth opportunities," said John Tory, Mayor of Toronto. "We welcome and celebrate Cerebras' expansion as the company fosters AI growth and innovation in the Toronto region."

Throughout its due diligence and expansion process, Cerebras Systems worked closely with Toronto Global, a team of experienced business advisors assisting global businesses to expand into the Toronto region, as well as with the office of the Ontario Senior Economic Officer based in San Francisco.

Nish Sinnadurai will serve as Toronto Site Lead and Director of Software Engineering. Nish comes to Cerebras Systems with deep technical engineering expertise, having previously served as Director of Software Engineering at the Intel Toronto Technology Centre, where he led a multi-disciplinary organization developing large-scale, high-performance software for state-of-the-art systems. Prior to that, he held various roles at Altera (acquired by Intel) and Research in Motion Ltd (now Blackberry).

"I am honored to join the Cerebras team and work alongside a group of world-class engineers who have invented a one-of-a-kind technology with the Wafer-Scale Engine (WSE) and CS-1 system, one of the fastest AI computers ever made," said Nish. "I look forward to helping push the boundaries of AI and machine learning and define the future of computing with our talented team in Toronto."

In November 2019, Cerebras announced the Cerebras CS-1, the industry's fastest AI computer, which was recently selected as one of Fast Company's Best World Changing Ideas and a winner of IEEE Spectrum's Emerging Technology Awards. Cerebras also recently announced CS-1 deployments at some of the largest computer facilities in the U.S., including Argonne National Laboratory, Lawrence Livermore National Laboratory and Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer.

Cerebras' flagship product, the CS-1, is powered by the Cerebras Wafer Scale Engine (WSE), the industry's first and only wafer-scale processor. The WSE contains 400,000 AI-optimized compute cores, more than one trillion transistors, and measures 46,225 square millimeters. The CS-1 system also comprises the CS-1 enclosure, a complete computer system that delivers power, cooling and data to the WSE, and the Cerebras software platform, which makes the solution quick to deploy and easy to use. These technologies combine to make the CS-1 the highest-performing AI accelerator ever built, allowing AI researchers to use their existing software models without modification.

For more information on Cerebras Systems and the Cerebras CS-1, please visit www.cerebras.net.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The Cerebras CS-1 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE). The WSE is the largest chip ever built. It contains 1.2 trillion transistors, covers more than 46,225 square millimeters of silicon and contains 400,000 AI-optimized compute cores. The largest graphics processor on the market has 54 billion transistors, covers 826 square millimeters and has only 6,912 cores. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now train in minutes on the Cerebras WSE.

Source: Cerebras Systems

See the article here:

Cerebras Systems Expands Global Footprint with Toronto Office Opening - HPCwire

Here’s the smallest AI/ML supercomputer ever – TechRadar

NEC is known for its vector-processor-powered supercomputers, most notably the Earth Simulator. Typically, NEC's vector processors have been aimed at numerical simulation and similar workloads, but recently NEC unveiled a platform that makes its latest SX-Aurora Tsubasa supercomputer-class processors usable for artificial intelligence and machine learning workloads.

"The vector processor, with advanced pipelining, is a technology that proved itself long ago," wrote Robbert Emery, who is responsible for commercializing NEC Corporation's advanced technologies in HPC and AI/ML platform solutions.

"Vector processing paired with middleware optimized for parallel pipelining is lowering the entry barriers for new AI and ML applications, and is set to solve the challenges both today and in the future that were once only attainable by the hyperscale cloud providers."

The SX-Aurora Tsubasa AI Platform supports both Python and TensorFlow development environments, as well as programming languages such as C/C++ and Fortran.

NEC offers multiple versions of its latest SX-Aurora Tsubasa cards for desktops and servers that can handle FHFL cards. The most advanced Vector Engine processor model is the Type 20, which features 10 cores running at 1.6GHz paired with 48GB of HBM2 memory. The card offers a peak performance of 3.07 FP32 TFLOPS or 6.14 FP16 TFLOPS.
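
As a rough sanity check of those figures (a back-of-envelope sketch: only the core count, clock and peak number come from the article; the interpretation of the result is an inference):

    cores = 10
    clock_hz = 1.6e9
    peak_flops = 3.07e12          # the FP32 peak figure quoted above

    flops_per_core_per_cycle = peak_flops / (cores * clock_hz)
    print(round(flops_per_core_per_cycle))   # ~192 FLOPs per core per cycle

    # Roughly 192 operations per cycle per core is consistent with very wide vector
    # units performing fused multiply-adds, which is the design point of the
    # SX-Aurora Vector Engine (the exact pipeline breakdown is not from the article).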

While the peak performance numbers offered by the SX-Aurora Tsubasa look rather pale compared with those of the latest GPUs (which are also a class of vector processors), such as NVIDIA's A100, NEC believes that its vector processors can still be competitive, especially on datasets that require 48GB of onboard memory (the NVIDIA card has only 40GB).

As an added advantage, the NEC SX-Aurora Tsubasa card can run typical supercomputing workloads in a desktop workstation.

NEC does not publish prices for its SX-Aurora Tsubasa cards, but those who want to try the product can contact the company for quotes. In addition, it is possible to try the hardware in the cloud.

Sources: ITMedia, EnterpriseAI, NEC (via HPCwire)

Read the original here:

Here's the smallest AI/ML supercomputer ever - TechRadar

CSC’s Supercomputer Mahti is Now Available to Researchers and Students – HPCwire

"Our efficient national environment supports Finnish research by offering researchers competitive resources to be among the first to solve even the toughest challenges in their fields. It also contributes to facilitating researchers' access to more ambitious international collaborative research projects. The COVID-19 pandemic is a good indication of the importance of our research infrastructure. We can react quickly and allocate resources to research when a critical need arises," says Pekka Lehtovuori, Director of Computing Services at CSC.

Mahti is a robust liquid-cooled supercomputer capable of solving heavy computational tasks. Mahti can be used, for example, for computational drug design, extensive molecular dynamics simulations, or to model space weather and climate change.

"Modern drug design requires extensive computational resources. Supercomputers can be used to analyze how a drug candidate affects the function of proteins in the body, but also the side effects caused by the same drug. Personalized medicine is also highly dependent on CSC's computing environment," says Antti Poso, Professor of Drug Design at the University of Eastern Finland and the University of Tübingen in Germany. "Mahti is also contributing to the re-use of drugs, which enables a faster response even in situations such as the COVID-19 pandemic."

CSC's data management and computing services will also be open to educational use by higher education institutions.

"With the new data management and computing environment, more and more people will be able to take advantage of modern research equipment. The CSC environment is now also available for educational use in universities and academic use in research institutes. The expanded user base helps ensure that we get all the benefits from the investment and that expertise is accumulated not only in CSC but also in user organizations," says Erja Heikkinen, Director at the Ministry of Education and Culture.

Mahti is the fastest supercomputer in the Nordic countries

The supercomputer Mahti is a BullSequana XH2000 system from Atos with two 64-core AMD EPYC (Rome) processors per node. This processor is the latest version of the EPYC family (7H12). The total number of cores is about 180,000, and Mahti has almost 9 petabytes of storage capacity.

The interconnect network represents the latest technology, and its speed up to the nodes is 200 Gbps (Gigabits per second). Mahti is one of the worlds first supercomputers with such a fast interconnect.

Mahti ranked 47th on the Top500 list of supercomputers in June 2020, with a maximum measured performance of 5.39 petaflops. The theoretical peak performance is 7.5 petaflops. Mahti is the fastest supercomputer in the Nordic countries, and compared with other European systems it ranks 17th.
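
Those headline numbers hang together on a quick back-of-envelope pass; the node count and per-core throughput below are derived from the article's figures rather than stated in it.

    cores_total = 180_000          # approximate figure from the article
    cores_per_node = 2 * 64        # two 64-core EPYC 7H12 processors per node
    peak_pflops = 7.5
    linpack_pflops = 5.39

    nodes = cores_total / cores_per_node
    gflops_per_core = peak_pflops * 1e6 / cores_total
    efficiency = linpack_pflops / peak_pflops

    print(f"~{nodes:.0f} nodes")                         # roughly 1,400 nodes
    print(f"~{gflops_per_core:.0f} GF/s peak per core")  # ~42 GF/s per core
    print(f"Linpack efficiency ~{efficiency:.0%}")       # ~72% of theoretical peak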

On the recent Supercomputer Green500 list, Mahti ranked 44th. The Green500 lists the worlds most energy-efficient supercomputers. There are only 14 supercomputers that are faster and more energy-efficient than Mahti.

"I am really proud of this achievement because of the flawless cooperation with the CSC project team, a perfect example of European cooperation. We set a common goal, which is to provide cutting-edge HPC technology to researchers in Finnish universities and research institutes, and achieved it," says Janne Ahonen, Atos Country Manager for Finland and the Baltics.

CSC's new computing environment

The availability of Mahti completes CSC's new data management and computing environment, which consists of Mahti, Puhti, and Allas.

Mahti is the robust supercomputer in CSC's environment, geared towards medium- to large-scale simulations. Puhti, a BullSequana X400 system from Atos, is a general-purpose supercomputer for a wide range of use cases. Puhti was launched in autumn 2019.

The Puhti artificial intelligence partition, Puhti-AI, is a GPU-accelerated supercomputer designed specifically for artificial intelligence research and applications.

The entire CSC computing environment is served by a common data management solution, Allas, that is based on CEPH object storage technology and has a storage capacity of 12 Petabytes.

About CSC

CSC is a Finnish center of expertise in ICT that provides world-class services for research, education, culture, public administration and enterprises, to help them thrive and benefit society at large. http://www.csc.fi

About Atos

Follow this link:

CSC's Supercomputer Mahti is Now Available to Researchers and Students - HPCwire


SberCloud’s Cloud Platform Sweeps Three International Accolades At IT World Awards – Exchange News Direct

AI Cloud, the cloud platform of SberCloud, part of the Sberbank ecosystem, has won awards in three categories at the 15th international IT World Awards.

Featuring executives, professionals, and experts from the worlds top IT companies, the judging panel for the award recognized AI Cloud as the gold winner in the New Product-Service of the Year | Artificial Intelligence category, the silver winner in the Data Science Platforms category, and the bronze winner in the Hot Technology of the Year | Artificial Intelligence category.

The award was organized by Network Products Guide, the industry's leading technology research and advisory guide from Silicon Valley, California, U.S., which shares insights with top executives of the world's leading IT companies into the best IT products, solutions, and services.

David Rafalovsky, CTO of Sberbank Group, Executive Vice President and Head of Technology, said:

"We are proud that the unique IT project of Sberbank and SberCloud has gained international recognition from a qualified jury. AI Cloud and the Christofari were designed for convenient and reliable use of artificial intelligence technology by a wide variety of entities and organizations, from startups and small businesses to large companies and research centers. Many of our partners and companies that are Sberbank ecosystem members already use AI Cloud and the Christofari to develop proprietary products and services."

The universal cloud platform AI Cloud has the computing power of the Christofari supercomputer at its core and allows for the use of artificial intelligence (AI) across a raft of business, industry, science, and education domains. AI Cloud users can work with data, create AI models, train neural networks, and tailor microservices from the latter to get things done in the cloud through a single interface as fast as possible.

Sberbank already uses AI algorithms to recognize and understand human speech while also utilizing them in voice assistants, voice interfaces, behavioral analytics, and other workflow situations.

The architecture and computing power of Russia's fastest supercomputer, Christofari, which was specially built in partnership with NVIDIA to work with AI, let stakeholders train models based on complex neural networks in record time. Thanks to AI Cloud, the Christofari can be accessed from anywhere in the world with an Internet connection. Its capacity, architecture, and affordability make it a unique supercomputer on a global scale.

SberCloud is a cloud platform developed by Sberbank Group to provide services through the IT architecture of the largest bank in Russia, CIS, and Eastern Europe. The infrastructure, IT platforms, and SberCloud services are the pillars of Sberbank Groups digital ecosystem, also being available to external customers, such as companies and governmental organizations.

The Christofari supercomputer is the fastest supercomputer in Russia. Designed by Sberbank and SberCloud together with NVIDIA, it is based on NVIDIA DGX-2 high-performance nodes featuring Tesla V100 computing accelerators. In the LINPACK benchmark, the supercomputer's Rmax value reached 6.7 petaflops, against a theoretical peak of 8.8 petaflops.

Read this article:

SberCloud's Cloud Platform Sweeps Three International Accolades At IT World Awards - Exchange News Direct

Supercomputer Market Growth, Future Prospects And Competitive Analysis (2020-2026) – Bulletin Line

The research report on the global Supercomputer Market offers an all-encompassing analysis of the recent and upcoming state of this industry and also analyzes several strategies for market growth. The Supercomputer report also provides a comprehensive study of the industry environment and the industry chain structure, and sheds light on major factors including leading vendors, growth rate, production value, and key regions.

Request for a sample report here @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#request_sample

Top Key Players:

Fujitsu, Cray, HPE, Dell, Lenovo

Supercomputer Market segmentation by region; the regional analysis covers:

United States, Canada, Germany, UK, France, Italy, Russia, Switzerland, Sweden, Poland, China, Japan, South Korea, Australia, India, Taiwan, Thailand, Philippines, Malaysia, Brazil, Argentina, Columbia, Chile, Saudi Arabia, UAE, Egypt, Nigeria, South Africa and Rest of the World.

The Supercomputer Market report introduces the industrial chain analysis, downstream buyers, and raw material sources, along with accurate insights into market dynamics. The report is articulated with a detailed view of the global Supercomputer industry, including global production, sales, revenue, and CAGR. Additionally, it offers insights into Porter's Five Forces, covering substitutes, buyers, industry competitors, and suppliers, with genuine information for understanding the global Supercomputer Market.

Get Impressive discount @:

https://www.reportspedia.com/discount_inquiry/discount/68930

Market segment by Type, the product can be split into:

Commercial Industries, Research Institutions, Government Entities, Others

Market segment by Application, split into:

Linux, Unix, Others

The Supercomputer Market study provides a viability analysis, a SWOT analysis, and various other information about the leading companies operating in the global Supercomputer Market, giving a complete account of the competitive environment of the industry with the aid of thorough company profiles. The research also examines current market performance and future growth prospects for the industry.

Inquire Before Buying @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#inquiry_before_buying

In this study, the years considered to estimate the market size of Supercomputer are as follows:

Table of Contents:

Get Full Table of Content @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#table_of_contents

See the original post:

Supercomputer Market Growth, Future Prospects And Competitive Analysis (2020-2026) - Bulletin Line

A continent works to grow its stake in quantum computing – University World News

AFRICA

"South Africa is a few steps ahead in the advancement of quantum computing and quantum technologies in general," said Mark Tame, professor in photonics at Stellenbosch University in the Western Cape.

South Africa's University of KwaZulu-Natal has also been working on quantum computing for more than a decade, gradually building up a community around the field.

"The buzz about quantum computing in South Africa just started recently due to the agreement between [Johannesburg's] University of the Witwatersrand and IBM," said Professor Francesco Petruccione, interim director of the National Institute for Theoretical and Computational Science and South African Research Chair in Quantum Information Processing and Communication at the School of Chemistry and Physics Quantum Research Group, University of KwaZulu-Natal.

Interest was intensified by Google's announcement last October that it had developed a 53-qubit device which, it claimed, took 200 seconds to sample one instance of a quantum circuit a million times. The IT company claimed it would take a state-of-the-art digital supercomputer 10,000 years to achieve this.

A University of Waterloo Institute for Quantum Computing paper stresses quantum computers' ability to express a signal (a qubit) with more than one value at the same time (superposition), with that signal manifested in another device independently, but in exactly the same way (entanglement). This enables quantum computers to handle much more complex questions and problems than standard computers using binary codes of ones and zeros.
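
Both properties can be seen in a few lines of linear algebra. The NumPy sketch below builds the textbook two-qubit Bell state; it is a generic illustration, not code from the Waterloo paper or from IBM's platform, though the same circuit can be run through IBM's cloud service.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate: creates superposition
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                        # controlled-NOT: creates entanglement
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                                        # start in |00>
    state = CNOT @ np.kron(H, I) @ state                  # H on qubit 0, then CNOT(0 -> 1)

    print(np.abs(state) ** 2)   # [0.5, 0, 0, 0.5]: only |00> and |11> ever appear,
                                # so measuring one qubit fixes the other -- entanglement.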

The IBM Research Laboratory in Johannesburg offers African researchers the potential to harness such computing power. It was established in 2015 as part of a 10-year investment programme through the South African government's Department of Trade and Industry.

It is a portal to the IBM Quantum Experience, a cloud-based quantum computing platform accessible to other African universities that are part of the African Research Universities Alliance (ARUA), which involves 16 of the continent's leading universities (in Ethiopia, Ghana, Kenya, Nigeria, Rwanda, Senegal, Tanzania, Uganda and South Africa).

Levelling of the playing field

"The IBM development has levelled the playing field for students, [giving them] access to the same hardware as students elsewhere in the world. There is nothing to hold them back from developing quantum applications and code. This has been really helpful for us at Stellenbosch to work on projects which need access to quantum processors not available to the general public," said Tame.

While IBM has another centre on the continent, at the Catholic University of Eastern Africa in Nairobi, Kenya, in 2018 the University of the Witwatersrand became the first African university to join the American computing giant's Quantum Computing Network. "They are starting to increase the network to have an army of quantum experts," said Professor Zeblon Vilakazi, a nuclear physicist and vice-chancellor and principal of the University of the Witwatersrand.

At a continental level, Vilakazi said, Africa is still in a learning phase regarding quantum computing. "At this early stage we are still developing the skills and building a network of young students," he said. The university has sent students to IBM's Zurich facility to learn about quantum computing, he added.

To spur cooperation in the field, a Quantum Africa conference has been held every year since 2010, with the first three in South Africa and others in Algeria and Morocco. Last year's event was in Stellenbosch, while this year's event, to be hosted at the University of Rwanda, was postponed until 2021 due to the COVID-19 pandemic.

Growing African involvement

"Rwanda is making big efforts to set up quantum technology centres, and I have former students now working in Botswana and the Gambia. It is slowly diffusing around the continent," said Petruccione.

Academics participating in the Stellenbosch event included Yassine Hassouni of Mohammed V University, Rabat; Nigerian academic Dr Obinna Abah of Queen's University Belfast; and Haikel Jelassi of the National Centre for Nuclear Sciences and Technologies, Tunisia.

In South Africa, experimental and theoretical work is also being carried out on quantum communications: the use of quantum physics to carry messages via fibre-optic cable.

"A lot of work is being done on the hardware side of quantum technologies by various groups, but funding for these things is not the same order of magnitude as in, say, North America, Australia or the UK. We have to do more with less," said Tame.

Stellenbosch, near Cape Town, is carrying out research into quantum computing, quantum communication and quantum sensing (the ability to detect if a quantum-sent message is being read).

"I would like it to grow over the next few years by bringing in more expertise and helping the development of quantum computing and technologies for South Africa," said Tame.

Witwatersrand is focusing on quantum optics, as is Petruccione's team, while there is collaboration in quantum computing with the University of Johannesburg and the University of Pretoria.

University programmes

Building up and retaining talent is a key challenge as the field expands in Africa, as is expanding courses in quantum computing.

"South Africa doesn't offer a master's in quantum computing, or an honours programme, which we need to develop," said Petruccione.

This is set to change at the University of the Witwatersrand.

"We will launch a syllabus in quantum computing, and we're in the process of developing courses at the graduate level in physics, natural sciences and engineering. But such academic developments are very slow," said Vilakazi.

Further development will hinge on governmental support, with a framework programme for quantum computing being developed by Petruccione. "There is interest from the [South African] Department of Science and Innovation. Because of [the economic impact of] COVID-19, I hope some money is left for quantum technology, but at least the government is willing to listen to the community," he said.

Universities are certainly trying to tap non-governmental support to expand quantum computing, engaging local industries, banks and pharmaceutical companies to get involved in supporting research.

"We have had some interesting interactions with local banks, but it needs to be scaled up," said Petruccione.

Applications

While African universities are working on quantum computing questions that could be applicable anywhere in the world, there are plans to look into more localised issues. One is drug development for tuberculosis, malaria and HIV, diseases that have afflicted Southern Africa for decades, with quantum computing's ability to handle complex modelling of natural structures a potential boon.

"There is potential there for helping in drug development through quantum simulations. It could also help develop quantum computing networks in South Africa and more broadly across the continent," said Vilakazi.

Agriculture is a further area of application. "The production of fertilisers is very expensive as it requires high temperatures, but bacteria in the soil do it for free. The reason we can't do what bacteria do is because we don't understand it. The hope is that, as quantum computing is good at chemical reactions, maybe we can model it and that would lead to cheaper fertilisers," said Petruccione.

With the world in a quantum computing race, and the US and China at the forefront, Africa is well positioned to take advantage of developments. "We can pick the best technology coming out of either country, and that is how Africa should position itself," said Vilakazi.

Petruccione's group currently has collaborations with Russia, India and China. "We want to do satellite quantum communication. The first step is to have a ground station, but that requires investment," he said.

Go here to see the original:

A continent works to grow its stake in quantum computing - University World News

Supercomputer predicts where Spurs will finish in the 2020/21 Premier League table – The Spurs Web

With the Premier League fixtures for the 2020/21 season being released last week, it is that time of the year again when fans and pundits start predicting who will finish where.

There is some cause for optimism for Tottenham fans heading into the season, given the strong manner in which Jose Mourinho's men finished the 2019/20 campaign.

Tottenham only lost once in their final nine games after the restart, a run which enabled the club to sneak into sixth place and book their place in the Europa League.

Spurs will be hoping to carry that momentum into the start of next season, but they will be aiming a lot higher than sixth in what will be Mourinho's first full season in charge.

However, according to the predictions of Unikrn's supercomputer (as relayed by The Mirror), the Lilywhites will once again miss out on the top four next season.

Based on its calculations, Spurs will end the season in fifth place, one place ahead of their North London rivals Arsenal. Manchester City are predicted to win the title next season just ahead of Liverpool, with Manchester United and Chelsea rounding off the top four.

Spurs Web Opinion

It is too early to be making any predictions considering the transfer market will be open for a while. I believe we will finish in the top four next season as long as we make at least three more intelligent additions (a right-back, centre-back and a striker).

Continued here:

Supercomputer predicts where Spurs will finish in the 2020/21 Premier League table - The Spurs Web

