Rent this $11000 supercomputer for only $0.39/hour but there’s a catch – TechRadar

No, you won't be able to play Crysis on this, but for those who need a bit more oomph for their web hosting projects, there are dedicated servers. This Black Friday and Cyber Monday, Ionos is offering $100 of credit on any server configuration for the first month, making it a great platform for testers and makers.

(PSA: we will be updating our Black Friday web hosting deals and Black Friday website builder deals pages at least once per day until Cyber Monday.)

With per-minute billing, no minimum contract and no setup fees, the Intel-based 4XL-192 HDD is the most powerful affordable dedicated server available at Ionos, and it is probably more powerful than the top supercomputer of the late 1990s, the Intel-powered ASCI Red, which had nearly 10,000 cores, a peak performance of 3.21 Tflops and a cool price tag of $55 million in today's dollars.

The 4XL-192 HDD runs on a single Intel Xeon Gold 6210U CPU. This Cascade Lake processor has 20 cores, is clocked at 2.5GHz and has a peak performance of more than 1.5 Tflops. It is also one of Intel's leading chips when it comes to price per flop.
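As a rough sanity check on that peak figure, a back-of-envelope estimate for a 20-core, 2.5GHz Cascade Lake part lands in the same ballpark. The assumption of two AVX-512 FMA units per core is ours, not a spec quoted in the article, and sustained clocks under AVX-512 load are typically lower:

```python
# Back-of-envelope peak FP64 estimate for a 20-core, 2.5 GHz Cascade Lake CPU.
# Assumes two AVX-512 FMA units per core (8 doubles per unit, multiply + add).

cores = 20
base_clock_hz = 2.5e9
flops_per_cycle_per_core = 2 * 8 * 2  # 2 FMA units x 8 doubles x (mul + add)

peak_flops = cores * base_clock_hz * flops_per_cycle_per_core
print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")  # ~1.60 TFLOPS
```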

This is backed by a staggering 192GB of DDR4 ECC memory and a pair of 4TB SATA hard disk drives configured in RAID-1 for redundancy. Ubuntu 20.04 is included as the default operating system, and you can upgrade it to Windows Server 2019.

As with all Ionos dedicated servers, there's also a 1Gbps unlimited, consume-as-much-as-you-want data pipe, load balancing, and a choice of server location (the United States, Spain, the United Kingdom or Germany).

A similar Dell PowerEdge R640 rack server with the same CPU and smaller SATA hard drives can be had for about $11,000 excluding rebates. Users will be billed per month rather than per hour, and you can cancel your package risk free for 30 days. If you're not completely satisfied, Ionos says you can cancel your contract directly in the control panel for a full refund.
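For readers wondering about the economics, a quick back-of-envelope comparison of the quoted rental rate against the quoted purchase price works out roughly as follows. It ignores power, hosting, bandwidth and depreciation, so treat it as an illustration only:

```python
# Rough rent-vs-buy comparison using the figures in the article:
# $0.39/hour rental against an ~$11,000 outlay for comparable hardware.

hourly_rate = 0.39
purchase_price = 11_000

break_even_hours = purchase_price / hourly_rate
print(f"Break-even after {break_even_hours:,.0f} hours "
      f"(~{break_even_hours / (24 * 365):.1f} years of continuous use)")
# ~28,205 hours, roughly 3.2 years of round-the-clock use
```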

See the rest here:

Rent this $11000 supercomputer for only $0.39/hour but there's a catch - TechRadar

Will Nepo's supercomputer give him world chess title edge over Carlsen? – The Guardian

It would count as one of the more seismic shocks in modern chess history if Magnus Carlsen were to lose his world title over the next three weeks here in Dubai. Yet when his Russian opponent Ian Nepomniachtchi plays the first move of their 14-game match on Friday he will be armed with two potentially intriguing advantages.

The first is that Nepomniachtchi, or Nepo as he is widely known, holds a 4-1 record in classical chess over Carlsen, dating back to when they first met as promising 12-year-olds. The second? He also has one of Russia's fastest supercomputers, originally built for machine learning and artificial intelligence, as part of his team.

After qualifying to face Carlsen by winning the Fide candidates tournament in Yekaterinburg this year, Nepomniachtchi credited the Zhores supercomputer, based in the Skolkovo Institute of Science and Technology in Moscow, as helping him and his team evaluate tens of millions of positions per second. This week the Russian confirmed to the Guardian that he was using it again to prepare for Carlsen.

"It can't harm my chances," he said. "And this particular supercomputer, because it is a huge data centre which can be used for scientific research, is hopefully more effective than others."

The use of computers is hardly new at top-level chess. But having a machine that can calculate much faster and potentially see deeper than others can help players come up with surprise opening novelties or better evaluate positions they may face on the board.

"You're more sure that your analysis is good when you see 500 million node positions than, say, 100 million," added the 30-year-old, before downplaying how much having a supercomputer on call 24/7 might actually help. "In general all the top players have access to something similar. And it's the chess engines, such as Stockfish and Leela Chess Zero, which are the main tool in helping us prepare. Everyone has those."

Another juicy subplot to all this is that the chairman of the Skolkovo foundation is Arkady Dvorkovich, who also happens to be the president of chess's world governing body, Fide, who are organising the Carlsen v Nepomniachtchi match.

Nepomniachtchi makes for good company, and he is also happy to expand on the long history between him and Carlsen. "The first time we met was in the European under-12 championships," he says. "He played quite well, but I didn't feel like he was something spectacular. And he was from Norway, which is not a chess country, so I didn't really take that much notice. But when we played again not long after, and we finished top two at the under-12 world championships, it was clear he was a strong player."

"In general I think it makes some difference if you've played a person before and been successful," he adds. "But some of our games were played nearly 20 years ago. So while it is good the score is in my favour, it would be quite foolish to rely on this alone."

Instead Nepomniachtchi credits a change in mindset with turning him from a brilliant but erratic player into a true challenger for the crown.

"Before, I was maybe the least hard-working person out of the world's top 20," he admits. "Normally if chess players have a week or two between tournaments, they prepare for the next one. But I would be going to the football pitch three times a week or watching Marvel movies. And when the new season of Game of Thrones came out, I thought: come on, this is pretty nice! But eventually I understood that soon I was going to be 30 and I wasn't being serious, and had done nothing really special."

"At some point you have to choose if you want your life to be full of joy, and probably you're not choosing to achieve too much, or you sacrifice something and then maybe you can move forward. But it took me quite some time to take off with this new approach."

Another problem, he admits, is that sometimes he was too overconfident. "This was an issue which hounded me for years," he says. "It was like: I don't really care who I play, I am going to beat them. Sometimes I lacked respect for my opponents. But after I corrected my mindset my results became better."


Nepomniachtchi's change in mindset is also reflected in the fact he has recently lost 10kg from training camps which typically consisted of playing sports in the morning, before working on his chess for four to five hours from 3pm, followed by more exercise in the evening. "The schedule was quite boring," he says, smiling. "But it helped."

Meanwhile Carlsen also appears fit and in good form, following a recent training camp in Cadiz. Usually players spend the last couple of months before a world title encounter squirrelled away. However he has surprised observers by crushing all-comers in a series of one-minute bullet and three-minute blitz games online this month. Asked by the Guardian to explain his unusual preparation, Carlsen replied: "I would say it's a few different factors. Mostly it was because I had a cold and I couldn't really go outside much, or do anything. But I also think that any practice you can get is useful, especially in blitz."

So who will win? The general view is Carlsen is a warm favourite, but Nepomniachtchi is talented enough that if he hits a hot streak anything could happen. As Vishy Anand, who held the title between 2007 and 2013, puts it: "Nepo is the one guy who doesn't seem scared of Magnus. That is important. Because you cannot give him that respect. You have to believe you can beat him. Nepo does."

However Anand concedes that the Norwegian remains the clear favourite, with his Fide rating of 2855 being 73 points clear of his Russian opponent. "Magnus doesn't stop," he adds. "That's probably the thing that intimidates most people. And he doesn't make glaring mistakes, which means his opponents have to keep the level really high, and sustain it, to land a hit."

"That doesn't mean Magnus can't collapse sometimes. And there are certain kinds of positions that he dislikes. But it's much harder to catch him out."

More here:

Will Nepo's supercomputer give him world chess title edge over Carlsen? - The Guardian

Northern College hosts gaming tournament and hopes to attract future students with a new computer program – CTV News Northern Ontario

SUDBURY -

Northern College's Timmins Campus is doing something different to connect with the community: it's hosting a local gaming group called 'Gold Hearted Fighters' for a Super Smash Bros. tournament.

"It's a whole kind of other world, things that we don't normally do, but we're looking to expand and reach out into the community more and be a part of the places that we are," said Amanda MacLeod, a coordinator of marketing, communications and external relations with Northern College.

MacLeod said these participants are exactly the people the college wants to inform about its new dual-credential computer science program.

One of the program's coordinators and professors said graduates will receive more than a diploma from Northern College and a degree from Algoma University.

"You're also getting a number of different micro-credentials as well, and so what micro-credentials are, they're basically digital badges. So depending on the courses you complete here, you get a digital badge that you can display on social media, or if you're going to a job interview you can share that with your employer to show here's the proof, right," said Eric Lapajne.

A participant in the gaming tournament and a computer technician student at Northern College said there are bound to be others like him who want a quality education without having to uproot to get one.

"Just looking at various schools, most of them weren't as regarded or were equally as regarded as the program here, and the drive here was forty minutes and the drive to other schools was several hours, so it made a lot more sense for me to come here," said Dustin Brousseau.

Interested students can earn the new computer engineering technician diploma and a bachelor of computer science degree in just three years of full-time study.

See original here:

Northern College hosts gaming tournament and hopes to attract future students with a new computer program - CTV News Northern Ontario

Inside the C.D.C.'s Pandemic Weather Service – The New York Times

Scientists have been modeling infectious-disease outbreaks since at least the early 1900s, when the Nobel laureate Ronald Ross used mosquito-reproduction rates and parasite-incubation periods to predict the spread of malaria. In recent decades, Britain and several other European countries have managed to make forecasting a routine part of their infectious-disease control programs. So why, then, has forecasting remained an afterthought, at best, in the United States? For starters, the quality of any given model, or resulting forecast, depends heavily on the quality of data that goes into it, and in the United States, good data on infectious-disease outbreaks is hard to come by: poorly collected in the first place; not easily shared among different entities like testing sites, hospitals and health departments; and difficult for academic modelers to access or interpret. "For modeling, it's crucial to understand how the data were generated and what the strengths and weaknesses of any data set are," says Caitlin Rivers, an epidemiologist and the associate director of the C.F.A. Even simple metrics like test-positivity rates or hospitalizations can be loaded with ambiguities. The fuzzier those numbers are, and the less modelers understand about that fuzziness, the weaker their models will be.

Another fundamental problem is that the scientists who make models and the officials who use those models to make decisions are often at odds. Health officials, concerned with protecting their data, can be reluctant to share it with scientists. And scientists, who tend to work in academic centers and not government offices, often fail to factor the realities faced by health officials into their work. Misaligned incentives also prevent the two from collaborating effectively. Academia tends to favor advances in research whereas public-health officials need practical solutions to real-world problems. And they need to implement those solutions on a large scale. "There's a gap between what academics need to succeed, which is to publish, and what's needed to have real impact, which is to build systems and structures," Rosenfeld says.

These shortcomings have hampered every real-world outbreak response so far. During the H1N1 pandemic of 2009, for example, scientists struggled to communicate effectively with decision makers about their work and in many cases failed to access the data they needed to make useful projections about the virus's spread. They still built many models, but almost none of them managed to influence the response effort. Modelers faced similar hurdles with the Ebola outbreak in West Africa five years later. They managed to guide successful vaccine trials by pinpointing the times and places where cases were likely to surge. But they were not able to establish any coherent or enduring system for working with health officials. "The network that exists is very ad hoc," Rivers says. "A lot of the work that gets done is based on personal relationships. And the bridges that you build during any given crisis tend to evaporate as soon as that crisis is resolved."


Scientists and health officials have made many attempts to close these gaps. They've created several programs, collaborations and initiatives in the past two decades, each one meant to improve the science and practice of real-world outbreak modeling. How well those efforts fared depends on whom you ask: One such effort changed course after its founder retired, some ran out of funding, others still exist but are too limited in scope to tackle the challenges at hand. Marc Lipsitch, an infectious-disease epidemiologist at Harvard and the C.F.A.'s director for science, says that, nonetheless, each contributed something to the current initiative: "It's those previous efforts that helped lay the groundwork for what we are doing now."

At the pandemic's outset, for example, modelers relied on the lessons they learned from FluSight, an annual challenge in which scientists develop real-time flu forecasts that are then gathered on the C.D.C.'s website and compared with one another, to build a Covid-focused system that they called the Covid-19 Forecast Hub. By early April 2020, this new hub was publishing weekly forecasts on the C.D.C.'s website that would eventually include death counts, case counts and hospitalizations at both the state and national levels. "This was the first time modeling was formally incorporated into the agency's response at such a large scale," George, who is director for operations for the C.F.A., told me. "It was a huge deal. Instead of an informal network of individuals, you had somewhere in the realm of 30 to 50 different modeling groups that were helping with Covid in a consistent, systematic way."

But if those projections were painstaking and modest (scientists ultimately decided that any forecasts more than two weeks out were too uncertain to be useful), they were also no match for the demands of the moment. As the coronavirus epidemic turned into a pandemic, scientists of every ilk were flooded with calls. School officials and health officials, mayors and governors, corporate leaders and event organizers all wanted to know how long the pandemic would last, how it would unfold in their specific communities and what measures they should employ to contain it. "People were just freaking out, scouring the internet and calling any name they could find," Rosenfeld told me. Not all of those questions could be answered: Data was scant, and the virus was novel. There was only so much that could be modeled with confidence. But when modelers balked at these requests, others stepped into the void.

Read the rest here:

Inside the C.D.C.'s Pandemic Weather Service - The New York Times

2021 NFL Playoff Picture: Here are the projected postseason chances for all 32 teams heading into Week 12 – CBSSports.com

After 11 weeks of the regular season, all 32 teams are still alive to make the playoffs, but that's something that could be changing in Week 12.

With a loss on Thanksgiving Day, the Detroit Lions could become the first team to officially be eliminated from playoff contention this year. However, it's not that simple. Even if the Lions lose, several other things would have to work against them Sunday for their elimination to happen.

For now, every team is still mathematically alive, which brings us to this week's playoff projections. The projections here are based on data from number-cruncher Stephen Oh of SportsLine.com. Oh plugged some numbers into his SportsLine computer this week and then simulated the rest of the NFL season, and using those numbers, we were able to figure out the playoff chances for all 32 teams. We also projected the 14-team playoff field.
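SportsLine's model isn't public, but the general idea of simulating the remainder of a season to get playoff odds can be sketched in a few lines of Python. The teams, win probabilities and win cutoff below are invented for illustration and are not SportsLine data:

```python
import random

# Toy Monte Carlo season simulator: play out each remaining game using a
# hypothetical win probability, then count how often each team clears an
# assumed playoff cutoff.
remaining_schedule = {
    "Team A": [0.65, 0.55, 0.70, 0.40, 0.60, 0.50],
    "Team B": [0.35, 0.45, 0.30, 0.60, 0.40, 0.50],
}
current_wins = {"Team A": 7, "Team B": 5}
WINS_NEEDED = 10      # assumed cutoff for this toy example
SIMULATIONS = 100_000

made_playoffs = {team: 0 for team in remaining_schedule}
for _ in range(SIMULATIONS):
    for team, probs in remaining_schedule.items():
        wins = current_wins[team] + sum(random.random() < p for p in probs)
        if wins >= WINS_NEEDED:
            made_playoffs[team] += 1

for team, count in made_playoffs.items():
    print(f"{team}: {100 * count / SIMULATIONS:.1f}% playoff chance")
```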

With that in mind, let's get to this week's playoff projections. Actually, before we do that, here's a mock draft that fans of the Jaguars, Jets, Texans and Lions might want to read. Although those four teams haven't been technically eliminated yet, the computer is basically giving them a 0% chance of making the playoffs, so a mock draft might be more exciting to read than this projection if you're a fan of one of those four teams.

For everyone else, let's get to the projection.

Note: Remember, this is a projection based on how we think the rest of the regular season will play out. If you want to see the current playoff standings, be sure to click here. For a breakdown of the current playoff picture, be sure to click here.

With that in mind, let's get to the projections.

Here's a list of the playoff chances for all the other AFC teams (each team's percentage chance of getting into the playoffs is listed next to it in parentheses): Bengals (41.4%), Steelers (34.5%), Browns (22.8%), Broncos (14.8%), Raiders (10.4%), Dolphins (1.1%), Jaguars (0.0%), Jets (0.0%), Texans (0.0%).

Note: The Jets, Jaguars and Texans haven't been eliminated from playoff contention, but they have a 0% chance of making it because the computer hates them. Actually, the computer doesn't love or hate, it has no feelings, it just doesn't think there's a mathematical chance for any of those teams to make it.

Here's a list of the playoff chances for all the other NFC teams (each team's percentage chance of getting into the playoffs is listed next to it in parentheses): Eagles (41.5%), Saints (37.0%), Panthers (8.2%), Washington (7.8%), Giants (3.7%), Seahawks (3.5%), Falcons (2.9%), Bears (0.4%), Lions (0%).

AFC

(7) Colts at (2) Ravens
(6) Chiefs at (3) Bills
(5) Patriots at (4) Chargers

Bye: Titans

NFC

(7) Vikings at (2) Packers
(6) 49ers at (3) Cowboys
(5) Rams at (4) Buccaneers

Bye: Cardinals

Read the original:

2021 NFL Playoff Picture: Here are the projected postseason chances for all 32 teams heading into Week 12 - CBSSports.com

AI Future: Why The University Of Florida Added 100 AI Faculty And The 22nd Fastest Supercomputer In The World – Forbes

The University of Florida recently turned on the eighth most powerful supercomputer in higher education and the 22nd most powerful supercomputer in the world. It also added 100 new AI-focused faculty to the several hundred already engaged in AI.

Why?

It's a complete transformation of higher education, built on artificial intelligence as a core competency.

"We're doing AI in medicine, AI in drugs, AI in agriculture, AI in business," Dr. Joseph Glover, Provost and Senior VP of Academic Affairs, told me recently on the TechFirst podcast. "The College of Business just made AI a required introductory course for their entering freshmen ... we believe that this is going to be a transformational initiative for the University of Florida. We think that this is where higher education is going to inevitably go."

An NVIDIA AI chip similar to those in HiPerGator AI.

The machine is a room-scale supercomputer that draws 1.1 megawatts at full capacity. It's called HiPerGator AI, and it's built with 291,024 cores using 140 NVIDIA DGX systems and 1,120 NVIDIA A100 processors, which are optimized for AI operations. When running, it chews through calculations at 17,200 teraflops, peaking at 21,314.7 teraflops.

(For comparison, a PlayStation 4 can hit 1.84 teraflops and the Xbox Series X can do 12 teraflops.)
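Using the peak figures quoted above, the ratios work out roughly as follows. These are peak numbers measured under quite different conditions, so treat them as indicative rather than exact:

```python
# Comparing the quoted peak figures: HiPerGator AI's 21,314.7 teraflops
# against a PlayStation 4 (1.84 teraflops) and an Xbox Series X (12 teraflops).

hipergator_tflops = 21_314.7
consoles = {"PlayStation 4": 1.84, "Xbox Series X": 12.0}

for name, tflops in consoles.items():
    print(f"HiPerGator AI ~ {hipergator_tflops / tflops:,.0f}x a {name}")
# ~11,584x a PlayStation 4, ~1,776x an Xbox Series X
```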

And in supercomputing, speed matters.

The University of Florida has already used HiPerGator AI to analyze about a billion words of medical records from the past decade in its hospital system. The goal: find hidden patterns and medically valuable insights. Preliminary results are promising, says Glover.

But the plan is intentionally cross-curricular.

And it's focused on key challenges facing the United States in general and Florida specifically.

"In the field of medicine, we're looking at better medical outcomes, we're looking to bend the cost curve of medicine," says Glover. "In agriculture ... the southeast is going to end up as the nation's food basket, and so we are [investigating] that. As you mentioned in your intro, climate change is a challenge for the state of Florida in terms of sea level rise and the changing environment. We're trying to get a handle on that. All of these things involve huge amounts of data, and this is where the HiPerGator AI really excels."

Adding 100 AI-focused faculty to hundreds of other AI-using professors in an organization that graduates 10,000 students annually is a deliberate attempt to infuse artificial intelligence in everything.

Glover says that if even half of those 10,000 graduate with AI competencies, that'll make a huge difference to the economy of Florida and the nation as a whole.

The initial focus: big-picture hard-problem issues like national security. Food security. Climate change. Bringing medical costs down. Global competitiveness, especially with China, which now has more supercomputers than any other country.

Part of the plan: creating a 21st century AI-enabled workforce.

Thats one of the reasons NVIDIA helped the University of Florida build the supercomputer. NVIDIA, of course, builds and sells tools for AI.

And an AI-enabled workforce is one that needs more AI tools.

"There's opportunities to re-skill the workforce, but to be able to bring in those skilled workers is incredibly important," says Cheryl Martin, who leads higher education for the company. "The work that Florida is doing to look at this cross-discipline is ... in the retail industry, it's in the healthcare industry, it's in automotive, it's in every single industry, right? So the workforce readiness piece is extremely important, and so NVIDIA is doing a lot of work to ensure that we help build the skills across the spectrum."

The end result is that students and faculty at the University of Florida are literally able to access a world-class supercomputer for research and classwork. (Commercial enterprises can also use HiPerGator AI, for a fee.)

The system came online early this year, so we really won't know how effective it is at building AI competency and preparing tomorrow's technology and business leaders with AI skills for potentially years, as they graduate and move into jobs and careers.

But clearly, it's a valuable initiative that shows significant promise.

"I'm really pleased to say that the faculty have embraced this," says Glover. "They see this as the future. They see it as a wonderful tool and something that's going to be a great advantage to the students to have in their skills portfolio. And equally importantly, the United States federal government has identified the creation of a 21st century AI-enabled workforce as one of the nation's critical security problems, both from the point of view of literally national security, but also economic security. In order to build a 21st century AI-enabled workforce, you have to educate people at scale. And so, we believe that educating all of our students across the entire university is going to contribute significantly to growing this workforce."

Subscribe to TechFirst, or get a full transcript.

Read more here:

AI Future: Why The University Of Florida Added 100 AI Faculty And The 22nd Fastest Supercomputer In The World - Forbes

IBM’s fastest supercomputer will be used to find better ways to produce green electricity – ZDNet

The US Department of Energy awarded a total of over seven million node hours on the Summit supercomputer to 20 research teams.

Energy giant General Electric (GE) will be using one of the world's most powerful supercomputers, IBM's Summit, to run two new research projects that could boost the production of cleaner power.

Last month, the US Department of Energy (DoE), which hosts Summit in Oak Ridge National Laboratory, awarded a total of over seven million node hours on the supercomputer to 20 research teams, two of which belong to GE Research.

The Summit supercomputing system is the second most powerful in the world, behind the Fugaku supercomputer located in Japan. Built by IBM, Summit boasts system power equivalent to 70 million iPhone 11s, which scientists can leverage to run large computations such as simulating systems' behavior or solving complex physics problems.

SEE: Supercomputers are becoming another cloud service. Here's what it means

GE has now lifted the veil on the two projects that were selected by the DoE to run on Summit, and they will both address sticking points in the generation of renewable energy.

One team, led by GE researcher Jing Li, received 240,000 node hours to advance research in the field of offshore wind power. Using the Summit supercomputer, Li hopes to be able to run complex simulations to study new ways of controlling and operating offshore turbines to best optimize wind production.

In particular, Li's team will be looking at a wind phenomenon known as coastal low-level jets, which occur along many coastlines and can affect the performance and reliability of offshore wind turbines. Thanks to high-fidelity computational models, the researchers will simulate interactions between wind farms and coastal low-level jets, to inform future, more efficient designs for the farms.

The findings will also be used to guide the DoE's ExaWind project, which is designed to accelerate the US's deployment of onshore and offshore wind plants.

Doing so requires a precise understanding of the ways that natural wind phenomena interact with the built infrastructure. Simulating these interactions, however, comes at a large computational cost, due to the many factors at play. Most research projects are currently only able to predict the behavior of a small number of turbines.

The ExaWind project is aiming to generate predictive simulations of wind farms with tens of megawatt-scale wind turbines dispersed over an area of many square kilometers with complex terrain, a computation that could involve simulations with up to 100 billion grid points.

The huge compute power that has been granted to Li's team with Summit is, therefore, a promising step towards achieving the ExaWind challenge.

GE researcher Michal Osusky was awarded another 256,000 node hours on Summit for a separate research project that focuses on applying machine-learning methods to improve the design of physical machines like jet engines or power generation turbines.

Combining machine learning and simulation, Osusky's team could mimic real-world engines quickly and run virtual tests to verify designs faster than with conventional means.

"These simulations would provide unprecedented insight into what's happening in these complex machines, way beyond what is possible through today's experimental tests," said Osusky. "The hope is we can utilize a platform like this to accelerate the discovery and validation process for cleaner, more efficient engine designs that further promote our decarbonization goals."

The Summit supercomputer, with its 200 petaflops-worth of compute power, is likely to significantly push Li and Osusky's research efforts, but GE already has its eyes on even bigger systems.

The DoE is effectively investing in the next generation of supercomputing, known as exascale, which refers to systems capable of performing a quintillion calculations each second. Oak Ridge National Laboratory is currently in the process of launching the US's first exascale system, called Frontier.

Set to debut in 2021 and to open to users in 2022, Frontier is a $600 million device that will deliver a performance of 1.5 exaflops, that is, 50 times faster than today's top supercomputers. Eight research projects have already been selected to gain early access to the system. They range from simulating a Milky Way-like galaxy to studying the way that viruses enter host cells, through understanding the universal properties of turbulence.

SEE: Fastest supercomputer in the UK is ready to go: Here's what it's going do

Other laboratories across the country are also racing to launch exascale devices. Argonne National Laboratory, for example, has partnered with HPE to deliver an exascale system called Polaris.

And GE is keen to be part of the upgrade when exascale supercomputers come online. "Government agencies have collaborated with industry and academic partners to propel the computational science and engineering workforce and ecosystem from the gigascale of the 90s through terascale to today's petascale and the imminent exascale, each leap in 'scale' 1,000 times the capability of the prior," said Richard Arthur, senior director of computational methods research at GE Research.

"The marvel of sustained exponential breakthroughs in hardware and software technologies, enduring decades, shapes computational modeling into a foundational instrument for scientific insights."

Some experts, however, have previously voiced doubts that the exascale revolution is anywhere near. Erich Strohmaier, one of the authors of the well-established Top500 list, which regularly ranks the 500 most powerful supercomputers in the world, recently predicted that a supercomputer capable of achieving one exaflop should not be expected before the second half of the 2020s, a forecast that some of his colleagues described as optimistic.

Read more:

IBM's fastest supercomputer will be used to find better ways to produce green electricity - ZDNet

Explained: Supercomputer simulates what will happen to El Niño, La Niña in a warmer world; results are worrying – The Indian Express

El Niño and La Niña, the two natural climate phenomena occurring across the tropical Pacific Ocean, influence the weather conditions all over the world. While the El Niño period is characterised by warming or increased sea surface temperatures in the central and eastern tropical Pacific Ocean, a La Niña event causes the water in the eastern Pacific Ocean to be colder than usual. Together, they are called ENSO or El Niño-Southern Oscillation.

There is a growing body of research suggesting that climate change can cause extreme and more frequent El Niño and La Niña events.

What are the recent findings?

A paper published last week in Nature Climate Change reported that increasing atmospheric carbon dioxide can cause a weakening of future simulated ENSO sea surface temperature variability. The authors note that the intensity of the ENSO temperature cycle can weaken as CO2 increases.

Prof Axel Timmermann, the co-corresponding author, explained in a release: "Our research documents that unabated warming is likely to silence the world's most powerful natural climate swing which has been operating for thousands of years. We don't yet know the ecological consequences of this potential no-analog situation, but we are eager to find out." Timmermann is the director of the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea.

How did they find this?

The team used one of South Korea's fastest supercomputers, Aleph. According to the ICCP, it would take a single human 45 million years to complete the calculations that the supercomputer can perform in one second.

"Our supercomputer ran non-stop for over one year to complete a series of century-long simulations covering present-day climate and two different global warming levels. The model generated 2 quadrillion bytes of data; enough to fill up about 2,000 hard disks," said one of the authors, Dr. Sun-Seon Lee, in a release.
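Taking the quoted figures at face value, the arithmetic checks out if each disk holds roughly 1TB; the per-disk capacity is an assumption for illustration, not something stated in the release:

```python
# Sanity check on "2 quadrillion bytes ... about 2,000 hard disks".
total_bytes = 2e15           # 2 quadrillion bytes = 2 petabytes
disk_capacity_bytes = 1e12   # assume 1 TB per disk

print(f"{total_bytes / 1e15:.0f} PB of data")
print(f"~{total_bytes / disk_capacity_bytes:,.0f} one-terabyte disks")
```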

The team conducted climate model simulations to understand ENSO's response to global warming: what will happen under CO2-doubling (2xCO2) and CO2-quadrupling (4xCO2) scenarios? They noticed weakened sea-surface temperature anomalies under CO2-doubling conditions, and the weakening became robust at CO2 quadrupling.

How does this collapse happen?

The team studied the movement of atmospheric heat to decode the collapse of the ENSO system. They explain that future El Niño events will lose heat to the atmosphere more quickly because of evaporation, which cools the ocean surface. Also, in the future there will be a reduced temperature difference between the eastern and western tropical Pacific, inhibiting the development of temperature extremes during the ENSO cycle.


The team also studied tropical instability waves, a prominent feature in the equatorial Pacific. They note that there can be a weakening of these waves in the projected future, which can cause a disruption of the La Niña event.

"There is a tug-of-war between positive and negative feedback in the ENSO system, which tips over to the negative side in a warmer climate. This means future El Niño and La Niña events cannot develop their full amplitude anymore," comments ICCP alumnus Prof. Malte Stuecker, co-author of the study, in a release.

Continue reading here:

Explained: Supercomputer simulates what will happen to El Niño, La Niña in a warmer world; results are worrying - The Indian Express

NFL 100: At No. 7, Peyton Manning, the QB with the gridiron supercomputer in his head – The Athletic

Welcome to the NFL 100, The Athletic's endeavor to identify the 100 best players in football history. Every day until the season begins, we'll unveil new members of the list, with the No. 1 player to be crowned on Wednesday, Sept. 8.

During the spring of 2012, Peyton Manning was trying to work his way back from a neck injury that sidelined him for the 2011 NFL season and was several weeks into his on-field training program at Duke University in Durham, N.C.

At the time, Manning was 12 years into his NFL career and a four-time NFL MVP. But at that moment, his future hung in the balance.

The spinal fusion procedure Manning underwent the previous September was his third neck surgery in 19 months and the riskiest and most complicated of the three. The Indianapolis Colts, the only team Manning had played for since being drafted with the first pick in 1998, had made it known they planned to release him. But before Manning could find another team, he first had to regain his old form, which was not assured given the severity of his injury.

Manning was at Duke because of David Cutcliffe, the Blue Devils' head football coach. Cutcliffe knew Manning as well as anyone, having coached him for four years as the offensive coordinator at the University of Tennessee. One of the most respected quarterback coaches in the nation, Cutcliffe was a longtime friend and trusted confidant of the Manning family.

For the first two months of 2012, Cutcliffe rebuilt Manning's game from scratch. He sent him through hour after hour of fundamental drills: catching shotgun snaps, taking snaps from center, handoff drills and footwork. Day by day, throw by throw, Manning gradually started to regain his form.

See the original post:

NFL 100: At No. 7, Peyton Manning, the QB with the gridiron supercomputer in his head - The Athletic

CFD-Expert Wirth Research Turns to Verne Global to Go Carbon Zero – HPCwire

LONDON, Sept. 1, 2021 -- Verne Global, provider of sustainable data center solutions for high intensity computing, today announced that Wirth Research, an engineering, design technology and advanced Computational Fluid Dynamics (CFD) consultancy, has relocated its supercomputer to Verne Global's data center campus in Iceland. The move enables Wirth Research to analyze, optimize and verify the performance of designs for its industry customers with zero carbon cost.

With its roots in motorsports, Wirth Research was founded by Nick Wirth, former Simtek Grand Prix owner and Benetton F1 chief designer. Since 2003, the company has pioneered the use of advanced virtual engineering technologies that reduce the need for costly physical tests and the wasteful manufacture of prototypes. Wirth Research uses high resolution CFD analyses to design and develop innovative airflow solutions for a wide variety of sectors and uses. These include identifying key airflow mechanisms that minimize the airborne transport of viruses, like Covid, in public spaces, as well as controlling and targeting airflow to make supermarket refrigeration more efficient and city streets more comfortable for pedestrians through wind engineering of tall buildings.

CFD is incredibly power-intensive and requires high intensity computing environments. With its in-house high performance computing (HPC) servers nearing end-of-life, Wirth Research sought to improve the speed, performance and reliability of its compute-intensive applications, while at the same time increasing efficiency and reducing its energy footprint. After extensive consideration, Wirth Research chose to colocate its new Dell EMC PowerEdge servers powered by AMD Epyc CPUs and Nvidia Tesla T4 GPUs at Verne Global's Icelandic campus due to its first-in-class performance and on-site HPC support. Crucially, Verne Global also offers a sustainable alternative to fossil-fuelled compute power: 100% renewable hydro-electric and geothermal energy, with free cooling from Iceland's year-round lower temperatures.

Previously, Wirth Research's headquarters were tethered to where its CFD supercomputer was located, on a site with substantial energy costs and cooling requirements. By moving its high performance compute to Verne Global, and replacing its existing hardware with new hyper-efficient AMD Epyc 2 processors, Wirth Research's costs were reduced so significantly that the savings in energy usage easily justified the investment in upgraded hardware. As well, Wirth Research was able to move its headquarters to a state-of-the-art, eco-friendly office, more in line with its core values.

"At Wirth Research, the work we do is sustainability work, helping our clients implement energy-saving carbon reduction systems that aren't just planet-friendly, but also offer compelling returns on investment," said Nick Wirth, President and Technical Director, Wirth Research. "Verne Global's renewably powered data center, optimized for HPC and supported by a skilled team of engineers, is the ideal place to host our high intensity compute."

"Wirth Research's advanced engineering technologies can revolutionize industries, and Verne Global is thrilled to be a part of delivering that innovation at zero carbon cost," said Dominic Ward, CEO, Verne Global. "Verne Global was built from the ground up with sustainability, scalability and security front-of-mind, and we look forward to supporting Wirth Research's future growth."

To learn more about Verne Global, follow us on Twitter, connect with us on LinkedIn and visit us online at www.verneglobal.com

About Verne Global:

Verne Global delivers data center solutions for high intensity computing, engineered for optimal high performance compute and built upon 100% renewable energy. Our clean grid and stable climate cuts costs and energy usage, and our expert team provides on-site, around-the-clock support to maximize performance and flexibility for customer workloads.

Founded in 2012, our Icelandic data center campus powers some of the world's most innovative and demanding industries, including financial services, earth sciences, life sciences, engineering, scientific research, and AI.

About Wirth:

Founded by esteemed motorsport designer and former youngest-ever Fellow of the Royal Institution of Mechanical Engineers, Nick Wirth, Wirth Research's mission is to make life more enjoyable and more sustainable through technology. The company is able to make buildings better to live, work and shop in; to make vehicles more energy-efficient; and to do all of this while providing its clients with a compelling return on their investment.

Source: Verne Global

Go here to see the original:

CFD-Expert Wirth Research Turns to Verne Global to Go Carbon Zero - HPCwire

Bull Of The Day: AMD (AMD) – Yahoo Finance

AMD (AMD) has taken the world of advanced chip technology by storm, with revolutionary CEO Lisa Su transforming this discount semiconductor enterprise into a leading-edge innovator. Since Lisa Su took the helm in 2014, AMD shares have skyrocketed an incomprehensible 3,200% (a $1,000 investment would have yielded you $32,000 in returns).

The pandemic's digitalizing economic impact pulled forward an enormous amount of demand for AMD's innovation-driven chips, demand that will only grow from here. This semiconductor powerhouse has produced record top and bottom-line results for the past 4 consecutive quarters, blowing past analysts' estimates each time. AMD achieved record profit margins, and management raised its guidance for the remainder of 2021. Now, analysts across the board are driving up their EPS estimates, propelling AMD into a Zacks Rank #1 (Strong Buy).

The last time AMD reached a Zacks Rank #1, it shot up 67.5% in just 1.5 months (July 17th to September 1st, 2020). AMD's August consolidation looks to be presenting us with an excellent entry point as demand for next-generation chip technology continues to soar, providing AMD with pricing power and an incentive to push the boundaries of innovation.

AMD's Sights Set To The Future

AMD is already pushing the limits of possibilities with its latest patent filing, which unveiled a quantum-computing processor that would utilize teleportation. This patent addresses the stability and scalability issues that current quantum-computing frameworks have been struggling with and could revolutionize the world of computing if achieved. The technology may still be years away from commercial viability, but this patent filing illustrates AMD's focus on the 4th Industrial Revolution.

Quantum computing is a nascent space, but there is an enormous amount of capital flowing into its development due to the astronomical competitive advantage it would provide. In 2019, Google's (GOOGL) quantum computer Sycamore proved its ability to solve a complex mathematical equation 158 million times faster than the world's fastest binary supercomputer (IBM's Summit). If AMD could attain a competitive edge in the quantum-computing space, the profit potential would be boundless.


As for near-term goals, AMD is expected to release its 5nm 'Zen 4' high-performance CPU in the second quarter of 2022, which will sustain this chip designer's high-performance leadership in the space. This next-generation computer processor will be up to 40% faster than the currently available 'Zen 3,' and will almost certainly be the go-to CPU for data centers (EPYC) and consumer desktops & mobile processors (RYZEN) alike, as Intel lags behind the innovation curve.

AMD Takeover

While Intel (INTC) has seemingly fallen asleep at the wheel with its once leading CPUs, AMD was provided with the rare opportunity to jump in the driver's seat of a market that had been monopolized for half a century. Intel's inability to match Taiwan Semi's (TSM) third-party manufacturing abilities (AMD's preferred fabricator) with its own in-house operations, combined with other systemic supply chain issues, has propelled AMD at least 3 years ahead of it (on a generous scale).

Following a strongly worded letter from an activist investor group, Intel decided enough was enough and brought Pat Gelsinger on as the new CEO in February of this year. The company will be hard-pressed in this game of innovative catch-up to maintain its long-standing corporate relationships, with AMD's CPU technology showing clear performance advantages.

According to PassMark, AMD now controls 40% of the total CPU space, while Intel sits at 60%. AMD has more than doubled its market share in the past 5 years and will progressively control more in the coming years as Intel attempts to restore its leadership. TSMC's accelerating capabilities will be the backbone to AMD's future success, and I don't see Intel's in-house manufacturing catching up to TSMC anytime soon.

AMD is also a leader in the graphics processing unit (GPU) duopoly with NVIDIA (NVDA). However, it has not been as successful in competing with this revolutionary chip maker, which has been taking a growing portion of market share in this space. Still, its GPU segment gives AMD a more diversified product portfolio that provides a hedge against the volatile chip business cycle.

The Financials

AMD has demonstrated accelerating revenue growth, with its sales swelling by 99% in this past quarter, which flowed down to margin-expanded profits that were up 350% from a year prior. This chip innovator is expected to see double-digit annualized growth on both its top and bottom line for years to come.

AMD's balance sheet is a fortress, with more liquid capital than total liabilities, meaning the risk of default is effectively zero, especially when factoring in its exponentially appreciating quarterly cash flows.

AMD is a seemingly expensive stock with a forward P/E of 38.4x, far above the semiconductor industry average of 22x. However, when you factor growth into this valuation multiple (PEG), the company is trading at a discount to both the chip sector and its own 3-year average.
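For readers unfamiliar with the metric, the PEG calculation is simply the P/E multiple divided by the expected EPS growth rate. The 38.4x forward P/E comes from the article; the growth rates below are placeholder assumptions for illustration, not Zacks estimates:

```python
# Illustration of the PEG idea: forward P/E divided by expected EPS growth (%).
forward_pe = 38.4

def peg_ratio(pe: float, growth_pct: float) -> float:
    """PEG = price/earnings multiple divided by expected annual EPS growth in percent."""
    return pe / growth_pct

for growth in (20.0, 35.0, 50.0):
    print(f"Assumed {growth:.0f}% EPS growth -> PEG {peg_ratio(forward_pe, growth):.2f}")
# A PEG near or below 1.0 is conventionally read as growth-adjusted value.
```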

Final Thoughts

My bet on AMD is a bet on Lisa Su. She has been AMD's innovation catalyst, invigorating this discount chipmaker into a high-performance, high-growth market leader. I am confident that she will continue to drive this technological backbone above and beyond expectations.

17 out of 25 analysts call AMD a buy today (0 sell ratings), with recent price targets being raised as high as $150 a share (over 35% upside from here). The 4th Industrial Revolution is upon us, and it's time to start investing in it.


See more here:

Bull Of The Day: AMD (AMD) - Yahoo Finance

KIT Supercomputer Among the 15 Fastest in Europe – HPCwire

June 29, 2021 -- The new high-performance computer of the Karlsruhe Institute of Technology (KIT) is among the 15 fastest computers in Europe. The Hochleistungsrechner Karlsruhe (HoreKa for short) ranks 53rd on the biannual Top500 list of the world's fastest computers. In terms of energy efficiency, it even takes 13th place in the international supercomputer ranking.

"In science, high-performance computers crucially contribute to finding fast solutions to our most pressing challenges: this applies to energy and climate research as well as to research for sustainable mobility, and also to materials science and medicine," says KIT President Professor Holger Hanselka. "The excellent ranking of HoreKa in the current Top500 list impressively shows that we at KIT are very well equipped for these tasks with one of the most powerful and at the same time most energy-efficient supercomputers in Europe."

"Only a computer that makes it onto this list is considered a real supercomputer," explains Dr. Jennifer Buchmüller, Head of High Performance Computing at KIT's Steinbuch Centre for Computing (SCC). "In order to place HoreKa on the list, its computing power had to be measured with a special benchmark application, the so-called High Performance LINPACK," Buchmüller says. "This benchmark lets the computing units solve a very large system of mathematical equations; the time required to complete this process indicates how fast a system is."
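The principle behind the benchmark can be illustrated on a single machine: time the solution of a dense linear system and convert that into a FLOPS estimate using the roughly (2/3)n^3 operation count for LU factorisation. This toy version is our sketch, not anything KIT ran; the real HPL benchmark solves vastly larger, distributed problems across thousands of nodes:

```python
import time
import numpy as np

# Miniature LINPACK-style measurement on one machine.
n = 4000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)   # LU factorisation plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2
residual = np.max(np.abs(A @ x - b))
print(f"Solved {n}x{n} system in {elapsed:.2f} s "
      f"-> ~{flops / elapsed / 1e9:.1f} GFLOPS (max residual {residual:.1e})")
```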

Two innovative chip technologies, one high-performance system

Unlike most other supercomputers, HoreKa is a hybrid system that consists of two very different modules. HoreKa-Green comprises the part with computing accelerators based on graphics processors (GPUs), HoreKa-Blue the one with standard processors (CPUs). The accelerator chips from NVIDIA achieve extremely high performance for certain computing operations that are very important for science, such as solving systems of equations or simulating neural networks in artificial intelligence. For other operations, however, the standard processors from Intel are much better suited. The strengths of both architectures are then combined to achieve the maximum performance.

The system therefore appears twice in the June edition of the Top500 list: 53rd with 8 petaflops and a second time at 220th with 2.33 petaflops. One petaflops corresponds to a performance of one quadrillion computing operations per second, comparable to the performance of around 8,000 laptops. Overall, HoreKa can even deliver a peak performance of 17 petaflops, which is roughly equivalent to the performance of around 150,000 laptops and would theoretically mean an even higher ranking. "However, the scoring of hybrid systems like HoreKa is not provided for by the benchmark application on which the Top500 list is based," says Buchmüller.

Fast computers are an indispensable part of science

"The faster high-performance computers solve mathematical equations and process data, the more detailed and reliable the simulations are that can be produced using them," Buchmüller explains. Consequently, in many scientific disciplines, such as Earth system and climate sciences, materials research, particle physics and engineering, supercomputers have become an indispensable part of researchers' daily work.

Also world-class in energy efficiency

Supercomputers require a lot of energy, but they use it much more efficiently than conventional PCs and laptops. HoreKa also excels in energy efficiency and is currently ranked 13th on the Green500 list of the most energy-efficient supercomputers in the world. The highly energy-efficient hot-water cooling system allows the supercomputer to be cooled with minimal energy use year-round. In the colder months, the office space can also be heated using the waste heat.

Official inauguration in July

The official inauguration ceremony with the handover to the scientific communities is on Friday, July 30. An invitation to the press event will follow separately.

Top500 list: top500.org

More information: https://www.scc.kit.edu/en/services/horeka.php

About the KIT Center Information Systems Technologies (KCIST):

Being The Research University in the Helmholtz Association, KIT creates and imparts knowledge for the society and the environment. Its objective is to make significant contributions to the global challenges in the fields of energy, mobility, and information. For this, about 9,600 employees cooperate in a broad range of disciplines in natural sciences, engineering sciences, economics, and the humanities and social sciences. KIT prepares its 23,300 students for responsible tasks in society, industry, and science by offering research-based study programs. Innovation efforts at KIT build a bridge between important scientific findings and their application for the benefit of society, economic prosperity, and the preservation of our natural basis of life. KIT is one of the German universities of excellence. https://www.kcist.kit.edu/

Source: KIT

Read more here:

KIT Supercomputer Among the 15 Fastest in Europe - HPCwire

Japan’s Fugaku keeps position as world’s fastest supercomputer – Nikkei Asia

TOKYO -- The Fugaku supercomputer, developed by Fujitsu and Japan's national research institute Riken, has defended its title as the world's fastest supercomputer, beating competitors from China and the U.S.

Fugaku held the top spot on the TOP500 list by achieving a score of 442 petaflops, or quadrillions of floating point operations per second. In second place was IBM's Summit supercomputer, which scored just 148 petaflops. The ranking, compiled by an international panel of experts, is released every June and November.

Fugaku also topped three other categories, including performance in artificial intelligence and big data processing capacity.

The next-generation supercomputer is the successor to Japan's K supercomputer, which ranked No. 1 in 2011. The 130 billion yen ($1.22 billion) system became fully operational in March.

Its high computing power has made it an ideal choice for pharmaceutical development, as well as a way to analyze big data.

The Japan Automobile Manufacturers Association plans to use Fugaku to help automakers develop vehicle structures that are more resilient by using AI to study collision impacts.

In developing Fugaku, Riken made sure companies could create software easily. This month, Fujitsu Japan started studying chemicals that can be used to treat COVID-19 by using Fugaku's high simulation capabilities.

Although Fugaku came out on top again, the development of supercomputers has become a two-way race between the U.S. and China in recent years. The two countries intend to use them not only for industrial purposes but also military research, including in the development of nuclear weapons.

The U.S. is working on next-generation machines capable of exascale computing -- 1 exaflops equals 1,000 petaflops -- that will be at least twice as fast as Fugaku. Oak Ridge National Laboratory's Frontier and Argonne National Laboratory's Aurora are said to be in a race to begin operations this year.

Chinese development efforts remain secret, but there have been rumors that it is developing a successor to what was once the world's fastest supercomputer. It could launch its own exascale machine this year. The U.S. and China are also locked in a development race in quantum computers.

The supercomputer development race has become a matter of national prestige for the two countries, as it affects industrial competitiveness and national security. In April, the US Commerce Department essentially banned exports to seven Chinese supercomputing entities by adding them to its Entity List.

Read more here:

Japan's Fugaku keeps position as world's fastest supercomputer - Nikkei Asia

Atos Announces 10 New Supercomputer Entries in the TOP500 Listing – HPCwire

PARIS, June 29, 2021 -- Atos today announces that 10 new supercomputers, based on its BullSequana X high-performance infrastructure, are in the TOP500 global supercomputing ranking. This makes a total of 36 Atos supercomputers listed, with a combined peak performance of 206 petaflops, an increase of 27% in petaflops from the last TOP500 listing announced in November 2020.

Looking back over these last six months, Atos has seen a flurry of HPC successes, in particular:

Also announced: University of Edinburgh, Swansea University, Spanish State Meteorological Agency AEMET and JUWELS at Juelich Supercomputing Center, which is the fastest supercomputing system in Europe with a sustained peak performance of 44.1 HPL petaflops, and ranks #7 on the GREEN500.

About Atos

Atos is a global leader in digital transformation with 105,000 employees and annual revenue of over €11 billion. European number one in cybersecurity, cloud and high performance computing, the Group provides tailored end-to-end solutions for all industries in 71 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos operates under the brands Atos and Atos|Syntel. Atos is a SE (Societas Europaea), listed on the CAC40 Paris stock index.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space. http://www.atos.net

Source: Atos

More:

Atos Announces 10 New Supercomputer Entries in the TOP500 Listing - HPCwire

Volvo XC90 electric SUV will feature LiDAR and AI super computer – The Driven

The flagship electric SUV from Swedish automotive giant Volvo Cars will come with state-of-the-art LiDAR technology as standard, backed by an autonomous driving supercomputer.

The forthcoming all-electric successor to Volvo Cars' XC90, set to be revealed in the middle of next year, will apparently come standard with LiDAR technology developed by US-based autonomous vehicle sensor and software company Luminar.

Volvo Cars made the announcement last week, adding that Luminar's LiDAR technology will be tied into an autonomous driving computer powered by the NVIDIA DRIVE Orin system-on-a-chip.

The autonomous driving computer will itself use software developed by Volvo Cars and its in-house autonomous driving software development company Zenseact, which will enable the optional Highway Pilot for use on motorways, where allowed and verified safe depending on geographic locations and conditions.

The combination of software and technologies from Volvo Cars, Luminar and Zenseact is designed to reduce fatalities and accidents by helping to avoid collisions, while over-the-air software updates increase the safety package's efficacy over time.

"Volvo Cars is and always has been a leader in safety," said Håkan Samuelsson, Volvo's chief executive. "It will now define the next level of car safety. By having this hardware as standard, we can continuously improve safety features over the air and introduce advanced autonomous drive systems, reinforcing our leadership in safety."

Volvo Cars expects that, over time, the technology will mature and become better able to assist and improve the capabilities of a human driver in safety critical situations.

While previous generations of the technology relied largely on warning the driver of potential immediate threats, the new and future versions of the technology will become increasingly able to intervene as needed to prevent collisions.

"In our ambition to deliver ever safer cars, our long-term aim is to achieve zero collisions and avoid crashes altogether," added Henrik Green, Volvo's chief technology officer. "As we improve our safety technology continuously through updates over the air, we expect collisions to become increasingly rare and hope to save more lives."

While more detailed specifications of the technology, not to mention the all-electric XC90 itself, were lacking from Volvo's announcement, the provided images detailing the sensor packages on the XC90 are impressive.

The images demonstrate the combined sensor reach of the XC90's ultrasonics, LiDAR, radars and cameras, all of which overlap one another to provide what Volvo Cars hopes will be a complete view of the immediate world around a travelling vehicle.

"This is a watershed moment for the industry, and Luminar's most significant win towards establishing the next era of safety technology," said Austin Russell, Luminar founder and CEO.

"Going from a select highway pilot option to Luminar powering all next-generation Volvo flagship cars as standard will kick off this new safety paradigm, serving as the catalyst for what we've been calling Proactive Safety. Volvo thinks lifesaving technology shouldn't be optional, and we couldn't agree more."

On top of the sensor suite and AI super computer promised for the XC90, the car will also benefit from back-up systems for key vehicle functions, such as steering and braking, that are intended to make the car hardware-ready for future autonomous driving options.

Continued here:

Volvo XC90 electric SUV will feature LiDAR and AI super computer - The Driven

AMD Quadrupled EPYC’s Top 500 Supercomputer Share In A Year – CRN

AMD has more than quadrupled its share of the 500 fastest supercomputers in the world in a matter of one year, underlining the growing threat that rival Intel faces from AMD's EPYC CPUs.

The chipmaker's EPYC processors are now powering 49 of the world's top supercomputers, according to an updated list released Monday by Top500 during the virtual ISC high-performance computing event. That's more than double the 21 supercomputers counted using AMD CPUs in Top500's fall 2020 list and more than quadruple the 11 supercomputers that were using the chipmaker's processors as of the summer 2020 list.


AMD's ascendance has come at a cost to Intel, the historically dominant provider of CPUs in the HPC space that is now planning a comeback under new CEO Pat Gelsinger that will take years to execute. According to Top500's latest figures, Intel saw its share of the top 500 supercomputers shrink to 431 from 459 in the fall of last year, and from 470 a year ago.

"Although there wasn't much change to the Top10, that doesn't mean there weren't interesting revelations within this year's list. To begin, it would appear that there is a marked increase in the use of AMD processors," Top500 wrote in a press release.

Out of the 49 supercomputers using AMD EPYC CPUs, 29 of them are new entries and three are in the top 10, according to an analysis by CRN. This includes Perlmutter, a new supercomputer at the U.S. Department of Energy's Lawrence Berkeley National Lab, ranked No. 5 on the Top500 list. It consists of a Hewlett Packard Enterprise Cray EX235n system, which is equipped with 1,536 64-core EPYC 7763 CPUs and 6,159 of Nvidia's A100 GPUs. The cluster is expected to add 6,144 EPYC CPUs later this year through a CPU-only node expansion, AMD has previously said.
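
To give a sense of what those component counts imply, the sketch below estimates Perlmutter's CPU core count and a theoretical GPU peak. The per-GPU figures of roughly 9.7 teraflops FP64 (19.5 teraflops via FP64 tensor cores) are Nvidia's published A100 specifications, not numbers from this article, and measured HPL results are always lower than theoretical peak.

```python
# Back-of-the-envelope for Perlmutter's GPU partition as described above.
# Assumption: per-GPU FP64 throughput taken from Nvidia's public A100 specs.
cpus, cores_per_cpu = 1_536, 64
gpus = 6_159
fp64_tflops_per_gpu = 9.7
fp64_tensor_tflops_per_gpu = 19.5

total_cpu_cores = cpus * cores_per_cpu
gpu_peak_pflops = gpus * fp64_tflops_per_gpu / 1_000
gpu_tensor_peak_pflops = gpus * fp64_tensor_tflops_per_gpu / 1_000

print(total_cpu_cores)                   # 98,304 CPU cores
print(round(gpu_peak_pflops, 1))         # ~59.7 petaflops FP64 (vector)
print(round(gpu_tensor_peak_pflops, 1))  # ~120.1 petaflops FP64 (tensor core)
```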

Intel, on the other hand, saw 26 new supercomputers using its processors added to the Top500 list. The fastest supercomputer using Intel CPUs on the list is ranked at No. 7, two ranks behind AMD's fastest supercomputer, Perlmutter. There are two other Intel-based supercomputers in the top 10.

While Intel and AMD both face new threats from companies making Arm-based CPUs, only one new supercomputer on the list uses Arm: University of Tokyo's Wisteria/BDEC-01 cluster, which is powered by 7,680 of Fujitsu's 48-core A64FX CPUs. There are now a total of six supercomputers on the Top500 list with Arm-based CPUs, five of which use Fujitsu CPUs, while the other one uses Marvell's ThunderX2, which is part of a line of CPUs that has been canceled. The Fujitsu A64FX-powered Fugaku supercomputer, however, remains the fastest system on the list.

As for accelerators and co-processors, Nvidia saw its share of GPUs in the world's top supercomputers fall slightly to 138 from 141 in fall of last year. However, Nvidia can now claim that its GPUs power six of the 10 fastest supercomputers in the world, thanks to the addition of DOE's Perlmutter system. With Nvidia's InfiniBand interconnects, its total footprint in the top 10 supercomputers grows to eight.

AMD, on the other hand, continued to show no growth in GPU share, with Top500 only listing one supercomputer using the chipmaker's Instinct GPU, the same one that had been reported in last year's two lists.

Alexey Stolyar, CTO of International Computer Concepts, a Northbrook, Ill.-based server integrator, told CRN that he's not surprised that AMD's share in the top 500 supercomputers has grown so fast, especially given how its EPYC processors can outperform Intel's Xeon CPUs. (AMD recently said its new third-generation EPYC processors outperform Intel's third-generation Xeon Scalable processors on a per-core and per-socket basis while also offering the best total cost of ownership. Intel, on the other hand, has focused on advantages for a smaller group of emerging workloads.)

"When measuring raw performance, AMD just outperforms Intel," he said.

As a result, Intel has become more aggressive in discounting CPU prices to make the processors look more attractive from a price-performance perspective, according to Stolyar. AMD also engages in discounting when it comes to competitive bids, he added, though it was more pronounced when the chipmaker was getting started with its EPYC CPUs a few years ago.

"We've seen significant discounts on both sides," he said, which can make a big difference in total cost of ownership for customers.
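
Stolyar's point about discounting boils down to a simple price-performance calculation. The sketch below illustrates the arithmetic with purely hypothetical benchmark scores, list prices and discount levels; none of these numbers come from the article or from either vendor.

```python
# Illustrative price-performance comparison -- all numbers are hypothetical.
def perf_per_dollar(benchmark_score: float, list_price: float, discount: float = 0.0) -> float:
    """Benchmark score per dollar paid, after an optional fractional discount."""
    return benchmark_score / (list_price * (1.0 - discount))

# Hypothetical CPUs "A" and "B": neither the scores nor the prices are real.
a = perf_per_dollar(benchmark_score=100.0, list_price=8_000.0)
b_list = perf_per_dollar(benchmark_score=80.0, list_price=9_000.0)
b_discounted = perf_per_dollar(benchmark_score=80.0, list_price=9_000.0, discount=0.35)

print(round(a, 4))             # 0.0125 points per dollar
print(round(b_list, 4))        # ~0.0089 at list price
print(round(b_discounted, 4))  # ~0.0137 after a 35% discount -- now ahead
```

The point is simply that a large enough discount can flip which part looks better per dollar, even when raw performance does not change.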

The bigger issue, for now, is supply, Stolyar said, due to the global chip shortage that is impacting multiple semiconductor companies -- and, consequently, several industries beyond the IT sector.

"I'm ready [to not have] to deal with it. One of our advantages is we're quick with turnaround, and that definitely affects us quite a bit," he said.

Continued here:

AMD Quadrupled EPYC's Top 500 Supercomputer Share In A Year - CRN

Tesla shows off the AI supercomputer training what it hopes will one day be an actual self-driving car – The Register

In Brief If you're wondering what it takes to develop a self-driving car, know that Tesla is using a 1.8-exaFLOP AI supercomputer packed with 5,760 GPUs that train neural networks it hopes one day will power autonomous vehicles.

The machine was described by the automaker's senior director of AI, Andrej Karpathy, during an online academic computer vision conference this week. It is used to develop Tesla's super-cruise-control system Autopilot, and also what could be a fully self-driving system when finished. Tesla has been chasing the autonomous vehicle dream for years; the tech has so far proved elusive.

"This is a really incredible supercomputer," Karpathy said. "I actually believe that in terms of FLOPS, this is roughly the number five supercomputer in the world."

It should be noted that the prowess of supercomputers is typically measured at FP64 precision. It's not clear what precision Tesla's 1.8 exaFLOPS of compute is running at.

It has 720 compute nodes, each containing eight Nvidia A100 80GB GPUs, for 5,760 GPUs in total. The super also has 10PB of NVMe storage. Tesla's AI models churn through millions of ten-second clips of driving footage recorded at 36 frames per second during training. "Computer vision is the bread and butter of what we do and enables Autopilot. For that to work, you need to train a massive neural network and experiment a lot," Karpathy added. "That's why we've invested a lot into the compute."
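
The precision caveat matters. As a rough sanity check, the sketch below reproduces the 1.8-exaFLOPS headline under the assumption that it refers to the A100's roughly 312 teraflops of FP16/BF16 tensor throughput (Nvidia's published spec); the article itself does not say which precision the figure uses, and the same hardware delivers far less at the FP64 precision used for TOP500 rankings.

```python
# Rough check of the headline number under a mixed-precision assumption.
# Assumption: ~312 TFLOPS FP16/BF16 tensor throughput per A100 (Nvidia spec),
# versus ~9.7 TFLOPS of non-tensor FP64 per A100 for contrast.
gpus = 5_760
tflops_per_gpu_fp16 = 312
tflops_per_gpu_fp64 = 9.7

print(gpus * tflops_per_gpu_fp16 / 1_000_000)  # ~1.797 exaFLOPS at FP16/BF16
print(gpus * tflops_per_gpu_fp64 / 1_000_000)  # ~0.056 exaFLOPS at FP64
```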

DeepMind has teamed up with a non-profit research org to use its AI protein-folding model AlphaFold to develop drugs that tackle parasitic diseases.

Specifically, the Drugs for Neglected Diseases Initiative will use DeepMind's machine-learning software to figure out if new drugs can treat Chagas disease and Leishmaniasis. These are spread in tropical Latin American climates, and develop after people are bitten by triatomine bugs and sandflies. They can be fatal if left untreated.

"AI can be a game-changer: by predicting protein structures for previously unsolvable protein structures, AlphaFold opens new research horizons," said Ben Perry, DNDi discovery lead, according to the Beeb. "It is heartening to see powerful cutting-edge drug discovery technologies enabling work on some of the world's most neglected diseases."

Google launched a machine-learning software tool on its cloud platform that customers in the manufacturing industry can use to automatically spot damaged products or packages during manufacture.

The Visual Inspection AI is designed to process images of items passing through assembly lines, and identify defective or problematic gear. Customers will have to fine-tune the model on the goods they sell and the kind of defects they want to identify. By using the tool, clients can better discard broken items or fix issues on products before they're shipped, or at least that's the hope.

"The benefit of a dedicated solution [like Visual Inspection AI] is that it basically gives you ease of deployment and the peace of mind of being able to run it on the shop floor," Google Cloud's Dominik Wee, managing director of manufacturing and industrial, told VentureBeat.

"It doesn't have to run in the cloud," he added. "At the same time, it gives you the power of Google's AI and analytics. What we're basically trying to do is get the capability of AI at scale into the hands of manufacturers."
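
Google has not published the internals of Visual Inspection AI, but the workflow it describes -- fine-tuning an image classifier on a customer's own photos of good and defective products -- looks broadly like the generic PyTorch sketch below. The products/ directory layout, the model choice and every parameter are hypothetical illustrations, not part of Google's product or API.

```python
# Generic defect-classification fine-tuning sketch (not Google's API).
# Assumes an image folder laid out as products/{good,defective}/*.jpg -- hypothetical.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("products/", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")      # start from a pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)         # two classes: good / defective

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                          # one fine-tuning pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```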

The rest is here:

Tesla shows off the AI supercomputer training what it hopes will one day be an actual self-driving car - The Register


Intel to Launch Supercomputer CPUs and GPUs Next Year to Catch Up with AMD and Nvidia – Etnews

International supercomputing conference keynote; CPU 'Sapphire Rapids' testing underway at customer sites; Enhanced SuperFin and DDR5 memory applied for the first time; GPU 'Ponte Vecchio' to be announced next year

Intel will mass-produce next-generation central processing units (CPUs) and graphics processing units (GPUs) next year as it looks to preoccupy the high-performance computing (HPC) market. By applying several industry-first technologies, the products are intended to differentiate Intel from competitors such as AMD and Nvidia. Having declared 'IDM 2.0' and re-entered the foundry business to restore its reputation for technology leadership, Intel has expressed its ambition to take the technological lead in the HPC market, including supercomputers.

Trish Damkroger, Intel's Vice President and General Manager of the High-Performance Computing Group, said recently at the keynote preview briefing for the International Supercomputing Conference (ISC 2021): "We have provided Intel's next-generation HPC CPU, Sapphire Rapids, to our customers and have been testing them. We plan on starting production at the end of this year and ramping up mass production in the first half of next year. The GPU Ponte Vecchio is also scheduled to be announced next year, so we will release the details as the release date approaches."

Sapphire Rapids is the successor to 'Ice Lake', the 3rd-generation HPC Intel Xeon Scalable processor that Intel unveiled in April. It is the first CPU to use the 'Enhanced SuperFin' structure, which builds on the SuperFin structure of Intel's existing 10nm process to improve power efficiency and transistor switching speed. Sapphire Rapids is also the industry's first CPU to support DDR5 memory, which has a 0.1V lower operating voltage and 38% or more improved bandwidth compared with DDR4. DDR5 support in Sapphire Rapids is expected to accelerate the replacement of DDR4, the current mainstream server memory, while support for high-bandwidth memory (HBM) significantly expands the memory bandwidth available to the CPU.

"Korea will take the lead in high-bandwidth memory production," said Seung-joo Na, Managing Director of Intel Korea. "We view high-bandwidth memory support as important for Korean memory makers in relation to Sapphire Rapids." This suggests that demand for the DDR5 made by Korea's memory makers will increase along with Intel's next-generation CPU.

Intel's next-generation CPU roadmap is read as an attempt to take the lead in the rapidly growing HPC market, including the cloud. Intel is poised to target the market aggressively with new CPUs so as not to fall behind in its competition with AMD and Nvidia. The mass production of Sapphire Rapids is also a signal that Intel has successfully transitioned to a stable 10nm process. Sapphire Rapids is produced on Intel's own process, but it is expected to take time before the 7nm process Intel is pursuing reaches production. Intel added that Ponte Vecchio, its next-generation GPU, will be produced on a 7nm process but will be outsourced to an external foundry.

By Staff Reporter Kwon Dong-jun djkwon@etnews.com
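
The bandwidth claim is easy to sanity-check against standard data rates. The sketch below assumes an entry DDR4-3200 part against a DDR5-4400 part on a 64-bit channel; the article does not say which speed grades its 38% figure compares, so these are illustrative choices.

```python
# Per-channel peak bandwidth for a 64-bit (8-byte) memory channel.
# Assumption: comparing DDR4-3200 with DDR5-4400; the article names no speed grades.
def channel_bandwidth_gbs(mega_transfers_per_second: int, bus_bytes: int = 8) -> float:
    return mega_transfers_per_second * bus_bytes / 1_000  # GB/s

ddr4 = channel_bandwidth_gbs(3_200)   # 25.6 GB/s
ddr5 = channel_bandwidth_gbs(4_400)   # 35.2 GB/s
print(ddr4, ddr5, f"{(ddr5 / ddr4 - 1):.1%}")  # 25.6 35.2 37.5%
```

On those assumptions the per-channel gain works out to about 37.5%, in line with the "38% or more" quoted above.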

Link:

Intel to Launch Supercomputer CPUs and GPUs Next Year to Catch Up with AMD and Nvidia - Etnews

Fully electric Volvo XC90 Recharge teased with lidar and "AI-driven super computer" – Green Car Reports

An all-electric successor to the Volvo XC90, likely called the XC90 Recharge, will be unveiled next year, Volvo confirmed Thursday in a press release, adding that the electric SUV will feature advanced driver-assist tech.

As the automaker hinted at previously, the XC90 Recharge will get standard lidar sensors from Silicon Valley firm Luminar (which Volvo has a stake in). They'll be part of a sensor suite feeding information to an "AI-driven super computer" featuring Nvidia chips.

The sensors and computer will be combined with software developed in-house by Volvo, as well as mechanical backups for controls like steering and braking, to enable a system called Highway Pilot, Volvo announced.


Highway Pilot will enable highly-automated driving on highways, but will only be activated "when verified safe and legally allowed for individual geographic locations and conditions," Volvo said.

Volvo has hinted at a system like this before, which could be a big step up from the automaker's current Pilot Assist system, and a possible rival to General Motors' Super Cruise or Tesla's Navigate on Autopilot.

Alternatively, it could turn out like Audi's lidar-enabled Traffic Jam Pilot, which promised similar capability but was ultimately cancelled due to regulatory and liability concerns.

The integrated technology will also be able to improve over time, in part through over-the-air (OTA) software updates, Volvo claims. Eventually, it will be able to intervene in more situations to prevent collisions, according to the automaker.


Volvo plans to build the next-generation XC90 in South Carolina, with multiple platforms underneath, including a skateboard platform underpinning the EV and the new SPA2 platform for other versions.

It's likely that the internal-combustion versions of the XC90, which are expected to remain in the lineup alongside the Recharge EV, will shift entirely or almost entirely to hybrid or plug-in hybrid.

Volvo just announced a joint-venture battery factory with Sweden's Northvolt. That one won't supply the cells for the XC90 Recharge (initially, at least), but it will make them for an upcoming electric XC60 SUV.

Original post:

Fully electric Volvo XC90 Recharge teased with lidar and "AI-driven super computer" - Green Car Reports