A Super Computer has predicted exactly where Cardiff City will finish in the Championship this season – WalesOnline

A Super Computer has predicted the Championship table for the 2017-18 season... and it's not good news as far as Cardiff City are concerned.

The Bluebirds have been heavily active in the summer transfer window, with Neil Warnock signing no fewer than seven players ahead of his side's league opener against Burton at the Pirelli Stadium on Saturday.

Some bookies have already written off Cardiff's hopes of gaining promotion into the promised land of the Premier League this season, while journalists from their 23 Championship rivals also believe the Bluebirds will miss out on a place in the top six.

And talkSPORT's Super Computer - which uses a wide range of data to determine the final table - has also predicted the Bluebirds will miss out on promotion, by some distance too.
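talkSPORT does not publish how its Super Computer actually works, but forecasts like this are typically produced by simulating the season many times from team strength ratings and averaging the finishing positions. A minimal sketch of that idea in Python - the ratings, the home-advantage bonus and the logistic win model below are invented for illustration, not talkSPORT's real inputs:

```python
import random
from collections import defaultdict

# Invented strength ratings for illustration only (higher = stronger);
# a real model would fit these from results, transfers, bookmaker odds, etc.
RATINGS = {"Middlesbrough": 1.9, "Wolves": 1.8, "Cardiff City": 1.4, "Bristol City": 1.0}

def simulate_match(home, away):
    """Return (home points, away points) from a logistic model of the ratings gap."""
    gap = RATINGS[home] + 0.2 - RATINGS[away]      # 0.2 = assumed home advantage
    p_home = 1 / (1 + 10 ** (-gap))                # logistic win probability
    r = random.random()
    if r < 0.75 * p_home:
        return 3, 0                                # home win
    if r < 0.75 * p_home + 0.25:
        return 1, 1                                # flat 25% draw chance, for simplicity
    return 0, 3                                    # away win

def expected_positions(n_runs=1000):
    """Play a double round-robin n_runs times; average each club's final position."""
    clubs = list(RATINGS)
    totals = defaultdict(float)
    for _ in range(n_runs):
        points = defaultdict(int)
        for home in clubs:
            for away in clubs:
                if home != away:
                    ph, pa = simulate_match(home, away)
                    points[home] += ph
                    points[away] += pa
        for pos, club in enumerate(sorted(clubs, key=lambda c: -points[c]), start=1):
            totals[club] += pos
    return {club: totals[club] / n_runs for club in clubs}

print(expected_positions())
```

With a full 24-team ratings table, averaged positions from a run like this are exactly the kind of output a predicted final table is built from.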

Their table has Cardiff finishing in 14th place, behind the likes of Wolves, Birmingham and Nottingham Forest, who all finished below the Bluebirds last season.

It has Garry Monk's Middlesbrough down to win the league title, with big-spending Wolves also set to gain automatic promotion into England's top flight.

Aston Villa, Fulham, Norwich and Sheffield Wednesday are the teams the gadget has predicted to make up the play-off places, with Leeds, Brentford, Reading and Hull among those expected to secure top 10 finishes.

It has newly-promoted Millwall down to be relegated along with Nigel Clough's Burton and Cardiff's Severnside rivals Bristol City, who the gadget predicts will finish at the bottom of the pile.

So while we suspect Neil Warnock (and most probably the Cardiff faithful) won't be bothered in the slightest that a computer is tipping them for a bottom-half finish, the Yorkshireman will certainly relish proving all of his doubters wrong by enjoying a memorable campaign in the Welsh capital and leading the club into the Premier League next year.

1 - Middlesbrough

2 - Wolves

3 - Aston Villa

4 - Fulham

5 - Norwich

6 - Sheffield Wednesday

7 - Leeds United

8 - Brentford

9 - Reading

10 - Hull

11 - Derby

12 - Nottingham Forest

13 - Birmingham

14 - Cardiff City

15 - Sheffield United

16 - Queens Park Rangers

17 - Sunderland

18 - Preston

19 - Barnsley

20 - Ipswich

21 - Bolton

22 - Millwall

23 - Burton

24 - Bristol City


Read more here:

A Super Computer has predicted exactly where Cardiff City will finish in the Championship this season - WalesOnline

Wolves to finish above Aston Villa and Birmingham City – ‘Super Computer’ predicts final Championship table – Birmingham Mail

Wolverhampton Wanderers will finish above Aston Villa and Birmingham City in the Championship next season, according to a so-called Super Computer.

talkSPORT have simulated the Championship season to predict the outcome of what promises to be an intriguing tussle in English football's second tier.

And Wolves, who made their 11th signing of the summer yesterday in the form of Brazilian forward Leo Bonatini, have been backed to come out on top.

Wolves are predicted to finish in second spot behind Garry Monk's Middlesbrough, who are thought by many to be promotion favourites.

Nuno Espirito Santo's men just pipped Steve Bruce's Aston Villa, who finished in third spot, with Fulham, Norwich City and Sheffield Wednesday also in the play-off spots.

Harry Redknapp's Birmingham City, who are expected to be the busiest of the West Midlands' three second-tier clubs in the final month of the transfer window, finished in 13th position.

1 Middlesbrough

2 Wolves

3 Aston Villa

4 Fulham

5 Norwich City

6 Sheffield Wednesday

7 Leeds United

8 Brentford

9 Reading

10 Hull City

11 Derby County

12 Nottingham Forest

13 Birmingham City

14 Cardiff City

15 Sheffield United

16 QPR

17 Sunderland

18 Preston

19 Barnsley

20 Ipswich Town

21 Bolton Wanderers

22 Millwall

23 Burton Albion

24 Bristol City

Read the original:

Wolves to finish above Aston Villa and Birmingham City - 'Super Computer' predicts final Championship table - Birmingham Mail

Chinese Supercomputer Runs Record-Breaking Simulation of … – TOP500 News

Scientists from the Chinese Academy of Sciences have run the largest simulation of the universe on TaihuLight, the world's fastest supercomputer. The record-breaking achievement was described last week in the South China Morning Post, which reported that the supercomputer was able to simulate the early expansion of the universe using 10 trillion virtual particles.

Leading the effort was Gao Liang, chair scientist of the computational cosmology group in the National Astronomical Observatories at the Chinese Academy of Sciences, who said that TaihuLight used 10 million processor cores to accomplish the simulation. The 125-petaflop (peak) machine, which is housed at the National Supercomputing Center in Wuxi, is equipped with a total of 10,649,600 cores.

With a Linpack performance mark of 93 petaflops, TaihuLight has been ranked as the fastest supercomputer in the world since June 2016, according to the TOP500 list. The system contains more computational capacity than the next four top-ranked supercomputers on the list, combined.

TaihuLight is powered by the ShenWei SW26010 processor, a 260-core CPU developed in China specifically for HPC work. The fact that the system used this custom chip meant that Gao and his team had to write their own software, rather than relying on existing codes developed for more conventional processors.

According to the Post report, the research was made public on July 26, in an article published in Science and Technology Daily, the official newspaper of China's Ministry of Science and Technology. The computational run to perform the simulation took place in May.

Like most codes used to model the universe, the Chinese version was based on the N-body simulation, which approximates the motion of particles driven principally by gravitational forces. As more particles are simulated, the computational effort intensifies accordingly, which effectively restricts these universe-scale simulations to the very largest supercomputers. As the Post report noted:

It was only possible to simulate over 1,000 particles with the best computers in the 1970s. In recent years scientists reached the trillion-particle level on some of the world's most powerful machines such as the Titan in the US, the K computer in Japan and Tianhe-2 in Guangzhou.
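The team's actual code was written from scratch for the SW26010 and is not public; purely as an illustration of what one N-body step computes, here is a minimal direct-summation version in Python (the O(N^2) approach that production cosmology codes replace with tree or particle-mesh methods at these particle counts):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, SI units
SOFTENING = 1e-2     # keeps the force finite when two particles pass very close

def nbody_step(pos, vel, mass, dt):
    """Advance an N-body system one step by direct O(N^2) pairwise summation."""
    # diff[i, j] is the displacement vector from particle i to particle j.
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)    # particles do not attract themselves
    # a_i = G * sum_j m_j * (r_j - r_i) / |r_j - r_i|^3
    acc = G * (diff * inv_d3[..., np.newaxis] * mass[np.newaxis, :, np.newaxis]).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# A toy 256-particle system; the TaihuLight run tracked 10 trillion particles.
rng = np.random.default_rng(0)
n = 256
pos, vel = rng.standard_normal((n, 3)), np.zeros((n, 3))
mass = np.full(n, 1e20)
pos, vel = nbody_step(pos, vel, mass, dt=1.0)
```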

The TaihuLight universe simulation broke the record set in June by the 19.6-petaflop Piz Daint supercomputer in Switzerland. The Swiss model used 2 trillion particles, which were used to catalogue about 25 billion galaxies. To accomplish this, the Swiss astrophysicists executed their code for 80 hours.

In the TaihuLight effort, the simulation was maintained for just over one hour, but according to Gao, that was only because other users were waiting for the machine. During this relatively short run, the simulation advanced the model to tens of millions of years after the Big Bang. The current age of the universe is around 13.8 billion years.

"This is just a warm-up exercise," said Gao. "We still have a long way ahead to get what we want."

The rest is here:

Chinese Supercomputer Runs Record-Breaking Simulation of ... - TOP500 News

Championship 2017/18: Super Computer predicts table ahead of the new season – talkSPORT.com

The new EFL season is about to get under way and you can hear 110 regular season matches on talkSPORT and talkSPORT2.

READ MORE: talkSPORT becomes the new home of the English Football League

On Friday night, the Championship will kick off live on talkSPORT as Sunderland host Derby County, while over the opening weekend you can also hear Wolverhampton Wanderers v Middlesbrough AND Aston Villa v Hull City on Saturday, and on Sunday you can listen to Bolton v Leeds.

Everyone has their views on who will win the title, promotion to the Premier League, and who will get relegated, but what does talkSPORT's famous Super Computer make of it?

Click the right arrow above to see how we think the Championship 2017/18 table will finish.

Of course, no one knows for sure what the final table will look like, but it is fun to speculate.

talkSPORT and talkSPORT 2 have exclusive radio rights to the Sky Bet EFL Championship, League One and League Two for the next three seasons.

The talkSPORT network will be the only place to hear 110 regular season EFL matches as well as the play-off semi-finals and finals - read more here.

Read the original here:

Championship 2017/18: Super Computer predicts table ahead of the new season - talkSPORT.com

AMD reveals PetaFLOP supercomputer in a single rack – Next Big Future

Yesterday, AMD revealed Project 47, a petaFLOP supercomputer in a single rack powered by 20 AMD EPYC 7601 processors and 80 Radeon Instinct GPUs. Other hardware includes 10TB of Samsung memory and 20 Mellanox 100G cards (and 1 switch). Project 47 is capable of 1 PetaFLOP of single-precision compute performance or 2 PetaFLOPS of half-precision.

Project 47 is built around the Inventec P47. The P47 is a 2U parallel computing platform designed for graphics virtualization and machine intelligence applications. A single rack of Inventec P47 systems is all that was necessary to achieve 1 PetaFLOP, and it does so while producing 30 GigaFLOPS/Watt, which AMD claims is 25% more efficient than some other competing supercomputing platforms. At that efficiency, a one-PetaFLOP rack draws about 33,333 watts; a thousand such racks would draw about 33.3 MW and deliver an exaFLOP.
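The power figures follow directly from the quoted efficiency; a quick check of the arithmetic, using only the numbers in the article:

```python
# A 1 petaFLOPS rack at the quoted 30 gigaFLOPS per watt.
peak_flops = 1e15                      # 1 petaFLOPS, single precision
flops_per_watt = 30e9                  # 30 gigaFLOPS/watt

rack_watts = peak_flops / flops_per_watt
print(f"one rack: {rack_watts:,.0f} W")                      # ~33,333 W

racks_for_exaflop = 1e18 / peak_flops                        # an exaFLOP is 1e18 FLOPS
print(f"racks for an exaFLOP: {racks_for_exaflop:,.0f}")     # 1,000
print(f"total power: {racks_for_exaflop * rack_watts / 1e6:.1f} MW")  # ~33.3 MW
```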

Thanks to its 32-core / 64-thread EPYC processors and Radeon Vega GPUs, which feature 4,096 stream processors each, AMD also claims that the Project 47 rack has more cores/threads, compute units, I/O lanes and memory channels in use simultaneously than any other similarly configured system.

In a little over 10 years, AMD has managed to make a system that consumes 98% less power and takes up 99.93% less space compared to the first PetaFLOP supercomputer (the Roadrunner).

Go here to see the original:

AMD reveals PetaFLOP supercomputer in a single rack - Next Big Future

One of the world’s most powerful computers lands at University of Texas – CultureMap Austin

The University of Texas at Austin is now home to one of the world's most powerful computers. In late July, Texas Advanced Computing Center unveiled a new supercomputer, dubbed Stampede2, at the J.J. Pickle Research Campus.

Supercomputers are the bodybuilders of the computer world, and Stampede2 has the processing power of about 100,000 desktop computers. It is the most powerful supercomputer at any U.S. university and the 12th most powerful in the world.

"Stampede2 represents a new horizon for academic researchers in the U.S.," says Dan Stanzione, TACC's executive director, in a release. "It will serve as the workhorse for our nation's scientists and engineers, allowing them to improve our competitiveness and ensure that UT Austin remains a leader in computational research for the national open science community."

Thanks to a $30 million award from the National Science Foundation, TACC designed and constructed Stampede2 with help from Dell, Intel, and Seagate. TACC also received an additional $24 million to cover operations costs for the system.

UT will collaborate with other universities, including Clemson University, Cornell University, Indiana University, The Ohio State University, and the University of Colorado at Boulder, to tackle problems they never could have tackled before. Early research computed on Stampede2 includes tumor identification with magnetic resonance imaging at UT, real-time weather forecasting at the University of Oklahoma, and earthquake prediction at the University of California.

Stampede2 builds on the technology from its predecessor, Stampede, which was introduced in 2013. Stampede helped researchers study a wide range of topics, from the earth's mantle to gravitational waves in space.

But, as the release put it, "supercomputers live fast and retire young." Stampede was retired in 2017 after a five-year run. Stampede2 is expected to operate until 2021.

Excerpt from:

One of the world's most powerful computers lands at University of Texas - CultureMap Austin


AMD Manages To Pack A PetaFLOPs Capable Super Computer In A … – Wccftech

AMD recently unveiled something truly remarkable: a server rack with a total processing power of 1 PetaFLOPS. That's 10^15 floating point operations per second, or 2 x 10^15 half-precision FLOPS. Here's the kicker though: a decade ago, in 2007, a computer of the same power would have required roughly 6,000 square feet of floor space and thousands of processors. A decade ago, this would have been one of the most powerful supercomputers on Earth, and today, it's a server rack.

The server rack, ahem, supercomputer, named Project 47, is powered by 20x EPYC 7601 32-core processors and around 80x Radeon Instinct GPUs. It supports around 10 TB of Samsung memory and 20x Mellanox 100G cards as well as 1 switch. All of this is fitted into a server rack that is roughly the height of 1.25 Lisa Sus, with an energy efficiency of 30 GFLOPS per watt. That means the Project 47 supercomputer consumes around 33,333 watts of electricity. Project 47 will be available from Inventec and their principal distributor AMAX sometime in Q4 of this year.

Today at Capsaicin SIGGRAPH, AMD showcased what can be achieved when the world's greatest server CPU is combined with the world's greatest GPU, based on AMD's revolutionary Vega architecture. Developed by AMD in collaboration with Inventec, Project 47 is based on Inventec's P-series massively parallel computing platform, and is a rack designed to excel in a range of tasks, from graphics virtualization to machine intelligence.

Back in 2007, you would have found the same power in a supercomputer called the IBM Roadrunner. This was a supercomputer project that was once the most powerful, well, supercomputer of its time, built by AMD and IBM for the Los Alamos National Laboratory. The cluster had 696 racks spanning an area of 6,000 square feet and consumed 2,350,000 watts of electricity. The cluster consisted primarily of around 64,000 dual-core Opteron CPUs and some accelerators.

So basically, in a little over 10 years, AMD has managed to make a system that consumes 98% less power and takes up 99.93% less space. We are not yet sure how much Project 47 will cost, but we are pretty sure it will be less than the US $100 million cost of the original Roadrunner. If that isn't the epitome of modern computational advances, I don't know what is.

So how exactly did AMD manage this feat? Well, usually when talking about a decade, there are several node shrinks involved as well as architectural gains; however, it is clear from the specifications that the rockstar of Project 47 isn't the CPU, it's the GPU. While AMD has progressed from the architecture of old of 2007, the occasional node shrink excepted, the progress on the CPU front hasn't been anywhere near large enough to justify the simply ridiculous gains seen here. In fact, with 20 EPYC 7601 CPUs you are looking at a core count of just 640 cores, which simply pales in comparison to the 128,000 cores in the original Roadrunner. Since we certainly did not see an IPC increase of 20,000%, it is clear that the star of Project 47 is the Radeon Instinct GPU.

With 80 Radeon Instincts inside the server rack, you can already account for roughly 960 TFLOPS (depending on the clock speed) out of the 1,000 TFLOPS that the P47 is rated at. With 128 PCIe lanes per CPU, the EPYC processors will act as the drivers of the Radeon Instincts and won't actually handle the brunt of the load. So basically, from an all-CPU based Roadrunner, we have come to P47, which is practically an all-GPU based show. It really speaks volumes for the bonkers growth in power we've seen in the GPU department. The rapid scaling of core count, architectural gains and node shrinks have really ushered in a new era of computational power.
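That accounting is easy to reproduce from the article's own figures; a back-of-the-envelope check:

```python
# FLOPS accounting for Project 47 using the figures quoted above.
gpus = 80
tflops_per_gpu = 12.0            # "roughly 960 TFLOPs" across 80 cards (clock-dependent)
rack_rating_tflops = 1000        # the rack's 1 petaFLOPS rating

gpu_tflops = gpus * tflops_per_gpu
share = 100 * gpu_tflops / rack_rating_tflops
print(f"GPUs supply ~{gpu_tflops:.0f} TFLOPS ({share:.0f}% of the rating)")
print(f"leaving ~{rack_rating_tflops - gpu_tflops:.0f} TFLOPS for the 20 EPYC CPUs")
```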


See the article here:

AMD Manages To Pack A PetaFLOPs Capable Super Computer In A ... - Wccftech

AMD Demos Petaflop-in-a-Rack Supercomputer – TOP500 News

AMD has demonstrated a supercomputer based on its latest AMD EPYC CPUs and Radeon Instinct GPUs that can deliver one petaflop of single precision floating point performance in a single rack.

The demo was presented at AMD's Capsaicin event, which took place in conjunction with the SIGGRAPH conference, an annual get-together for the content creation community.

AMD CEO Lisa Su introduced the system, dubbed Project 47, as a platform suitable for both deep learning workloads and image rendering applications. Su said the machine is powered by 20 EPYC CPUs and 80 Radeon Instinct GPUs, and contains 10 TB of DDR4 memory.

Project 47 is a joint collaboration between AMD, Inventec, Samsung, and Mellanox. Inventec, a Taiwan-based original design manufacturer (ODM), built and integrated the system, based on its P-series 2U server platform. Samsung supplied the high bandwidth memory (HBM2) memory for the Radeon Instinct cards, as well as DDR4 main memory modules and NVMe SSD storage devices, while Mellanox contributed its EDR (100 Gbps) adapters, cabling, and switches to hook together the 20 servers.

In this case, each server contains a single EPYC 7601 CPU hooked to four Radeon Instinct MI25 GPUs. The advantage of the single-socket EPYC server is that it is able to support plenty of memory, GPU cards, and other PCIe devices, without having to resort to a second CPU installed solely to hook in more componentry. That saves not only the extra expense of the CPU, but power as well -- something AMD has touted as one of the major advantages of its EPYC design.

As is always the case for such accelerator-based architectures, the majority of the flops are supplied by the GPUs. In this case, each MI25 delivers 12.3 teraflops of single precision floating point (FP32) or 24.6 teraflops of half precision (FP16). Together they account for more than 95 percent of the system's floating point computational power.

The new Radeon Instinct products were designed primarily for deep learning work, but the demonstration played to the local SIGGRAPH crowd, running a variety of image rendering applications. At this point, Su handed the presentation off to Raja Koduri, senior vice president and chief architect of the AMD Radeon Technologies Group.

In the first demo, the rack was used as a virtualized resource for four different rendering applications launched from thin clients. Koduri explained that the Radeon GPUs included hardware virtualization support for the kind of remote execution that they were using, noting that each GPU can support up to 16 users. That meant the rack could theoretically deal with 1,280 users simultaneously. The second demo used all 80 GPUs for a single graphics application, in this case, to illustrate how photorealistic rendering could be accomplished in real time. "We're redefining high performance computing for the content creation community," said Koduri.
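The theoretical user count is simple capacity math, assuming the 16-users-per-GPU virtualization limit quoted above is the only constraint:

```python
# Concurrent-user ceiling for the rack under GPU hardware virtualization.
gpus_in_rack = 80
max_users_per_gpu = 16                    # limit quoted by Koduri
print(gpus_in_rack * max_users_per_gpu)   # 1280 simultaneous users
```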

Although the SIGGRAPH demonstration only illustrated the system's graphics prowess, AMD is hoping the machine will attract AI users looking to run their deep learning codes. Theoretically, the system could also be used for more traditional HPC work for applications that can tolerate 32 bits of precision.

One of the main advantages of the Project 47 machine is that it is able to deliver a lot of floating point horsepower within a relatively small power envelope. AMD is claiming the system delivers 30 gigaflops per watt of FP32 operations, which would put it at or near the top of the Green500 list if somehow those FP32 operations could be transformed to FP64. Alas, these latest Radeon parts have little 64-bit capability, making the comparison somewhat irrelevant. The current Green500 champ is TSUBAME 3.0, which turned in a power efficiency of 14.1 gigaflops per watt based on (FP64) Linpack.

Project 47 systems are expected to be available from Inventec and their principal distributor, AMAX, in Q4 of this year. Pricing was not announced.

Read more:

AMD Demos Petaflop-in-a-Rack Supercomputer - TOP500 News

Texas advanced computing centre launches new supercomputer – Scientific Computing World

Texas Advanced Computing Center at the University of Texas has launched a new supercomputer - Stampede2 - with help from a $30 million award from the US National Science Foundation (NSF).

The new system, the largest at any US university, will support the nation's scientists and engineers.

"Stampede2 represents a new horizon for academic researchers in the US," said Dan Stanzione, TACC's executive director. "It will serve many thousands of our nation's scientists and engineers, allowing them to improve our competitiveness and ensure that UT Austin remains a leader in computational research for the national open science community."

At the opening ceremony, representatives from TACC were joined by leaders from The University of Texas at Austin, The University of Texas System, the National Science Foundation (NSF) and industry partners Dell EMC, Intel and Seagate.

"For 16 years, the Texas Advanced Computing Center has earned its reputation for innovation and technological leadership," said Gregory L. Fenves, president of UT Austin. "It is only fitting that TACC has designed and now operates the most powerful supercomputer at any university in the US, Stampede2, enabling scientists and engineers to take on the greatest challenges facing society."

Made possible by a $30 million award from NSF, Stampede2 is the newest strategic resource for the nation's academic community and will enable thousands of researchers across the US. The supercomputer will allow researchers to answer questions that cannot be addressed through theory or experimentation alone.

"Building on the success of the initial Stampede system, the Stampede team has partnered with other institutions as well as industry to bring the latest in forward-looking computing technologies combined with deep computational and data science expertise to take on some of the most challenging science and engineering frontiers," said Irene Qualters, director of NSF's Office of Advanced Cyberinfrastructure.

Stampede2 will be among the first systems to employ new computer processor, memory, networking, and storage technology from its industry partners. Phase 1 of the system, which is currently complete, ranked as the 12th most powerful supercomputer in the world on the June Top500 list and contains 4,200 Intel Xeon Phi processor-based nodes and Intel Omni-Path Architecture. Later this year, Phase 2 will add 1,736 Intel Xeon Scalable processor-based nodes, increasing peak performance to approximately 18 petaflops. In addition, Stampede2 will later add Intel's persistent memory, based on 3D XPoint.
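Those node counts are consistent with the roughly 18-petaflop figure; a rough sanity check, where the per-node peaks are my assumptions (ballpark theoretical peaks for Xeon Phi and dual-socket Xeon Scalable nodes), not TACC's published breakdown:

```python
# Rough peak estimate for Stampede2 from the node counts in the article.
# Per-node TFLOPS values are assumed ballpark figures, not TACC's numbers.
knl_nodes, knl_tflops = 4200, 3.0      # assumed peak per Xeon Phi node
skx_nodes, skx_tflops = 1736, 3.0      # assumed peak per dual-socket Xeon Scalable node

total_pflops = (knl_nodes * knl_tflops + skx_nodes * skx_tflops) / 1000
print(f"~{total_pflops:.1f} petaflops peak")   # close to the quoted ~18
```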

"Intel and TACC have been collaborating for years to provide the high-performance computing (HPC) community the tools they need to make the scientific discoveries and create solutions to address some of society's toughest challenges," said Trish Damkroger, vice president of technical computing at Intel. "Intel's leading solution portfolio for HPC provides the efficient performance, flexible interconnect, and ease of programming to be the foundation of choice for leading supercomputing centers."

You can read the full story from Aaron Dubrow on the TACC website.

See original here:

Texas advanced computing centre launches new supercomputer - Scientific Computing World

TACC debuts $30 million supercomputer | The Daily Texan – UT The Daily Texan

UT's Texas Advanced Computing Center debuted its new $30 million supercomputer, called Stampede2, on Friday.

Stampede2 will help researchers manage and analyze large amounts of data with high-performing computing capabilities, according to a UT press release.

Stampede2 will be the most powerful computer at any academic institution in the country and will be twice as fast as the previous supercomputer, as well as more energy efficient, said Aaron Dubrow, TACC's strategic communications specialist.

"It will allow tens of thousands of researchers to do their work," Dubrow said. Among those tens of thousands are hundreds who study cancer.

The computer, funded by an award from the National Science Foundation, will also be among the first systems to employ cutting-edge computer processor, memory, networking and storage technology, according to a TACC press release.

Karen Vasquez is one of the many researchers at UT who work with the supercomputers. Vasquez and her team use them to study structures that form at hotspots for mutations in cancer.

The computers manage large amounts of data in one week that would otherwise take years, Vasquez said.

"We had to search 12 different structures, algorithms, on 40,000 cancer genomes many, many times," Vasquez said. "We couldn't have done it (without Stampede2)."

Users log in to the supercomputers remotely and can manage and analyze data just as if they were at the center, said TACC research associate William Joe Allen.

"We maintain the computers and sort of help the users if they need some software installed or if they have questions about how to run certain things or they want to do things more efficiently," Allen said. "That's our area of expertise."

TACC, one of the largest and best-equipped supercomputing centers in the country, works with medical researchers, like Vasquez, from all over.

"We provide the resources for anyone in the UT System and actually all across the country," Allen said. "We have users coming in from all 50 states and many different countries, and basically every major research institution in Texas."

Read more from the original source:

TACC debuts $30 million supercomputer | The Daily Texan - UT The Daily Texan

China’s Sunway TaihuLight supercomputer created the biggest … – Wired.co.uk

Sunway TaihuLight, the world's fastest computer

JACK DONGARRA, SUNWAY TAIHULIGHT SYSTEM REPORT

China is already building a supercomputer that's capable of 1,000,000,000,000,000,000 calculations per second and hopes a prototype will be completed this year. Despite this computing prowess, the nation is trying to do more.

The country has already stated its ambitions to ensure it is a leader in artificial intelligence by 2030. It's now created the largest digitally generated version of the Universe, a report claims.

According to the South China Morning Post, China has used the Sunway TaihuLight, the world's most powerful supercomputer, to simulate "the birth and early expansion of the Universe" using 10 trillion digital particles.

The work has reportedly been completed by the National Supercomputer Centre in Wuxi, and involved computer scientists from the Chinese Academy of Sciences in Beijing. The supercomputer used 10 million CPU cores to create the virtual universe, in what is known as an N-body simulation, which involved breaking the universe's mass down into particles.

"This is just a warm-up exercise," Gao Liang, chair scientist of the computational cosmology group at the National Astronomical Observatories said. "We just got to the point of tens of millions of years after the Big Bang. It was still a very young stage for the universe. Most galaxies were not even born".


Gao says the universe that was created is five times larger than previous simulated attempts. However, China's simulation was only sustained for an hour, whereas a European recreation of the universe, the previous biggest attempt, ran for 80 hours.

In June, academics from the University of Zurich recreated a catalogue of 25 billion virtual galaxies, which were generated from two trillion digital particles. "The challenge of this simulation was to model galaxies as small as one tenth of the Milky Way, in a volume as large as our entire observable Universe," academics from the University said.

It's hoped China will be able to expand its simulation of the universe with the development of its next supercomputer.

China is currently leading the world in supercomputers. The Sunway TaihuLight has a processing speed of 93 petaflops. At its peak, the computer can perform 93,000 trillion calculations per second. In total, 167 of the most powerful 500 computers in the world reside in China.

The US is developing a number of supercomputers that would be capable of beating the Sunway TaihuLight: a 200-petaflop machine called Summit is being developed at the Oak Ridge National Lab and is due to arrive in 2018. Japan is also heavily investing in supercomputing technology and has said it will spend 19.5 billion yen (about £139 million) on a 130-petaflop computer. However, China's Zhang Ting supercomputer is planned to be an exascale computer, capable of at least 1 quintillion (a billion billion) calculations per second.

Read the original:

China's Sunway TaihuLight supercomputer created the biggest ... - Wired.co.uk

Supercomputer Simulation of HIV Virus may Provide New Treatment Options – TrendinTech

After two years of work on the Department of Energy's Titan supercomputer, researchers successfully simulated 1.2 microseconds of an HIV capsid navigating through a human cell. The findings, which are published in Nature Communications, give new details about how the virus infects its host.

Juan R. Perilla, the study's leader and a research scientist at the University of Illinois, is optimistic that these insights will help scientists looking for HIV treatments. He said, "We are learning the details of the HIV capsid system, not just the structure but also how it changes its environment and responds to its environment."

After the simulation of the 64 million atoms involved was performed on Titan, another supercomputer, this time Blue Waters at the National Center for Supercomputing Applications at the University of Illinois, was used to analyze the data produced. Physics professor Klaus Schulten, who co-led the study, pioneered the molecular dynamics simulation used to examine the biological system, a method he called "computational microscopy."

The simulation study revealed the HIV capsid structure to be a protein cage composed of hexagon- and pentagon-shaped structures in a complex network. Each capsid has a small pore in the center and contains the RNA of the HIV virus protected inside, safe from the human body's defenses.

Never before have scientists seen the full scope of what the HIV shuttle can do. Now they know of its ability to transmit information through oscillating frequencies and its delicate ion charge, which has presented a puzzle as to how to defeat the infection cycle of HIV. However, as the simulation has also shown areas of stress on the capsid that can be exploited, researchers can use the vulnerabilities to create new treatments against HIV.


Read the original:

Supercomputer Simulation of HIV Virus may Provide New Treatment Options - TrendinTech

UT Austin’s New Supercomputer Stampede2 Storms Out of the … – UT News | The University of Texas at Austin

In 2016, the National Science Foundation (NSF) announced a $30 million award to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to acquire and deploy a new large-scale supercomputing system, Stampede 2, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. Photo courtesy of Texas Advanced Computing Center

AUSTIN, Texas The Texas Advanced Computing Center (TACC) at The University of Texas at Austin has launched Stampede2, the most powerful supercomputer at any U.S. university and one of the most powerful in the world.

"For 16 years, the Texas Advanced Computing Center has earned its reputation for innovation and technological leadership," said Gregory L. Fenves, president of UT Austin. "It is only fitting that TACC has designed and now operates the most powerful supercomputer at any university in the U.S., enabling scientists and engineers to take on the greatest challenges facing society."

Made possible by a $30 million award from the National Science Foundation, Stampede2 is the newest strategic resource for the nation's academic community and will enable researchers nationwide from all disciplines to answer questions that cannot be addressed through theory or experimentation alone and that require high-performance computing power.

Researchers will be able to use a wide range of applications, from large-scale simulations and data analyses using thousands of processors simultaneously, to smaller computations or interacting with Stampede2 through web-based community platforms.

"Stampede2 represents a new horizon for academic researchers," said Dan Stanzione, TACC's executive director. "It will serve as the workhorse for our nation's scientists and engineers, allowing them to improve our competitiveness and ensure that UT Austin remains a leader in computational research for the national open science community."

Phase 1 of the system, which is complete, ranked as the 12th most powerful supercomputer in the world on the June Top500 list. Later this summer, Phase 2 will add additional hardware and processors, giving it a peak performance of 18 petaflops, or 18 quadrillion mathematical operations per second. The system will have about the equivalent processing power of 100,000 desktop computers: one for every seat in UT Austin's Darrell K Royal-Texas Memorial Stadium.

Stampede2 will be the largest supercomputing resource available to researchers through the NSF-supported Extreme Science and Engineering Discovery Environment, which will allocate time on the supercomputer to researchers based on a competitive peer-review process.

"NSF is proud to join with UT Austin in supporting the nation's academic researchers in science and engineering with the latest in advanced computing technology and expertise," said Irene Qualters, NSF division director for advanced cyberinfrastructure. "Stampede2's capabilities will complement and significantly expand the diverse portfolio of computing resources increasingly essential to exploration at the frontiers of science and engineering."

The system continues the important service to the scientific community provided by Stampede1, also supported by NSF, which operated from 2013 to 2017 at TACC. Over the course of its existence, that system ran 8 million compute jobs in support of tens of thousands of researchers and more than 3,000 science and engineering projects.

Stampede2 will double the peak performance, memory, storage capacity and bandwidth of its predecessor, but it will occupy half as much physical space and consume half as much power. It will be integrated into TACC's ecosystem of more than 15 advanced computing systems, providing access to long-term storage, scientific visualization, machine learning and cloud computing capabilities. In addition to its massive scale, the new system will be among the first to employ the most advanced computer processor, memory, networking and storage technology from its industry partners Dell EMC, Intel and Seagate.

TACC staff members worked from January to construct Stampede2 in TACC's data center, and they deployed the system ahead of schedule. Since April, researchers have used the system to conduct large-scale scientific studies of gravitational waves, earthquakes, nanoparticles, cancer proteins and severe storms.

A number of universities will collaborate with TACC to provide cyberinfrastructure expertise and services for Stampede2. The partner institutions are Clemson University, Cornell University, Indiana University, Ohio State University and the University of Colorado.

The system comes online at a time when the use of NSF-supported research cyberinfrastructure resources is at an all-time high across all science and engineering disciplines. Since 2005, the number of active institutions using research cyberinfrastructure has doubled, the number of principal investigators has tripled, and the number of active users has quintupled.

"Stampede2 will help a growing number of scientists access computation at scale, powering discoveries that change the world," Stanzione said.

Video is available here: https://youtu.be/HoGek4lgl-M.

Link:

UT Austin's New Supercomputer Stampede2 Storms Out of the ... - UT News | The University of Texas at Austin

Chinese scientists created the largest virtual universe – Engadget

Simulations of the cosmos can help astronomers look for the most promising regions of space to investigate and could shed light on its mysterious components, like dark matter and dark energy. The Chinese team's simulation, in particular, recreated the birth and early expansion of the universe, to around tens of millions of years after the Big Bang. Unfortunately, they had to stop after they reached that point: team leader Gao Liang said the supercomputer had other clients in line that day.

SCMP says Chinese-made supercomputers typically have major weaknesses and rarely run at full capacity. The team must have found a way to maximize Sunway's powers, because the project reportedly stretched the machine to the limit without breaking it. Gao is hoping to run a simulation from the birth of the universe up until the current era -- it's now around 13.8 billion years old -- but they might wait until Sunway's successor is ready before launching their next attempt. The next-gen supercomputer will apparently be 10 times faster than Sunway and could be up and running sometime in 2019.

Read the rest here:

Chinese scientists created the largest virtual universe - Engadget

UT now home to USA’s most powerful college-operated supercomputer – FOX 29

UT's Texas Advanced Computing Center shows off its new Stampede 2 supercomputer. (Photo: CBS Austin)

Friday, the University of Texas officially dedicated a new supercomputer, the Stampede 2, the most powerful of any you'll find at a US academic institution. What this system can do in the hands of the researchers who will use it is both breathtaking and lifesaving.

Today, VIPs got to see the Stampede 2 in action. This machine has the power of 100,000 desktop PCs. It can do 18 quadrillion calculations per second; a quadrillion is a million billion, or 1,000,000,000,000,000.

And that helps speed up projects that involve a lot of data. Tommy Minyard is director of advanced computing systems at UT's Texas Advanced Computing Center. He says, "We've been working with the storm prediction teams in Oklahoma where they run simulations on our computers at night so that they can predict what the weather will be for that day."

After the Onion Creek floods in 2014, UT researchers created models that explained what happened. Civil engineering professor David Maidment explained, "As soon as you increase the watershed size you increase the flood risk."

But the faster Stampede 2 lets researchers use the most current data from rain gauges and weather stations and get their results sooner. Minyard adds, "So they can put the storm researchers, the guys that are actually chasing the storms, in the appropriate areas where they think the storms are going to arrive, where they're going to form during the day."

The National Science Foundation invested $30 million in building UT's new supercomputer, which is open to researchers of all stripes. And since they get their results quicker, they can move forward faster. Minyard explains: "It can result in innovations and new technologies that they may not have even thought about."

View post:

UT now home to USA's most powerful college-operated supercomputer - FOX 29

Scientists create the largest virtual universe using world’s fastest supercomputer – TechJuice (press release) (blog)

A team of scientists in China has created the world's largest virtual universe using the supercomputer called Sunway TaihuLight, and the scientists are calling it a warm-up exercise. The universe created by the Sunway supercomputer was five times bigger than the previous biggest simulation, run by scientists from the University of Zurich back in June this year. But this simulation ran for only an hour, whereas the previous simulation ran for around 80 hours.

Chinese scientists recreated the birth of the universe (the Big Bang) and its early expansion, to around tens of millions of years after the Big Bang, in under an hour. But they had to stop the process because there were other clients waiting in line that day. The simulation will help scientists look through the universe more deeply and find the most interesting and useful regions of space.

Sunway TaihuLight, the supercomputer used to create the simulation, is the world's fastest supercomputer. It has 10 million CPU cores, which can process a huge amount of data in the blink of an eye, and a processing power of 93 petaflops, almost three times the 33.85 petaflops of the world's second fastest supercomputer. Chinese scientists built Sunway TaihuLight last year with the aim of solving big data problems around the world.


In the future, Chinese scientists are hoping to run a simulation from the Big Bang to the present time, and they are creating an even more powerful supercomputer for it. The next supercomputer will be 10 times more powerful than Sunway TaihuLight and should be ready by the end of 2019.


See the original post here:

Scientists create the largest virtual universe using world's fastest supercomputer - TechJuice (press release) (blog)

US gets Canadian help to take on China in supercomputer race: ‘A perfect world for D-Wave’ – National Post

A new supercomputing partnership between a Canadian pioneer in quantum computers and a U.S. Department of Energy laboratory, which aims to reach exascale computing speeds within a few years, offers a glimpse of the future of ultra-fast computation, according to the scientist leading the project.

Rather than a wholesale shift from classical to quantum computing, as when internal combustion engines replaced steam power, the future of supercomputing is likely to involve hybrid strategies, with regular digital computers augmented by other, fancier kinds: quantum computers, graphics processors like the kind that run video games, and neuromorphic machines that mimic the behaviour of the human brain.

"It's a perfect world for D-Wave," said Jeff Nichols, associate laboratory director of computing and computational sciences at Oak Ridge National Laboratory in Tennessee.

"I do not believe that you'll ever replace all of traditional, classical computing with a quantum computer, nor will any of the other more exotic approaches replace classical computers," he said. "You're not going to carry around a quantum computer as your phone."

Burnaby, B.C.-based D-Wave Systems' new deal to provide quantum computing power to accelerate Oak Ridge's supercomputers also marks a key strategy in the U.S. effort to catch up to China, which has invested heavily in its push to build computers fast enough to reach the exascale, or a quintillion calculations per second. (A quintillion is a billion billion, or 1 with 18 zeroes.)

A major difference in the two countries' strategies has to do with the massive energy costs of running such a fast computer, which in the case of Oak Ridge's Titan machine is nine megawatts at its peak, at a cost of $9 million.

China has tried to start with the necessary hardware, then bring the energy usage and costs down. But Nichols said Oak Ridge is taking the opposite approach with the strategic placement of quantum accelerators to improve the efficiency of calculation.

In future, he said, they might wish to have a quantum computer on site, tightly coupled to their supercomputer, but for the moment, D-Wave's service will be provided remotely, over the internet from Canada.

The prize of an exascale computer, for both China and the U.S., would be a vastly improved ability to solve some of science's most complex problems, such as those about climate change, genetic analysis, protein folding, earthquake prediction, the performance of the electricity grid, and cosmology: problems that are too big to simply calculate by running through all the possibilities.

Many of these are what mathematicians call optimization problems, and these are what D-Wave's quantum computer is best suited to solve, as it recently did, for example, in a study for Volkswagen about how to optimize traffic flow in Beijing.

The classic example of an optimization problem is of a travelling salesman who needs to visit many towns and wants to know the shortest route.


Classical computers, the kind made with silicon chips, would just calculate each trip and choose the shortest. But in this kind of problem, the number of possibilities soon grows impossibly large. To solve it classically, you would need to be calculating forever with a computer as big as the universe. To quantum computers, however, the math and logic of optimization problems look very different. They do not compute with the strict ones and zeroes of binary code, but rather with the strange quantum properties of superposition and entanglement.
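To make the blow-up concrete, here is a minimal exact solver for the travelling-salesman example in Python; it enumerates every tour, so the work grows factorially with the number of towns, which is exactly why brute force fails:

```python
import itertools
import math

def shortest_tour(dist):
    """Exact travelling salesman: try all (n-1)! tours starting from town 0."""
    n = len(dist)
    best_len, best_tour = math.inf, None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Toy symmetric distances between 5 towns.
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
print(shortest_tour(dist))             # 24 tours: instant
print(math.factorial(19))              # tours for 20 towns: ~1.2e17, hopeless
```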

A classical computer calculates with bits, which can be set two ways: one or zero. From this basic binary system, a computer can build up to all the complexities of modern computing.

A quantum computer, however, takes advantage of the strange properties of matter at the subatomic scale. Rather than bits, it calculates with qubits, or quantum bits, which are tiny, fragile physical systems: sometimes etched into a chip of metal cooled to near absolute zero, or a gas held in place by a magnetic field, or a sliver of artificial diamond. These can be in multiple quantum states at the same time, a property known as superposition. This property allows a qubit to be either one, zero, or a little bit of both at the same time, allowing for a whole new style of logic and computation.

D-Wave's device uses a strategy known as quantum annealing to solve optimization problems not by brute calculation, but by exploiting quantum effects to find the likeliest candidates for solutions.
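D-Wave's hardware exploits genuinely quantum effects, but the classical cousin of the idea, simulated annealing, conveys the flavour: wander the solution space, accept occasional uphill moves, and cool down over time. A toy sketch (my illustration, not D-Wave's algorithm):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=20000):
    """Minimize cost() by random moves, sometimes accepting worse ones while hot."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Always accept improvements; accept worse moves with probability exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                    # gradually lower the "temperature"
    return best, best_cost

# Toy objective: a bumpy 1-D landscape full of local minima.
cost = lambda x: 0.1 * x * x + 3 * math.sin(3 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbor, x0=8.0))
```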

Using this style of computing to help a supercomputer skip unnecessary calculations helps Oak Ridge to keep its power costs down, while accelerating its performance, Nichols said.

The success of this approach is a key reason that D-Wave president Bo Ewald thinks the future of quantum computing will look different than the rapid expansion and constant improvement of classical computers since the mid-20th century.

He has a long history in top-level computation, for example at Los Alamos National Laboratory and as president of Cray Research, which once made supercomputers that filled a room, cost millions, and are now more or less matched by an off-the-shelf laptop.

"I always knew that because of Moore's Law, things were going to get faster and shrink," he said. (Moore's law says the number of transistors in a computer chip doubles every two years, and it has held true for decades.) "But in quantum, you're using more specialized materials, very cold superconducting materials in extreme vacuum, shielded from radio frequency."

It is a more finicky hardware, he said, so he is skeptical that we will all carry quantum computers in our pockets in the future, as we do now with classical computers such as iPhones.

"So I think there's a little more challenge to think we'll have portable quantum computers," he said. "I don't think we'll need them, because I think they'll be ubiquitous because of the Cloud."


See original here:

US gets Canadian help to take on China in supercomputer race: 'A perfect world for D-Wave' - National Post

UT-Austin Launches Stampede2 Most Powerful Supercomputer At Any US University (Video) – Patch.com


AUSTIN, TX - Stampede2 is the most powerful supercomputer at any U.S. university, one of the most powerful in the world, and is based at the University of Texas at Austin. School officials on Friday said the Texas Advanced Computing Center's launch of ...

Related coverage: UT launches one of the most powerful supercomputers in the world - KXAN.com

Read more here:

UT-Austin Launches Stampede2 Most Powerful Supercomputer At Any US University (Video) - Patch.com

China Recreating the Cosmos With World’s Most Powerful Supercomputer –"Will Assist the New Giant FAST Radio … – The Daily Galaxy (blog)

Chinese scientists have created the biggest virtual universe with the world's fastest computer, dwarfing Switzerland's record set only last month; it will help researchers in their efforts to unlock the secrets of the cosmos. China experts said the country was learning to take full advantage of its raw calculation power, which has outpaced other nations' in recent years, and that recreating the universe was just the first step.

By simulating the creation of the universe on Sunway or its more advanced successors, researchers will be able to single out distant areas of space for the telescope to investigate.

Gao Liang, chair scientist of the computational cosmology group in the National Astronomical Observatories, Chinese Academy of Sciences in Beijing, said they simulated the birth and early expansion of the universe using 10 trillion digital particles, doing a quadrillion calculations per second.

This project's scale was five times greater than that of the previous record, which was achieved last month by astrophysicists at the University of Zurich in Switzerland, he added.

But while the European project ran for 80 hours, the Chinese one was maintained for just over an hour. The Chinese work was carried out at the National Supercomputer Centre in Wuxi, Jiangsu, two months ago. "There were lots of calculations ... It must be exhausted," Gao said. He explained that Sunway had used a total of 10 million CPU cores, running multiple instructions on each core to increase the speed of calculation.

The simulation was disclosed to the public for the first time on Wednesday in an article by Wang Qiao, another scientist taking part in the project, for Science and Technology Daily, the official newspaper of the Ministry of Science and Technology of China.

The Sunway was stretched to its limit by the task, but it remained healthy, according to Gao. "This is just a warm-up exercise. We still have a long way ahead to get what we want," he said.

In astronomy, researchers simulate the universe by breaking down its mass into particles. These particles interact with one another through physical forces such as gravity. The more particles involved, the more precisely the scientists can replay and forecast the universe's evolution. This process can shed light on many issues, such as the nature and spread of dark energy.

The calculation, also known as N-body simulation, intensifies as the number of particles increases. It was only possible to simulate over 1,000 particles with the best computers in the 1970s. In recent years scientists reached the trillion-particle level on some of the world's most powerful machines, such as the Titan in the US, the K computer in Japan and Tianhe-2 in Guangzhou.
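The scaling behind that history is easy to make concrete: direct summation touches every pair of particles, so the work per step grows quadratically, which is why trillion-particle runs depend on approximations such as tree or particle-mesh methods (my arithmetic, not the researchers'):

```python
# Pairwise interaction counts for direct N-body summation (~N^2 / 2 per step).
for n in (1e3, 1e12, 1e13):            # 1970s scale, trillion scale, the 10-trillion run
    pairs = n * (n - 1) / 2
    print(f"N = {n:.0e}: ~{pairs:.1e} pairwise interactions per step")
```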

Gao said: "We just got to the point of tens of millions of years after the Big Bang. It was still a very young stage for the universe. Most galaxies were not even born."

China has been building its next-generation high performance computer which will be at least 10 times faster than Sunway. When the machine is finished around 2019, astronomers in China will have more calculation resources than their peers in most other countries to uncover the secrets of the universe, according to Gao.

The supercomputers will work alongside China's other large scientific facilities, including FAST, the world's largest single-dish radio telescope, which is 500 metres in diameter, in Guizhou.

The telescope, whose name stands for the Five-hundred-metre Aperture Spherical Telescope, could obtain detailed information from the distant universe, but first it would need to know where to look. By simulating the evolution of the universe on a computer, researchers can single out promising regions that may offer interesting findings and feed the coordinates to the telescope.

"Having a more sensitive telescope, we can receive weaker and more distant radio messages," Wu Xiangping, director-general of the Chinese Astronomical Society, said of the 500-meter Aperture Spherical Radio Telescope (FAST) nestled in a bowl-shaped valley between hills in the southwestern province of Guizhou "It will help us to search for intelligent life outside of the galaxy and explore the origins of the universe," he added underscoring the China's race to be the first nation to discover the existence of an advanced alien civilization.

"After 2020, the weight of new discoveries about the universe may shift to China," Gao said.

The Daily Galaxy via South China Morning Post

Image Credit: Pics About Space

More:

China Recreating the Cosmos With World's Most Powerful Supercomputer --"Will Assist the New Giant FAST Radio ... - The Daily Galaxy (blog)