NASA Saves Energy, Water with Modular Supercomputer – Energy Manager Today

The supercomputer at NASA's Ames Research Center at Moffett Field, CA, is using an innovative modular approach designed to get researchers the answers they need while reducing the energy and water these cutting-edge machines traditionally require.

Scientific Computing lays out the issue:

"All of today's modern supercomputers must be optimised in some way for energy efficiency because of the huge power consumption of large supercomputers. The Top500 is a prime example of this. Each of the top 10 systems consumes megawatts of power, with the very largest consuming in excess of 15 megawatts."

The story quotes William Thigpen, the chief of NASA's Advanced Computing Branch, as saying that supercomputers use multiple megawatts of power, with 33 percent to 50 percent of that going to cooling.

The NASA system, called Electra, is expected to save 1 million kWh and 1.3 million gallons of water annually by virtue of its modular construction: computing assets are added, and thus need to be cooled, only as necessary. The system, according to the story at Scientific Computing, is designed to work within a power usage effectiveness (PUE) range of 1.03 to 1.05. The current lead supercomputer for NASA, Pleiades, runs a PUE of about 1.3.
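
Power usage effectiveness is simply the ratio of total facility energy to the energy that reaches the computing equipment, so the quoted figures can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only; treating the entire PUE gap as cooling overhead is our assumption, not a figure from NASA.

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.3 therefore means ~30% facility overhead (mostly cooling)
# on top of the compute load; 1.03-1.05 means only 3-5% overhead.

def overhead_fraction(pue: float) -> float:
    """Fraction of IT energy spent on facility overhead such as cooling."""
    return pue - 1.0

pleiades_pue = 1.30                  # NASA's current lead system, per the article
electra_pue = 0.5 * (1.03 + 1.05)    # midpoint of Electra's quoted 1.03-1.05 range

for name, pue in [("Pleiades", pleiades_pue), ("Electra", electra_pue)]:
    print(f"{name}: PUE {pue:.2f} -> {overhead_fraction(pue):.0%} overhead")

# Illustrative only: overhead energy avoided per 1,000,000 kWh of IT load.
it_load_kwh = 1_000_000
saved_kwh = it_load_kwh * (pleiades_pue - electra_pue)
print(f"Overhead avoided per {it_load_kwh:,} kWh of IT load: {saved_kwh:,.0f} kWh")
```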

Space Daily describes Electra's flexibility. The story says that NASA is considering an expansion to 16 times its current capacity. Some of the energy benefits are indirect: Since researchers can log in remotely to utilize Electra, pressure will be taken off the supercomputers those scientists and engineers would otherwise access. Thus, the overall benefit to the environment is a bit hidden but there nonetheless.

Electra is expected to provide 280 million hours of computing time annually and currently is 39th on the U.S. TOP500 list of computer systems, according to Space Daily (Scientific Computing says Pleiades is 13th). The modular supercomputer center at Ames was built and installed by SGI/CommScope and is managed by the NASA Advanced Supercomputing Division.

Modular datacenters use the same basic approach to reduce energy use.

Read more from the original source:

NASA Saves Energy, Water with Modular Supercomputer – Energy Manager Today

Super computer predicts the Premier League results: Will your team win this weekend? – Daily Star

Super computer predicts Premier League results: Gameweek 26 Thursday, 23rd February 2017

HOW are your team going to get on in the top flight this weekend?

Who will come out on top when Leicester face Liverpool on Monday night?

The stats geeks over at Football Web Pages use a super computer algorithm to predict the percentage likelihood of every match, based on previous goals and results.
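
Football Web Pages does not publish its algorithm, but a common way to turn previous goals and results into percentage likelihoods is a Poisson goals model. The sketch below is a generic illustration of that idea, with made-up expected-goals figures; it is not the site's actual method.

```python
from math import exp, factorial

def poisson(k: int, lam: float) -> float:
    """Probability of scoring exactly k goals given an expected rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probabilities(home_rate: float, away_rate: float, max_goals: int = 10):
    """Home win / draw / away win percentages under independent Poisson scoring."""
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson(h, home_rate) * poisson(a, away_rate)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return round(home_win * 100, 1), round(draw * 100, 1), round(away_win * 100, 1)

# Hypothetical expected-goals rates for Leicester (home) v Liverpool (away),
# e.g. derived from each side's recent scoring and conceding record.
print(match_probabilities(home_rate=1.1, away_rate=1.6))
```

Feeding in rates estimated from each team's season-to-date attack and defence records is what turns a toy like this into a week-by-week predictor.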

So who will come out on top when Leicester face Liverpool on Monday night? Do Sunderland have any hope at Everton?

Read more from the original source:

Super computer predicts the Premier League results: Will your team win this weekend? – Daily Star

Supercomputer-Powered Portal Provides Data, Simulations to Geology and Engineering Community – HPCwire (blog)

Feb. 23 – As with many fields, computing is changing how geologists conduct their research. One example: the emergence of digital rock physics, where tiny fragments of rock are scanned at high resolution, their 3-D structures are reconstructed, and this data is used as the basis for virtual simulations and experiments.

Digital rock physics complements the laboratory and field work that geologists, petroleum engineers, hydrologists, environmental scientists, and others traditionally rely on. In specific cases, it provides important insights into the interaction of porous rocks and the fluids that flow through them that would be impossible to glean in the lab.

In 2015, the National Science Foundation (NSF) awarded a team of researchers from The University of Texas at Austin and the Texas Advanced Computing Center (TACC) a two-year, $600,000 grant to build the Digital Rocks Portal, where researchers can store, share, organize and analyze the structures of porous media, using the latest technologies in data management and computation.

"The project lets researchers organize and preserve images and related experimental measurements of different porous materials," said Maša Prodanović, associate professor of petroleum and geosystems engineering at The University of Texas at Austin (UT Austin). "It improves access to them for a wider geosciences and engineering community and thus enables scientific inquiry and engineering decisions founded on a data-driven basis."

The grant is a part of EarthCube, a large NSF-supported initiative that aims to create an infrastructure for all available Earth system data to make the data easily accessible and usable.

Small pores, big impacts

The small-scale material properties of rocks play a major role in their large-scale behavior, whether it is how the Earth retains water after a storm or where oil might be discovered and how best to get it out of the ground.

As an example, Prodanović points to the limestone rock above the Edwards Aquifer, which underlies central Texas and provides water for the region. Fractures occupy about five percent of the aquifer rock volume, but these fractures tend to dominate the flow of water through the rock.

"All of the rain goes through the fractures without accessing the rest of the rock. Consequently, there's a lot of flooding and the water doesn't get stored," she explained. "That's a problem in water management."

Digital rocks physicists typically perform computed tomography (CT) scans of rock samples and then reconstruct the material's internal structure using computer software. Alternatively, a branch of the field creates synthetic, virtual rocks to test theories of how porous rock structures might impact fluid flow.
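
To give a concrete sense of what a reconstructed digital rock looks like in practice, a segmented micro-CT scan is just a 3-D array of voxels labelled pore or grain, from which basic properties such as porosity fall out directly. The snippet below is a generic sketch with synthetic data, not code from the Digital Rocks Portal, and the voxel size is an assumed figure.

```python
import numpy as np

# Synthetic stand-in for a segmented micro-CT volume: 1 = pore, 0 = grain.
# A real digital-rock dataset would be loaded from a scan instead.
rng = np.random.default_rng(0)
volume = (rng.random((200, 200, 200)) < 0.15).astype(np.uint8)

porosity = volume.mean()               # pore voxels / total voxels
voxel_size_um = 1.0                    # micrometres per voxel edge (assumed)
edge_mm = volume.shape[0] * voxel_size_um / 1000.0

print(f"Sample edge length: {edge_mm:.2f} mm")
print(f"Porosity: {porosity:.1%}")
print(f"Raw volume size: {volume.nbytes / 1e6:.0f} MB")
# At 1,000^3 voxels, or with 16-bit greyscale values, the same array runs to
# gigabytes, which is why storing and sharing these datasets is a challenge.
```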

In both cases, the three-dimensional datasets that are created are quite large, frequently several gigabytes in size. This leads to significant challenges when researchers seek to store, share and analyze their data. Even when data sets are made available, they typically only live online for a matter of months before they are erased due to space issues. This impedes scientific cross-validation.

Furthermore, scientists often want to conduct studies that span multiple length scales connecting what occurs at the micrometer scale (a millionth of a meter: the size of individual pores and grains making up a rock) to the kilometer scale (the level of a petroleum reservoir, geological basin or aquifer), but cannot do so without available data.

The Digital Rocks Portal helps solve many of these problems.

James McClure, a computational scientist at Virginia Tech, uses the Digital Rocks Portal to access the data he needs to perform large-scale fluid flow simulations and to share data directly with collaborators.

"The Digital Rocks Portal is essential to share and curate experimentally-generated data, both of which are essential to allow for re-analyses and reproducibility," said McClure. "It also provides a mechanism to enable analyses that span multiple data sets, which researchers cannot perform individually."

The Portal is still young, but its creators hope that, over time, material studies at all scales can be linked together and results can be confirmed by multiple studies.

"When you have a lot of research revolving around a five-millimeter cube, how do I really say what the properties of this are on a kilometer scale?" Prodanović said. "There's a big gap in scales and bridging that gap is where we want to go."

A framework for knowledge sharing

When the research team was preparing the Portal, they visited the labs of numerous research teams to better understand the types of data researchers collected and how they naturally organized their work.

Though there was no domain-wide standard, there were enough commonalities to enable them to develop a framework that researchers could use to input their data and make it accessible to others.

"We developed a data model that ended up being quite intuitive for the end-user," said Maria Esteva, a digital archivist at TACC. "It captures features that illustrate the individual projects but also provides an organizational schema for the data."

The entire article can be found here.

Source: Aaron Dubrow, TACC

Read this article:

Supercomputer-Powered Portal Provides Data, Simulations to Geology and Engineering Community – HPCwire (blog)

Supercomputer tests ways to divert blood from aneurysm – Futurity: Research News

Engineers have used high-performance computing to examine the best way to treat an aneurysm.

To reduce blood flow into aneurysms, surgeons often insert a flow diverter (tiny tubes of woven metal, similar to stents) across the opening. The reduced blood flow into the aneurysm minimizes the risk of a rupture, researchers say.

But, if the opening, or neck, of an aneurysm is large, surgeons will sometimes overlap two diverters, to increase the density of the mesh over the opening. Another technique is to compress the diverter to increase the mesh density and block more blood flow.

A computational study published in the American Journal of Neuroradiology shows the best option is the single, compressed diverter, provided it produces a mesh denser than two overlapped diverters and covers at least half of the aneurysm opening.

"When doctors see the simulated blood flow in our models, they're able to visualize it. They see that they need to put more of the dense mesh here or there to diffuse the jets (of blood), because the jets are dangerous," says lead author Hui Meng, a mechanical engineering professor at the University at Buffalo.

Working with the university's supercomputing facility, the Center for Computational Research, Robert Damiano and Nikhil Paliwal, both PhD candidates in Meng's lab, used virtual models of three types of aneurysms, fusiform (balloons out on all sides) and medium and large saccular (balloons on one side), and applied engineering principles to model the pressure and speed of blood flowing through the vessels.

The engineers modeled three different diverter treatment methods (single non-compacted, two overlapped, and single compacted) and ran tests to determine how they would affect blood flow in and out of the aneurysm using computational fluid dynamics.

"We used equations from fluid mechanics to model the blood flow, and we used structural mechanics to model the devices," Damiano says. "We're working with partial differential equations that are complex and typically unsolvable by hand."
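
The article does not spell out the equations, but the standard starting point for modeling blood flow in a vessel is the incompressible Navier-Stokes system, written here assuming a Newtonian fluid; the study's exact formulation may differ.

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Here u is the blood velocity field, p the pressure, ρ the density and μ the dynamic viscosity; it is this coupled, nonlinear system that has no pen-and-paper solution for a realistic vessel geometry.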

These equations are converted to millions of algebraic equations and are solved using the supercomputer. The very small size of the mesh added to the need for massive computing power.

"The diverter mesh wires are 30 microns in diameter," Paliwal says. "To accurately capture the physics, we needed to have a maximum of 10 to 15 micron grid sizes. That's why it is computationally very expensive."
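
A quick back-of-the-envelope estimate shows why that grid spacing is so costly. Only the 10 to 15 micron spacing comes from the article; the few-millimetre region around the device is an assumed figure for illustration, and real meshes are unstructured and refined only near the wires, so actual counts will differ.

```python
# Rough upper bound on cell counts for a uniform grid at 10-15 micron spacing.
domain_edge_mm = 5.0                       # assumed region of interest around the device
for spacing_um in (10, 15):
    cells_per_edge = domain_edge_mm * 1000 / spacing_um
    total_cells = cells_per_edge ** 3
    print(f"{spacing_um} micron grid over a {domain_edge_mm:.0f} mm cube: "
          f"~{total_cells:.1e} cells")
# Each cell carries several unknowns (three velocity components plus pressure),
# so the discretised problem quickly reaches the "millions of algebraic
# equations" the researchers describe.
```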

The models showed that compressing a diverter produced a dense mesh that covered 57 percent of a fusiform-shaped aneurysm. That proved more effective than overlapping two diverters.

The compacted diverter was less effective in saccular aneurysms. As diverters are compressed, they become wider and bump into the sides of the vessel, so they could not be compressed enough to cover a small opening of an aneurysm. Compression was more effective in a large-necked saccular aneurysm, producing a dense mesh that covered 47 percent of the opening.

"Because a porous scaffold is needed to allow cell and tissue growth around the neck of the aneurysm, complete coverage using a solid diverter isn't the best option," Paliwal says. Further, solid diverters could risk blocking off smaller arteries.

The team next would like to look back over hundreds of previous cases, to determine how blood flow was affected by the use of diverters. The idea is to build a database so that more definitive conclusions can be drawn.

"We're going to look at and model previous cases, and hopefully we'll have a way to determine the best treatment to cause the best outcome for new aneurysm cases," Damiano says.

Source: University at Buffalo

See original here:

Supercomputer tests ways to divert blood from aneurysm – Futurity: Research News

Super computer predicts the final Prem table: Will Leicester survive? Who makes top four? – Daily Star

The predicted final Premier League table according to stats geeks’ super computer Thursday, 23rd February 2017

THERE are 13 games to go in the Premier League – but how will the table finish?

Who will make the top four? Will champions Leicester dodge the drop?

The stats geeks over at Football Web Pages use a super computer algorithm based on previous goals and results to work out the percentage likelihood of each result for the rest of the season.

Using these results, their predicted table makes for fascinating viewing. So who will make the top four? Will champions Leicester dodge the drop?

See the original post here:

Super computer predicts the final Prem table: Will Leicester survive? Who makes top four? – Daily Star

IBM super-computer will overhaul NYC 311 – New York’s PIX11 / WPIX-TV

IBM supercomputer will overhaul NYC 311
New York’s PIX11 / WPIX-TV
NYC 311 handled nearly 36 million reports in 2016. The first overhaul of the system is now in the works and it should create a smarter system. Over the next 18 months, IBM will install networks and systems. It includes "Watson," which is the company's …

Originally posted here:

IBM super-computer will overhaul NYC 311 – New York’s PIX11 / WPIX-TV

Wyoming Starts its Largest Climate Change Supercomputer, Cheyenne – The Green Optimistic (blog)

In Cheyenne, Wyoming, a supercomputer has begun extensive climate-change research despite doubts about global warming in the state. Now scientists are concerned that such research might lose funding under the Trump administration.

The $30 million supercomputer is federally funded and began operating just a few weeks ago, modeling air currents at wind farms and predicting the weather months in advance.

Cheyenne, which replaces the Yellowstone supercomputer, is the 20th fastest supercomputer in the world. It makes 5.34 quadrillion calculations per second, roughly 240,000 times faster than a brand-new laptop.

Although the state has supported it, Cheyenne is a source of unease for global warming doubters, and it is not the only one. In 2012, the fossil fuel industry asked the University of Wyoming to remove an artwork that raised awareness of climate change, and the state has also debated whether K-12 students should be taught about climate change.

Gov. Matt Mead, a self-described climate change skeptic, supports the supercomputer as a way to advance Wyoming's technology sector. Scientists nonetheless fear that Trump might cut funding for such projects, a serious risk for the supercomputer, since 70% of its funding comes from the National Science Foundation. In response, 800 U.S. scientists, including some from the University of Wyoming, signed a letter urging Trump to take climate change seriously.

The supercomputer's role in predicting weather and analyzing climate change is significant. Regarding Cheyenne's importance, Rich Loft, a supercomputing specialist at the National Center for Atmospheric Research, said:

"We believe that doing better predictions of those things have apolitical benefits: saving lives and saving money, and improving outcomes for businesses and farmers."

Supplying approximately 40% of the U.S.'s coal, Wyoming depends heavily on its coal, oil, and natural gas resources. Accordingly, the state has put $15 million toward studying carbon capture at a power plant.

Nevertheless, the supercomputer consumes 1.5 megawatts, the equivalent of powering 75 homes. Some of its electricity, though, comes from a wind farm 7 miles away.

Whatever happens, the scientists in Wyoming aim to use the supercomputer to analyze climate change in greater depth.

[via AP]

See original here:

Wyoming Starts its Largest Climate Change Supercomputer, Cheyenne – The Green Optimistic (blog)

China’s new supercomputer to be 10 times faster – The Hindu

China has started to build a new-generation supercomputer that is expected to be 10 times faster than the current world champion, a media report said.

This year, China is aiming for breakthroughs in high-performance processors and other key technologies to build the world's first prototype exascale supercomputer, the Tianhe-3, said Meng Xiangfei, the director of application at the National Super Computer Tianjin Centre, on Monday.

The prototype is expected to be completed by 2018, the China Daily reported.

"Its computing power is on the next level, cementing China as the world leader in supercomputer hardware," Meng said.

It would be available for public use and "help us tackle some of the world's toughest scientific challenges with greater speed, precision and scope", he added.

Tianhe-3 will be made entirely in China, from processors to operating system. It will be stationed in Tianjin and fully operational by 2020, earlier than the US plan for its exascale supercomputer, he said.

Tianhe-1, China's first quadrillion-level supercomputer, developed in 2009, is now working at full capacity, undertaking more than 1,400 assignments each day, solving problems "from stars to cells".

The exascale supercomputer will be able to analyse smog distribution on a national level, while current models can only handle a district, the daily said.

Tianhe-3 also could simulate earthquakes and epidemic outbreaks in more detail, allowing swifter and more effective government responses, Meng said.

The new machine also will be able to analyse gene sequence and protein structures in unprecedented scale and speed. That may lead to new discoveries and more potent medicine, he said. (IANS)

"Exascale" means it will be capable of making a quintillion (1 followed by 18 zeros) calculations per second. That is at least 10 times faster than the world's current speed champ, the Sunway TaihuLight, China's first supercomputer to use domestically designed processors. That computer has a peak speed of 125 quadrillion (1 followed by 15 zeros) calculations per second.

Read more here:

China’s new supercomputer to be 10 times faster – The Hindu

Video: An Overview of the Blue Waters Supercomputer at NCSA – insideHPC

Blue Waters is one of the most powerful supercomputers in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.

Blue Waters, built from the latest technologies from Cray, Inc., uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. If you could multiply two numbers together every second, it would take you millions of years to do what Blue Waters does each second. Blue Waters also has:

If you are interested in learning more, the Blue Waters team and the International HPC Training Consortium are staging monthly Blue Waters Webinars on visualization, workflows, and other topics.

Download the Slides (PDF)

Sign up for our insideHPC Newsletter

See the article here:

Video: An Overview of the Blue Waters Supercomputer at NCSA – insideHPC

Google Rolls Out GPU Cloud Service – TOP500 News

The largest Internet company on the planet has made GPU computing available in its public cloud. Google announced this week that it has added the NVIDIA Tesla K80 to its cloud offering, with more graphics processor options on the way. The search giant follows Amazon, Microsoft and others into the GPU rental business.

According to a blog posted Tuesday, a user can attach up to four K80 boards, each of which houses two Kepler-generation GK210 GPUs and a total of 24GB of GDDR5 memory. The K80 delivers 2.9 teraflops of double precision performance or 8.73 teraflops of single precision performance, the latter of which is the more relevant metric for deep learning applications. Since we're talking about a utility computing environment here, a user may choose to rent just a single GPU (half a K80 board) for their application.

The initial service is mainly aimed at AI customers, but other HPC users should take note as well. Although Google has singled out deep learning as a key application category, the company is also targeting other high performance computing applications, including computational chemistry, seismic analysis, fluid dynamics, molecular modeling, genomics, computational finance, physics simulations, high performance data analysis, video rendering, and visualization.

Google's interest in positioning its GPU offering toward deep learning is partially the result of the in-house expertise and software the company has built in this area over the last several years. The new cloud-based GPU instances have been integrated with Google's Cloud Machine Learning (Cloud ML), a set of tools for building and managing deep learning codes. Cloud ML uses the TensorFlow deep learning framework, another Google invention, which is now maintained as an open source project. Cloud ML helps users employ multiple GPUs in a distributed manner so that the applications can be scaled up, the idea being to speed execution.

The Tesla K80 instance is initially available as a public beta release in the Eastern US, Eastern Asia and Western Europe. Initial pricing is $0.70 per GPU/hour in the US, and $0.77 elsewhere. However, that doesn't include any host processors or memory. Depending on what you want, that can add as little as $0.05 per hour (for one core and 3.75 GB of memory), all the way up to more than $2 per hour (for 32 cores and 208 GB of memory). For a more reasonable configuration, say four host cores and 15 GB of memory, an additional $0.20 per hour would be charged.

That would make it roughly equivalent to the GPU instance pricing on Amazon EC2 and Microsoft Azure, which include a handful of CPU cores and memory by default. Both of those companies, which announced GPU instances for their respective clouds in Q4 2016, have set their pricing at $0.90 per GPU/hour. For users willing to make a three-year commitment, Amazon will cut the cost to $0.425 per GPU/hour via its reserved instance pricing.

IBM's SoftLayer cloud also has a number of GPU options, but they rent out complete servers rather than individual graphics processors. A server with a dual-GPU Tesla K80, two eight-core Intel Xeon CPUs, 128 GB of RAM, and a couple of 800GB SSDs will cost $5.30/hour. Other K80 server configurations are available for longer terms, starting at $1,359/month.

At this point, HPC cloud specialist Nimbix has what is probably the best pricing for renting GPU cycles. They're offering a K80-equipped server, that is, two GPUs, with four host cores and 32 GB of main memory for $1.06/hour. That's substantially less expensive than any of the other cloud providers mentioned, assuming your application can utilize more than a single GPU. Nimbix is also the only cloud provider that currently offers a Tesla P100 server configuration, although that will cost you $4.95 per hour.
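
Pulling the per-GPU-hour figures quoted above into one place makes the comparison easier to see. The numbers below come straight from the article; the Google total assumes the four-core, 15 GB host configuration mentioned, and the SoftLayer and Nimbix servers are normalised by their two K80 GPUs.

```python
# Effective cost per K80 GPU-hour, using only figures quoted in the article.
offerings = {
    "Google (GPU $0.70 + ~$0.20 host)":           0.70 + 0.20,
    "Amazon EC2 / Microsoft Azure (on demand)":   0.90,
    "Amazon EC2 (3-year reserved)":               0.425,
    "IBM SoftLayer ($5.30/hr server, 2 GPUs)":    5.30 / 2,
    "Nimbix ($1.06/hr server, 2 GPUs)":           1.06 / 2,
}

for name, cost in sorted(offerings.items(), key=lambda kv: kv[1]):
    print(f"{name:45s} ${cost:.3f} per GPU-hour")
```

The Nimbix figure, of course, only works out that way if your job can keep both GPUs on the board busy.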

Even though the initial GPU offering from Google is confined to the Tesla K80 board, the company is promising that NVIDIA Tesla P100 and AMD FirePro configurations are coming soon. The specific AMD device is likely to be the FirePro S9300 x2, a dual-GPU board that offers up to 13.9 teraflops of single precision performance. When Google previewed its accelerator rollout last November, it implied the FirePro S9300 x2 would be aimed at cloud customers interested in GPU-based remote workstations. The P100 is NVIDIA's flagship Tesla GPU, delivering 5.3 or 10.6 teraflops of double or single precision performance, respectively.

At this point, Google is in third place in the fast-growing public cloud space, trailing Amazon and Microsoft, in that order. Adding a GPU option is not likely to change that, but it does illustrate that graphics processor-based acceleration is continuing to spread across the IT datacenter landscape. Whereas once GPU acceleration was confined to HPC, with the advent of machine learning it quickly became standard equipment for hyperscale web companies involved in training neural networks. Now that more enterprise customers are looking to mine their own data for machine learning purposes, the GPU is getting additional attention. And for traditional HPC, many of the more popular software packages have already been ported to GPUs.

This all might be good news for Google, but it's even better news for NVIDIA, and to a lesser extent AMD, which still stands to benefit from the GPU computing boom despite the company's less cohesive strategy. NVIDIA just announced record revenue of $6.9 billion for fiscal 2017, driven, in part, by the Tesla datacenter business. That can only get better as GPU availability in the cloud becomes more widespread.

Follow this link:

Google Rolls Out GPU Cloud Service – TOP500 News

Tianhe-3: China says its world-first exascale supercomputer will be ready by 2020 – Deutsche Welle

Fast isn’t the word. If China’s Tianhe-3 supercomputer manages to hit the exascale mark, it will handle one quintillion calculations per second.

NB: 1 quintillion = 1,000,000,000,000,000,000 (yep, that's 18 zeros)

Meng Xiangfei, a director at the National Supercomputer Center at Tianjin, told the China Daily newspaper that his institute aims to have a prototype of its Tianhe-3 ready by 2018. For that they will need breakthroughs in high-performance processors. But Meng is confident.

If they succeed, Tianhe-3 will be 10 times faster than the current fastest supercomputer in the world, the Sunway TaihuLight.

The Sunway runs at 93 petaFLOPS, with a reported peak speed of 125 quadrillion calculations per second.

1 quadrillion = 1,000,000,000,000,000 (15 zeros)
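
Those two definitions make the headline ratio easy to check: an exaFLOPS machine is about eight times the TaihuLight's quoted peak and a little over ten times its 93-petaFLOPS sustained figure, which is where the "10 times faster" claim comes from.

```python
exa = 1e18                      # 1 exaFLOPS target for Tianhe-3
taihulight_peak = 125e15        # 125 quadrillion calculations per second (peak)
taihulight_sustained = 93e15    # 93 petaFLOPS quoted above

print(f"vs TaihuLight peak:      {exa / taihulight_peak:.1f}x")       # ~8x
print(f"vs TaihuLight sustained: {exa / taihulight_sustained:.1f}x")  # ~10.8x
```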

“Its computing power is on the next level,” Meng told the newspaper. “It will help us tackle some of the world’s toughest scientific challenges with greater speed, precision and scope.”

The Tianhe-3 will be measured in exaFLOPS.

Its sibling, the Tianhe-2, runs at 34 petaFLOPS, while the USA's next best, Titan, creaks in at 18 petaFLOPS.

If the Tianhe-3 breaks the exa-barrier, its processing speed will leave the rest for dead – which is probably a good thing as supercomputers don't have the longest life-expectancy.

Super, but compared to what?

How can we even begin to imagine the Tianhe-3's processing speeds?

Well, one of the world’s first computers (as we know them) was the Zuse Z3. It was a programmable, digital computer. Based on the same Boolean theory that gave us the zeros and ones of modern computing, the Z3 was the first solid implementation of so-called “flip-flops” and what became “floating point” arithmetic.

A computer’s processing speed is measured (in part) by the number of floating points it can handle per second – and that’s why we refer to a FLOP or FLOPS.

In 1941, the Z3’s average single calculation speeds were 0.8 seconds for addition and 3 seconds for multiplication.

Fast-forward 70 years or so and the average smartphone will perform addition and multiplication almost before we've finished entering the numbers. Imagine that: predictive math!

Smartphone speeds tend to be measured in gigaflops (1 gigaflop = 1,000,000,000 calculations per second), but it's hard to get a good read on the latest models as the manufacturers are so competitive and, as a result, secretive. It is said, however, that Apple's A-series chips, whose graphics technology comes from Imagination Technologies, are years ahead of Qualcomm's Snapdragon chips, which Samsung and Google use in their phones.

Gaming consoles are a lot faster than smartphones, but then again nothing compared to a supercomputer. It would take more than 18,000 Playstation 4s to match the Tianhe-2 – which, to remind us, is half as fast as China’s Sunway supercomputer, and that is 10 times slower than the Tianhe-3 will be.

Like I said, fast just isn’t the word. But, then, the Tianhe-3 won’t be a toy. Chinese scientists hope to use it to analyze smog distribution, gene sequence and protein structures to help develop new medicines. They also say it will simulate earthquakes and epidemic outbreaks in greater detail, “allowing swifter and more effective government responses.”

See the article here:

Tianhe-3: China says its world-first exascale supercomputer will be ready by 2020 – Deutsche Welle

Supercomputer CIO wins Fed Govt Benchmarks gong – Strategy … – iT News

The Bureau of Meteorology’s replacement of its critical supercomputer, on time and on budget, has earned its CIO the top Federal Government prize in the 2017 iTnews Benchmark Awards.

BoM tech chief Lesley Seebeck was named the federal government CIO of the year for managing to deliver a world-class supercomputer – a foundation stone of her agency’s data crunching abilities – without any overspend or delays.

Seebeck’s team correctly predicted several years ago that the agency’s high performance computing system would no longer be up to the task of processing BoM’s complex climate modelling by 2016.

The switch to a new Cray XC40 facility last September was seamless, and the power offered by the kit cements the bureau within the top ten meteorological agencies in the world.

Seebeck rose to the top of a competitive category that included the establishment of GovCMS by John Sheridan at the Department of Finance, and the creation of myTax by the Australian Taxation Office and CIO Ramez Katf.

Winners were announced at the iTnews Benchmark Awards, held as part of Adapt Venture's CIO Edge Experience at the Grand Hyatt Melbourne.

Excerpt from:

Supercomputer CIO wins Fed Govt Benchmarks gong – Strategy … – iT News

Championship 2016/17 table: Super Computer predicts end of season final places – *February update* – talkSPORT.com

Monday, February 20, 2017

With two-thirds of the Championship season played, the table is finally starting to take shape.

Many people expected surprise packages Huddersfield Town and Leeds United to drop off the pace, while big-spending Aston Villa have underachieved in their attempts to make an instant return to the Premier League.

Brighton and Newcastle have been fighting for top spot all season, while Norwich’s recent defeat at Burton leaves a seven-point gap between sixth [the final play-off place] and seventh.

At the other end of the table, barring a miracle upturn in form like last season under Neil Warnock, Rotherham United look destined for the drop and it is a question of who will be joining them in May.

Birmingham City continue to slide under Gianfranco Zola, while Bristol City and Aston Villa are on poor runs.

talkSPORT has decided to submit this season's data into the Super Computer to see how the Championship table will look in May. At this stage, no one can say with any certainty, but our popular system has had a good go of predicting the final standings. Based on information available on February 20, 2017, take a look at how the Super Computer is predicting the final standings – from 24th to first – by clicking the right arrow, above.

The table, of course, should be taken with a pinch of salt! Anything can happen over the next three months, but it is always fun to speculate. Where do you think your team will finish? Share your thoughts by leaving a comment below…

Read more:

Championship 2016/17 table: Super Computer predicts end of season final places – *February update* – talkSPORT.com

China’s new supercomputer will be 10 times faster – Economic Times

BEIJING: China has started to build a new-generation supercomputer that is expected to be 10 times faster than the current world champion, a media report said.

This year, China is aiming for breakthroughs in high-performance processors and other key technologies to build the world’s first prototype exascale supercomputer, the Tianhe-3, said Meng Xiangfei, the director of application at the National Super Computer Tianjin Centre, on Monday.

The prototype is expected to be completed by 2018, the China Daily reported.

“Exascale” means it will be capable of making a quintillion (1 followed by 18 zeros) calculations per second. That is at least 10 times faster than the world’s current speed champ, the Sunway TaihuLight, China’s first supercomputer to use domestically designed processors. That computer has a peak speed of 125 quadrillion (1 followed by 15 zeros) calculations per second, he said.

“Its computing power is on the next level, cementing China as the world leader in supercomputer hardware,” Meng said.

It would be available for public use and “help us tackle some of the world’s toughest scientific challenges with greater speed, precision and scope”, he added.

Tianhe-3 will be made entirely in China, from processors to operating system. It will be stationed in Tianjin and fully operational by 2020, earlier than the US plan for its exascale supercomputer, he said.

Tianhe-1, China’s first quadrillion-level supercomputer developed in 2009, is now working at full capacity, undertaking more than 1,400 assignments each day, solving problems “from stars to cells”.

The exascale supercomputer will be able to analyse smog distribution on a national level, while current models can only handle a district, the daily said.

Tianhe-3 also could simulate earthquakes and epidemic outbreaks in more detail, allowing swifter and more effective government responses, Meng said.

The new machine also will be able to analyse gene sequence and protein structures in unprecedented scale and speed. That may lead to new discoveries and more potent medicine, he said.

Read more from the original source:

China’s new supercomputer will be 10 times faster – Economic Times

Next-Generation TSUBAME Will Be Petascale Supercomputer for AI – TOP500 News

The Tokyo Institute of Technology, also known as Tokyo Tech, has revealed that the TSUBAME 3.0 supercomputer scheduled to be installed this summer will provide 47 half precision (16-bit) petaflops of performance, making it one of the most powerful machines on the planet for artificial intelligence computation. The system is being built by HPE/SGI and will feature NVIDIA's Tesla P100 GPUs.

Source: Tokyo Institute of Technology

For Tokyo Tech, the use of NVIDIA's latest P100 GPUs is a logical step in TSUBAME's evolution. The original 2006 system used ClearSpeed boards for acceleration, but was upgraded in 2008 with the Tesla S1040 cards. In 2010, TSUBAME 2.0 debuted with the Tesla M2050 modules, while the 2.5 upgrade included both the older S1050 and S1070 parts plus the newer Tesla K20X modules. Bringing the P100 GPUs into the TSUBAME lineage will not only help maintain backward compatibility for the CUDA applications developed on the Tokyo Tech machines for the last nine years, but will also provide an excellent platform for AI/machine learning codes.

In a press release from NVIDIA published Thursday, Tokyo Tech's Satoshi Matsuoka, a professor of computer science who is building the system, said, "NVIDIA's broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME 3.0 immediately to help us more quickly solve some of the world's once unsolvable problems."

For Tokyo Tech's supercomputing users, it's a happy coincidence that the latest NVIDIA GPU is such a good fit with regard to AI workloads. Interest in artificial intelligence is especially high in Japan, given the country's manufacturing heritage in robotics and what seems to be almost a cultural predisposition to automate everything.

When up and running, TSUBAME 3.0 will operate in conjunction with the existing TSUBAME 2.5 supercomputer, providing a total of 64 half precision petaflops. That would make it Japan's top AI system, although the title is likely to be short-lived. The Tokyo-based National Institute of Advanced Industrial Science and Technology (AIST) is also constructing an AI-capable supercomputer, which is expected to supply 130 half precision petaflops when it is deployed in late 2017 or early 2018.

Although NVIDIA and Tokyo Tech are emphasizing the AI capability of the upcoming system, like its predecessors, TSUBAME 3.0 will also be used for conventional 64-bit supercomputing applications, and will be available to Japans academic research community and industry partners. For those traditional HPC tasks, it will rely on its 12 double precision petaflops, which will likely earn it a top 10 spot on the June TOP500 list if they can complete a Linpack run in time.

The system itself is a 540-node SGI ICE XA cluster, with each node housing two Intel Xeon E5-2680 v4 processors, four NVIDIA Tesla P100 GPUs, and 256 GB of main memory. The compute nodes will talk to each other via Intel's 100 Gbps Omni-Path network, which will also be extended to the storage subsystem.
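
The headline figures follow almost directly from the node count and the per-GPU peaks. The double-precision number per P100 is NVIDIA's published 5.3 teraflops (also quoted elsewhere on this page); the half-precision figure of roughly 21 teraflops for the SXM2 part is our assumption, since the article only gives system totals, and the small remaining gaps are presumably CPU contribution and rounding.

```python
nodes = 540
gpus_per_node = 4
p100_fp64_tf = 5.3    # double-precision peak per P100 (NVIDIA's published figure)
p100_fp16_tf = 21.2   # half-precision peak per SXM2 P100 (assumed, not in the article)

fp64_pf = nodes * gpus_per_node * p100_fp64_tf / 1000
fp16_pf = nodes * gpus_per_node * p100_fp16_tf / 1000

print(f"GPU double-precision peak: ~{fp64_pf:.1f} PF  (article: ~12 PF including CPUs)")
print(f"GPU half-precision peak:   ~{fp16_pf:.1f} PF  (article: 47 PF)")
```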

Speaking of which, the storage infrastructure will be supplied by Data Direct Networks (DDN) and will provide 15.9 petabytes of Lustre file system capacity based on three ES14KX appliances. The ES14KX is currently DDN's top-of-the-line file system storage appliance, delivering up to 50 GB/second of I/O per enclosure. It can theoretically scale to hundreds of petabytes, so the TSUBAME 3.0 installation will be well within the product's reach.

Energy efficiency is also likely to be a feature of the new system, thanks primarily to the highly proficient P100 GPUs. In addition, the TSUBAME 3.0 designers are equipping the supercomputer with a warm water cooling system and are predicting a PUE (Power Usage Effectiveness) as low as 1.033. That should enable the machine to run at top speed without the need to throttle it back during heavy use. A top 10 spot on the Green500 list is all but assured.

View original post here:

Next-Generation TSUBAME Will Be Petascale Supercomputer for AI – TOP500 News

Inside the race to build the fastest ever supercomputer – EWN – Eyewitness News

The fastest supercomputer in the world, the Sunway TaihuLight, is about to lose its title with the Japanese planning to build something even faster.

When China unveiled the Sunway TaihuLight in June 2016, it became the fastest supercomputer in the world. It easily surpassed the previous record holder, Tianhe-2. It's almost three times as fast. But now, the title it has held for less than a year is under threat, with the Japanese planning to build something even faster.

The Ministry of Economy, Trade and Industry plans to invest 19.5 billion yen ($172 million) in the new machine, as part of an attempt to revitalise Japan's electronics industry and reassert Japan's technical dominance.

Recent years have seen Japan’s lead challenged by competition from South Korea and China, but the Japanese government hopes to reverse that trend.

IMMENSE COMPUTING POWER

The new machine, called the AI Bridging Cloud Infrastructure, or ABCI, is designed to have a capacity of 130 petaflops. That means it will be able to perform 130 quadrillion calculations a second. Still confused? Well, for the sake of easy comparison, that's equal to the computing power of 70,652 Playstation 4s.
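
That PlayStation figure checks out if you take the PS4's GPU at its commonly cited 1.84 teraflops of single-precision peak; the per-console number is our assumption, since the article only gives the total.

```python
abci_flops = 130e15     # 130 petaflops, from the article
ps4_flops = 1.84e12     # commonly cited PS4 GPU peak (assumed)

print(f"ABCI equivalent: {abci_flops / ps4_flops:,.0f} PlayStation 4s")  # ~70,652
```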

As well as out-computing the current Chinese machine, it will also be nearly ten times as fast as the Oakforest-PACS, the current fastest Japanese supercomputer, whose 13.6 petaflops will be dwarfed by those of the new machine. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, director general at Japan's National Institute of Advanced Industrial Science and Technology.

Perhaps the most ambitious aspect of the proposed machine, though, is its hyper-efficient power consumption. The computer's designers are aiming for a power consumption of less than three megawatts. This would be five times lower than the TaihuLight's, and the same as the Oakforest-PACS, whose output is ten times lower.

THE APPLICATION

While other countries have optimised their most powerful computers for processes such as atmospheric modelling or nuclear weapon simulations, AIST aims to use the new machine to accelerate advances in AI technology. ABCI could help companies improve driverless vehicles by analysing vast amounts of traffic data. According to Sekiguchi, the supercomputer could also be used to mine medical records to develop new services and applications.

The computer will also be made available to Japanese corporations for a fee, said Sekiguchi, alongside others involved in the project. Japanese companies currently outsource their major data-crunching to foreign firms such as Google or Microsoft.

THE FUTURE

Japan hopes that ABCI will be operational by 2018, whereupon it will take the top spot on the TOP 500s ranking list of supercomputers.

It might not stay there for very long, though. Computer manufacturer Atos has already begun work on the Bull sequana supercomputer for the French Alternative Energies and Atomic Energy Commission (CEA). This machine is projected to have a performance of one exaflop, meaning that it will be able to perform a quintillion (a billion billion) calculations a second, almost seven and a half times faster than the ABCI.

The French machine won’t be operational until 2020 however, meaning ABCI should still enjoy a spot in the supercomputing limelight.

This article was republished courtesy of World Economic Forum.

Written by Robert Guy, content producer, Formative Content.

See more here:

Inside the race to build the fastest ever supercomputer – EWN – Eyewitness News

Touted IBM supercomputer project at MD Anderson on hold after … – Houston Chronicle

By Todd Ackerman, Houston Chronicle

A view of the outside of the MD Anderson Cancer Center in Houston. (Chronicle File Photo)

A view of the outside of the MD Anderson Cancer Center in Houston….

A University of Texas System audit has found irregularities involving more than $40 million paid for outside goods and services as part of MD Anderson Cancer Center’s now stalled effort to enlist IBM supercomputer Watson in the battle against cancer.

Go here to see the original:

Touted IBM supercomputer project at MD Anderson on hold after … – Houston Chronicle

Nvidia Will Power Japan’s fastest AI Super Computer, Tsubame 3.0 Launching This Summer – SegmentNext

Nvidia will partner with the Tokyo Institute of Technology on Japan's fastest AI supercomputer, known as Tsubame 3.0. Tsubame 3.0 is said to be a big step up from its predecessor, Tsubame 2.5; Nvidia will provide the Pascal-based Tesla P100 GPU technology that accelerates its performance.

See Also: Nvidia GTX 1080 Ti Launch At GDC GeForce GTX Celebration?

Experts say the new fastest AI supercomputer, Tsubame 3.0, will be three times more efficient than its predecessor, something made possible by the latest GPU technology from Nvidia. Reportedly, the new Nvidia GPUs will deliver up to 12.2 petaflops of double precision performance.

Additionally, Nvidia says that the combination of Tsubame 3.0 and Tsubame 2.5 will deliver 64.3 petaflops of performance, which would rank the combined system among the top 10 supercomputers in the world.

Tokyo Tech's Satoshi Matsuoka, a professor of computer science who is building the fastest AI supercomputer, praised the partnership with Nvidia and said:

"Nvidia's broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training Tsubame 3.0 immediately to help us more quickly solve some of the world's once unsolvable problems."

In parallel news, Nvidia's powerful yet reasonably priced top-tier GPU may be revealed soon at Nvidia's GDC event. Nvidia's GTX 1080 Ti is rumored to be in production and may arrive in late March.

The graphics card is expected to launch from March 20th to 23rd. The custom-designed AIB versions of Nvidia's GTX 1080 Ti are expected to come to market sometime after the launch. Since the GeForce GTX 1080 Ti will be slightly less powerful than the TITAN X, its price tag is expected to fall in the range of $750 to $900 at launch.

The graphics card is based on Nvidia's GP102 silicon. Nvidia will host its own show at GDC 2017, and we look forward to seeing and learning more about the GeForce GTX 1080 Ti there.

Go here to see the original:

Nvidia Will Power Japan’s fastest AI Super Computer, Tsubame 3.0 Launching This Summer – SegmentNext

Stampede Supercomputer Assists With Real-Time MRI Analysis – HPCwire (blog)

Feb. 17 – One of the main tools doctors use to detect diseases and injuries in cases ranging from multiple sclerosis to broken bones is magnetic resonance imaging (MRI). However, the results of an MRI scan take hours or days to interpret and analyze. This means that if a more detailed investigation is needed, or there is a problem with the scan, the patient needs to return for a follow-up.

A new, supercomputing-powered, real-time analysis system may change that.

Researchers from the Texas Advanced Computing Center (TACC), The University of Texas Health Science Center (UTHSC) and Philips Healthcare, have developed a new, automated platform capable of returning in-depth analyses of MRI scans in minutes, thereby minimizing patient callbacks, saving millions of dollars annually, and advancing precision medicine.

The team presented a proof-of-concept demonstration of the platform at the International Conference on Biomedical and Health Informatics this week in Orlando, Florida.

The platform they developed combines the imaging capabilities of the Philips MRI scanner with the processing power of the Stampede supercomputer, one of the fastest in the world, using the TACC-developed Agave API Platform infrastructure to facilitate communication, data transfer, and job control between the two.

An API, or Application Program Interface, is a set of protocols and tools that specify how software components should interact. Agave manages the execution of the computing jobs and handles the flow of data from site to site. It has been used for a range of problems, from plant genomics to molecular simulations, and allows researchers to access cyberinfrastructure resources like Stampede via the web.

"The Agave Platform brings the power of high-performance computing into the clinic," said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. "This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient."

The entire article can be found here.

Source: Aaron Dubrow, TACC

Read more:

Stampede Supercomputer Assists With Real-Time MRI Analysis – HPCwire (blog)

