Met Office supercomputer predicts 10 years of record rain in … – Wired.co.uk

Prepare to complain about the weather even more than usual - the Met Office has predicted that a record amount of rain could fall each winter for the next decade.

Its new supercomputer, from computer company Cray, has forecast a one in three chance each year of record-breaking rain falling in at least one region of England and Wales between October and March.

The supercomputer simulated hundreds of winters based on the current climate to predict what the weather could look like for years to come.

And the results are not looking sunny. Some of the predicted winters were more extreme than any we've seen, and analysis of these simulated events showed the risk of record monthly rainfall in winter was seven per cent for south east England.

This chance increased to 34 per cent when other regions were included.
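The jump from seven per cent to 34 per cent is roughly what you would expect from pooling several regions with similar odds. The sketch below is an illustration only: it assumes six comparably risky regions and treats them as independent, which is not the Met Office's actual methodology.

# Illustration only: how a ~7% single-region risk grows once several regions
# are pooled, assuming six comparably risky, independent regions.
# This is NOT the Met Office's actual calculation.
p_region = 0.07                                   # record risk in one region
for n_regions in range(1, 7):
    p_any = 1 - (1 - p_region) ** n_regions       # record in at least one region
    print(f"{n_regions} region(s): {p_any:.0%}")
# With six such regions, 1 - 0.93**6 is about 35%, close to the reported 34%.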

"We shouldn't be surprised if events like this occur," says Nick Dunstone, second author of the report. "Some people think this is a crazy, new risk. It's not. If we'd had these simulations before the floods of January 2014 we could have expected them. Models like this aren't perfect, but they give better estimations than observations alone, which are now largely outdated due to the changing climate."

Jim Dale, senior risk meteorologist at British Weather Services, says a prediction is only as good as its outcome, but that doesn't mean it should be taken lightly.

"The crux of this prediction is that the more heat that is in the atmosphere, the more vigorous the storms get as they hold and release more water, making a much wetter climate not just during winter," he says.

"Climate change predictions should be treated with caution rather than disregard or seen as a scare-tactics - we are already seeing the effects of melting icebergs," Dale continues. "It's a crystal ball exercise, but preparation is key. We will see if the government reacts."

The supercomputer, which was fully installed at the beginning of this year following a £97m government grant, is the largest supercomputer dedicated to weather and climate science in the world.

The research was conducted for the National Flood Resilience Review, which asked the Met Office to look into the likelihood of extreme rainfall for the next ten years.

This new research approach has been named the UNSEEN (UNprecedented Simulated Extremes using ENsembles) method because it simulates plausible extreme events that have not yet been observed.

As extreme flooding is relatively rare, simulations can provide data on 1,750 years of winters, whereas real observations can only do so for 35.

If it's developed further, this prediction method could be used to assess the risk of heatwaves, droughts, and cold spells and could help the government, contingency planners and insurers prepare for future events.

Go here to see the original:

Met Office supercomputer predicts 10 years of record rain in ... - Wired.co.uk

MEDIA ADVISORY: Most Powerful Supercomputer at an Academic Institution in the US 12th in the World Coming to … – UT News | The University of Texas…

In 2016, the National Science Foundation (NSF) announced a $30 million award to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to acquire and deploy a new large-scale supercomputing system, Stampede2, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. Photo courtesy of Texas Advanced Computing Center

What: The Texas Advanced Computing Center (TACC) at The University of Texas at Austin will host a dedication for a new $30 million supercomputer that is the most powerful at an academic institution in the U.S. The system, called Stampede2, will be used for scientific research and serve as a strategic national resource to provide high-performance computing capabilities.

When: 1:45 p.m. to 3:45 p.m., Friday, July 28.

Where: J.J. Pickle Research Campus, 10100 Burnet Road, Advanced Computing Building, Building 205. Map and directions can be found at: https://www.tacc.utexas.edu/about/contact/

Media: The event is open to the media. To RSVP, contact Faith Singer-Villalobos at the Texas Advanced Computing Center, rsvp@tacc.utexas.edu.

Background: Funded by the National Science Foundation, Stampede2 builds on the technology and expertise from the first Stampede system, which was also housed at UT Austin and funded by the NSF in 2011. The new supercomputer will have about the equivalent processing power of 100,000 desktop computers and deliver a peak performance of up to 18 petaflops, or 18 quadrillion mathematical operations per second. This increased speed and power will allow scientists and engineers to tackle larger, more complex problems that were not previously possible.
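As a rough sanity check on the desktop comparison (the per-desktop figures below are generic assumptions, not numbers from the advisory), dividing the quoted peak by 100,000 machines gives:

# Back-of-the-envelope check on the "100,000 desktop computers" comparison.
stampede2_peak_flops = 18e15          # 18 petaflops, from the advisory
desktops = 100_000
print(f"{stampede2_peak_flops / desktops / 1e9:.0f} GFLOPS per desktop")  # ~180
# A hypothetical quad-core desktop at 3 GHz with 256-bit FMA units peaks at
# roughly 4 cores * 3e9 Hz * 16 FLOPs/cycle = 192 GFLOPS (double precision),
# so the comparison is the right order of magnitude.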

Event activities:

1:45-2:15pm Tour of Stampede2 with TACC executive Dan Stanzione, Human Data Interaction Lab & Machine Room, Advanced Computing Building (ACB) and Research Office Complex (ROC)

2:30-3:30pm Remarks, ACB Auditorium

3:30-3:45pm Q&A from audience and media

3:45-4:45pm General Tours, Human Data Interaction Lab & Machine Room

Tour #1: 3:45-4:15pm

Tour #2: 4:00-4:30pm

Tour #3: 4:15-4:45pm

The system is deployed with vendor partners Dell EMC, Intel Corporation and Seagate Technology. Researchers from Clemson University, Cornell University, the University of Colorado at Boulder, Indiana University and Ohio State University will also be involved.

Read the original post:

MEDIA ADVISORY: Most Powerful Supercomputer at an Academic Institution in the US 12th in the World Coming to ... - UT News | The University of Texas...

Podcast: A Retrospective on Great Science and the Stampede … – insideHPC

https://archive.org/download/Stampede_201707/Stampede.mp3 TACC will soon deploy Phase 2 of the Stampede II supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine.

In 2017, the Stampede supercomputer, funded by the National Science Foundation, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.

Change was in the air at the National Science Foundation (NSF) in 2010, two years into the operation of the soon-to-be retired Ranger supercomputer of the Texas Advanced Computing Center (TACC). Ranger represented a new class of cutting-edge computing systems designed specifically to get more people, U.S. researchers from all fields of science and engineering, to use them. Ranger and a few other systems of the NSF-funded TeraGrid cyberinfrastructure, such as Kraken at the National Institute for Computational Sciences at UT Knoxville, were going to come offline in the next few years.

Supercomputers live fast and retire young. That's because technology advances quickly and each generation of computer processors is significantly faster, and cheaper to operate, than the one before it. Expectations were high for the successor to Ranger, a system called Stampede built by TACC that proved to be 20 times more powerful than Ranger while using only half the electricity.

"We knew, as we were designing Stampede, that we had to inherit a huge amount of workload from the systems that were going offline," said Dan Stanzione, executive director of TACC and the principal investigator of the Stampede project. "And at the same time, you could see that architectural changes were coming, and we had to move the community forward as well. That was going to be a huge challenge," Stanzione said.

The challenge was and still is to match the breakneck speed of change in computer hardware and architectures. With Ranger, one fundamental architectural change was going to four four-core processors on a compute node. "It was clear that this trend was going to continue," Stanzione said.

This trend toward manycore processors, as they are known, would force changes to the programming models that researchers use to develop application software for high-tech hardware. Since scientific software changes its structure much more slowly than hardware, sometimes over the course of years, it was critical to get researchers started down the road to manycore.

"We needed to take on this enormous responsibility of all of the old workload that was out there for all of the systems that were retiring, but at the same time start encouraging people to modernize and go towards what we thought systems were going to look like in the future," Stanzione said. "It was an exciting time."

Designing the Stampede supercomputer required foresight and awareness of the risks in planning a multi-million dollar computing project that would run seven years into the future. Stanzione and the team at TACC wrote the proposal in 2010 based on hardware that was still in development and didn't yet exist: the Intel Xeon E5 (Sandy Bridge) processor and Intel Xeon Phi co-processor, as well as the Dell servers. TACC deployed Stampede on schedule in 2013 and consistently met and exceeded its proposed goal of providing 10 petaflops of computing power to the open science community. An upgrade in 2016 added Knights Landing processors (a standalone processor released by Intel that year) and 1.5 petaflops to the system. What's more, TACC operated a world-class facility to support, educate, and train users in making full use of Stampede.

"One of the things that I'm proud of is that we've been able to execute both on time and on budget. We delivered the exact system we had forecast," Stanzione said.

NSF awarded The University of Texas at Austin $51.5 million for TACC to deploy and support the Stampede supercomputer, which included a hardware upgrade in 2016. During its five years of operations, Stampede completed more than eight million successful computing jobs and clocked over three billion core hours of computation.
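For a sense of what three billion core hours means in practice, the average concurrency over the five-year service life works out as follows (simple calendar arithmetic, not a figure from the article):

# Average number of cores kept busy over Stampede's five-year service life.
core_hours = 3e9
hours = 5 * 365.25 * 24
print(f"{core_hours / hours:,.0f} cores busy on average")   # ~68,000
# i.e. the equivalent of roughly 68,000 cores running around the clock
# for the entire five years.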

Continue reading here:

Podcast: A Retrospective on Great Science and the Stampede ... - insideHPC

NVIDIA Releases First V100 GPUs into the Wild | TOP500 … – TOP500 News

NVIDIA has donated 15 V100 Tesla GPUs to researchers attending the recent Computer Vision and Pattern Recognition conference in Honolulu. The giveaway was described in a blog post published by the company on July 22.

Recipients of the first V100 GPUs. Source: NVIDIA

The Volta-class graphics processors were presented to representatives of each of the 15 research institutions attending the conference, and represent the first such GPUs that the company has made available to users. The new hardware is expected to become more widely available in the current (third) quarter of the year.

The V100 is NVIDIA's latest and greatest GPU computing processor, and includes special-purpose hardware called Tensor Cores that are specifically designed to accelerate deep learning applications. The 640 Tensor Cores on the chip deliver 120 teraflops of performance for both training and inferencing neural networks. That's about five times faster than what can be achieved on the current-generation P100 Tesla GPUs.
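The 120-teraflop figure falls out of the Tensor Core arithmetic directly. Each Tensor Core performs a 4x4x4 matrix fused multiply-add per clock (NVIDIA's published behaviour); the boost clock used below (~1.5 GHz) is an assumption for illustration.

# Rough reconstruction of the quoted 120 teraflops of Tensor Core throughput.
tensor_cores = 640
flops_per_core_per_clock = 4 * 4 * 4 * 2     # 64 fused multiply-adds = 128 FLOPs
boost_clock_hz = 1.5e9                       # assumed, approximately V100 boost
peak = tensor_cores * flops_per_core_per_clock * boost_clock_hz
print(f"{peak / 1e12:.0f} TFLOPS")           # ~123, in line with the quoted 120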

The V100 can also deliver 7.5 teraflops of double-precision (64-bit) floating point power, but the AI researchers who received the new GPUs are unlikely to use this feature. Most deep learning algorithms use 32-bit, 16-bit, and, in some cases, 8-bit arithmetic to perform their AI magic.

While at the conference, NVIDIA also demonstrated V100 hardware doing inferencing on a trained ResNet-152 network. Running with just one of the four V100 GPUs on a DGX Station and NVIDIA's TensorRT inference optimizer software, the system was able to classify 527 flower images per second. That was 100 times faster than a CPU-only setup equipped with an Intel Skylake processor. It's noteworthy that even the 5 images-per-second rate for the CPU system is faster than what a human could manage.

Speaking at the event, NVIDIA CEO Jensen Huang told the attendees about the significance of artificial intelligence and their research. "AI is the most powerful technology force that we have ever known," said Huang. "I've seen everything. I've seen the coming and going of the client-server revolution. I've seen the coming and going of the PC revolution. Absolutely nothing compares."

Go here to read the rest:

NVIDIA Releases First V100 GPUs into the Wild | TOP500 ... - TOP500 News

India Gearing Up for Big Supercomputing Expansion | TOP500 … – TOP500 News

A number of news outlets in India are reporting the government is close to deploying six new supercomputers, two of which will deliver a peak performance of two petaflops.

According to a report in the Hindustan Times, the six new systems are part of the initial phase of a three-phase project that will eventually result in the deployment of 50 supercomputers across the country. The Indian government has allocated Rs 4,500 crore (close to 700 million USD) for the project, which was approved in March 2016. The effort is being managed by the Centre for Development of Advanced Computing (C-DAC), an R&D institution under India's Ministry of Electronics and Information Technology.

When the project was announced last year, the government expected to have these first systems installed by August 2017. But the most recent reports from India indicate that the Request for Proposal (RFP) for the project is just wrapping up now. According to Ashutosh Sharma, secretary in the Ministry of Science and Technology, their current goal is to have the systems up and running before the end of the year.

The six initial supercomputers will be installed at six institutions: four IITs (at Banaras Hindu University, Kanpur, Kharagpur and Hyderabad), the Indian Institute of Science Education and Research, Pune, and the Indian Institute of Science, Bengaluru. Of these first machines, two will have a peak performance of two petaflops, while the remainder will be around 500 teraflops.

Currently, India's most powerful supercomputer is a 1.2-petaflop (peak) Cray XC40, which is housed at the Supercomputer Education and Research Centre (SERC) at the Indian Institute of Science. It currently sits at number 165 on the latest TOP500 list, and is the highest-ranked of the country's four systems that earned a spot on the list.

One of the principal goals of the three-phase project is to develop a domestic capacity to design and manufacture supercomputers, part of the country's 'Make in India' initiative. In the first phase, three of the supercomputers will be imported, while the remaining three will be based on imported parts but assembled in the country. In the project's second phase, compute nodes, switches and other network componentry will be designed and manufactured domestically. In the final phase of the project, almost the entire system will be built in India.

The project is scheduled to take place over a period of seven years.

Read more from the original source:

India Gearing Up for Big Supercomputing Expansion | TOP500 ... - TOP500 News

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship – insideHPC

Los Alamos National Laboratory has boosted the computational capacity of its Trinity supercomputer by merging two system partitions.

Now available for production computing in the Lab's classified network, the system uses both Xeon Haswell and Xeon Phi Knights Landing (KNL) processors. Trinity has provided service for the National Nuclear Security Administration's (NNSA) Stockpile Stewardship Program since summer 2016, but it has been dramatically expanded and now provides almost 680,000 advanced-technology KNL cores as a key part of NNSA's overall Advanced Simulation and Computing (ASC) Program.

"With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program," said Bill Archer, Los Alamos ASC program director. "Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex."

The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the New Mexico Alliance for Computing at Extreme Scale (ACES) partnership. The capabilities of Trinity are required to support the NNSA Stockpile Stewardship program's certification and assessment work, which ensures that the nation's nuclear stockpile is safe, reliable, and secure.

In June 2017, the ACES team took the classified Trinity-Haswell system down, as planned, and merged the existing Xeon processors (Haswell) partition with the Xeon Phi processors (Knights Landing) partition. The system was back up for production use the first week of July.

The Knights Landing processors were accepted for use in December 2016 and since then they have been used for open science work in the unclassified network, permitting nearly unprecedented large-scale science simulations.

"The main benefit of doing open science was to find any remaining issues with the system hardware and software before Trinity is turned over for production computing in the classified environment," said Trinity project director Jim Lujan. "In addition, some great science results were realized," he said. Knights Landing is a multicore processor that has 68 compute cores on one piece of silicon, called a die. This design allows for the improved electrical efficiency that is vital for getting to exascale, the next frontier of supercomputing, and is three times as power-efficient as the Haswell processors, Archer noted.

Trinity now has 301,952 Xeon cores and 678,912 Xeon Phi cores, all available for classified computing, along with two pebibytes (PiB) of memory. The byte is the standard unit of digital information in a computer, originally the number of bits (typically eight) required to encode a single text character. A single petabyte is one quadrillion bytes. (For reference, it has been said that a single petabyte of MP3-encoded music would take 2,000 years to play.) The binary version, the pebibyte, is about 12 percent greater.
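The "12 percent" figure is easy to verify: a pebibyte is 2 to the power of 50 bytes against the petabyte's 10 to the power of 15.

# Checking the pebibyte-versus-petabyte claim.
pebibyte = 2 ** 50           # 1,125,899,906,842,624 bytes
petabyte = 10 ** 15
print(pebibyte / petabyte)   # ~1.126, i.e. roughly 12-13 percent larger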

Besides blending the well-known Haswell processors with the new, more efficient Knights Landing ones, Trinity benefits from the introduction of solid state storage (burst buffers). This is changing the ratio of disk and tape necessary to satisfy bandwidth and capacity requirements, and it drastically improves the usability of the systems for application input/output. With its new solid-state storage burst buffer and capacity-based campaign storage, Trinity enables users to iterate more frequently, ultimately reducing the amount of time to produce a scientific result.

Trinity Timeline:

Here is the original post:

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship - insideHPC

Super computer: Government targets indigenous supercomputers … – Economic Times

NEW DELHI: As part of the Modi government's 'Make in India' initiative, supercomputers will be manufactured in India under a three-phase programme, officials said.

In the initial two phases of the National Supercomputers Mission, the focus will be on designing and manufacturing subsystems such as high-speed Internet switches and compute nodes indigenously.

The Rs 4,500-crore project was approved by the Cabinet Committee on Economic Affairs in March last year.

A Request for Proposal (RFP) for the project is in its final stages. The project is being executed by the Centre for Development of Advanced Computing (C-DAC), Pune, a research and development institution under the Ministry of Electronics and Information Technology.

The NSM envisages nearly 50 supercomputers installed in three phases. The government plans to make these high-precision computing machines available for scientific research across the country.

Milind Kulkarni, a senior scientist with the Ministry of Science and Technology who is overseeing the project, said the plan is to "have six supercomputers in the first phase".

In the first phase, three supercomputers will be imported. System assemblies for the remaining three will be manufactured abroad but assembled in India. C-DAC will be responsible for the overall system design.

Two supercomputers will have a peak operational capacity of two petaflops and the rest will be of 500 teraflops.

Floating point operations per second (FLOPS) is the standard unit to measure computational power.

A petaflop is a thousand trillion floating-point operations per second; a teraflop is a million million (a trillion) floating-point operations per second.

The six supercomputers will be placed at four IITs -- Banaras Hindu University, Kanpur, Kharagpur and Hyderabad -- as well as the Indian Institute of Science Education and Research, Pune, and the Indian Institute of Science, Bengaluru.

"The goal is to have them by the end of this year," Ashutosh Sharma, secretary in the Ministry of Science and Technology said.

In the second phase, major parts like high-speed Internet switches, compute nodes and network systems will be manufactured in India.

Kulkarni said that almost the entire system will be built in India in the third phase.

India started its own supercomputing mission in 1988 under which the first series of Param supercomputers were manufactured. The mission lasted 10 years and since 2000, there has been no major push for the project.

Currently, countries such as the US, Japan, China and the European Union (EU) make up a major share of the top supercomputing machines in the world. The NSM will enable India to leapfrog to the league of world-class computing power nations.

There are nearly 25 supercomputers in India across different institutes. These are used for varied purposes, including simulating complex phenomena such as weather, climate change and nuclear reactions.

See more here:

Super computer: Government targets indigenous supercomputers ... - Economic Times

Gazprom Neft to utilise capacity at the St Petersburg Polytech … – PortNews IAA

As part of the ongoing collaboration between the Gazprom Neft Science and Technology Centre and the Peter the Great St Petersburg Polytechnic University (Polytech) in geological prospecting, company specialists have investigated the possibility of processing data using high-performance computing systems - specifically the Polytech supercomputer, the third largest in Russia, the Company said Thursday in a media release.

Running computations for seismic prospecting, flow simulation, geomechanics and fracturing modelling on the supercomputer will reduce calculation times by two- to four-fold, as well as allowing greater volumes of data to be processed.

Software for processing seismic data and constructing hydrodynamic models was installed and tested on the Polytech supercomputer in early 2017, whereupon, assisted by specialists at the Supercomputer Centre, the process of launching programme modules and setting testing objectives was optimised and debugged. These test simulations are among the most cutting-edge and resource-intensive procedures used in processing onshore seismic data. The software installed at the Supercomputer Centre is compatible with that already in place at the Gazprom Neft Science and Technology Centre, and is used to undertake the most high-performance and resource-intensive tasks in building velocity-depth models, performing deep migration transformations, and multivariate modelling using large amounts of data. Work on the supercomputer will be carried out remotely from computers at the Gazprom Neft Science and Technology Centre.

Mars Khasanov, Director General of Gazprom Neft's Science and Technology Centre, commented: "The efficiency of oil companies today depends directly on the application of new technologies. The amount of information that any large industrial company has to work with today is colossal. Given the specifics of our industry, we often have to deal with highly diverse and poorly structured data, especially when it comes to complex reserves. Our task is to use the available capacities of the most cutting-edge computer systems for processing information; using a supercomputer is just one of the solutions to such problems."

The Peter the Great St Petersburg Polytechnic University (Polytech) completed the creation of its Supercomputer Centre (SCC), home to Russia's third highest-capacity computer with a total peak performance of more than 1.2 petaflops (quadrillion floating point operations per second), in late 2015. Two of its most powerful computing systems are in fact separate supercomputers: the first being the Polytechnic RSC Tornado (distributed control system) cluster, and the second the massively parallel Polytechnic RSC PetaStream system. The SCC's total computing resources comprise 25,000 cores, with total peak power consumption of 1 MW. Systems are equipped with direct fluid cooling.

Link:

Gazprom Neft to utilise capacity at the St Petersburg Polytech ... - PortNews IAA

IBM Watson: why isn’t the supercomputer making money? | WIRED UK – Wired.co.uk

IBM Watson, the supercomputer

IBM's Watson supercomputer is one of the world's best-known artificial intelligence systems. But fame, it turns out, doesn't mean fortune. According to one analyst, Watson isn't making any money.

A scathing report from investment bank Jefferies claims that from an earnings per share perspective "it seems unlikely to us under almost any scenario that Watson will generate meaningful earnings results over the next few years".

IBM Watson made its debut as a research project in 2006 and later gained fame after beating two human champions on classic US quiz show Jeopardy!. IBM has since spent a lot of time and money promoting its flagship product, posting more than 200 press releases on Watson, according to Jefferies. The supercomputer has been involved in a range of projects from changing the way doctors diagnose patients to powering friendly service robot Pepper.

IBM has struggled to grow revenue over the last five years, and results released this week revealed revenues of $19.3bn (£14.8bn), down on the previous year. While exact figures for Watson aren't given, Jefferies pulled together a range of information, including market research data and public filings, to build financial models predicting Watson's future prospects. The news wasn't good.

According to the report, Watson is simply too pricey. While the report by Jefferies equity analyst James Kisner admits that Watson remains "one of the most complete off-the-shelf [AI] platforms available", it concludes that IBM is considered a "Cadillac" option in an increasingly crowded marketplace.

However, IBM maintains that Watson is still accessible for all. "Watson services are offered on either a subscription or a pay-per-use basis and everyone can get started for free," an IBM spokesperson told WIRED.

The supercomputer is also very "picky" about the data that's fed into it, with a large amount of preparation and human hand-holding needed. The report cites the case of University of Texas cancer research centre MD Anderson, which began working with Watson to help match cancer patients to clinical trials. Though initial results were positive, a scathing report from university auditors later explained that the project had been put on hold after it blazed through more than $62 million (£47.6 million) without reaching its goals. While there was no suggestion that IBM's software was at fault, the episode highlighted the potential pitfalls and immense costs of actually using it.

"Watson is clearly part of IBM's Strategic Imperatives, whose figures are reported," an IBM spokesperson told WIRED when quizzed on whether the supercomputer is making any money. While it's true that the Strategic Imperatives has performed well, with a 5 per cent on the previous year, this area also includes cloud, analytics, and security, with no specific figures on Watson's performance.

"I don't think it makes sense to compare Watson directly with what the report describes as 'Machine and Deep Learning for speech and image recognition applications' because Watson was at the outset (when playing Jeopardy) aimed at solving problems using a mixture of several AI-based methods, and in that sense it seems aimed at a completely different market," Dr Sean Holden from the University of Cambridge computer lab told WIRED.

"Image recognition, for example, is something that deep learning methods have turned out to be very good at; but Watson seems aimed at a completely different and arguably much harder class of problems.

"The fact that preparation of data might be difficult isn't surprising it's often the case even for more basic machine learning applications.

"From a technical perspective, it doesn't surprise me that applying such a technology might at present be a challenge for many. But I would certainly hesitate in writing it off."

The report also claims that IBM is being "outgunned" in the battle for AI talent by the likes of Amazon and Apple, based on the number of AI-related job postings from each company.

While IBM may have had the AI world pretty much to itself in the past, Gartner predicted this week that AI will feature in almost every new software product by 2020. Other companies are already beginning to catch up, making large investments in AI that could knock Watson off its pedestal.

Read this article:

IBM Watson: why isn't the supercomputer making money? | WIRED UK - Wired.co.uk

IBM Wants You To Use Its "Crowdsourced Supercomputer" To Help Fight Climate Change – IFLScience

It's hard to argue that climate change isn't the number one issue of our times. It's not just a generational problem that needs solving, it's an existential one. It's the all-encompassing antagonist that makes every other problem worse, and everyone, except the US government at least, knows it.

IBM has also acknowledged the extent of the problem, and has strongly thrown its support behind the Paris Agreement. Now, speaking to IFLScience, the company has revealed that it's going to do its part in solving the crisis by unleashing its secret weapon: a crowdsourced supercomputer.

Since 2004, IBM has run the World Community Grid (WCG), an international network of personal computers that, when linked up, contribute their processing power and cloud storage space. This giant web acts as a supercomputer, one of the most powerful computational systems on Earth.

"We can have an ordinary, solitary supercomputer dedicated to working on these efforts, but eventually even a supercomputer would run out of capacity," Juan Hindo, manager of the WCG, told IFLScience.

WCG volunteers download an app to their computers and devices. When they're not being used, the devices automatically perform virtual experiments driven and directed by a team of researchers all around the world.

This model is infinitely scalable, and also taps into a resource that would otherwise be going to waste. Researchers are given access to a massive amount of computing power for free, along with a community of volunteers who are excited and engaged in learning about the work.
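Conceptually, each volunteer device runs a simple loop: check for idleness, fetch a small work unit, compute, report. The sketch below is purely illustrative; it is not the World Community Grid or BOINC client, and every name in it is invented.

# Conceptual volunteer-computing loop (illustrative only; not the WCG client).
import random
import time

def device_is_idle():
    return random.random() < 0.5              # stand-in for a real idle check

def fetch_work_unit():
    return {"id": random.randint(1, 10**6), "payload": list(range(1000))}

def run(work_unit):
    return sum(work_unit["payload"])          # stand-in for the real science code

def report(work_unit, result):
    print(f"work unit {work_unit['id']} -> {result}")   # stand-in for an upload

for _ in range(5):                            # a real client loops indefinitely
    if device_is_idle():
        wu = fetch_work_unit()
        report(wu, run(wu))
    else:
        time.sleep(1)                         # stay out of the owner's way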

"That element of public engagement, bringing the public into your research and raising awareness of your work, is something you would not get by doing your work on a normal supercomputer," Hindo said.

It currently has over 730,000 volunteers and millions upon millions of devices working on problems as diverse as Zika, childhood cancer, clean energy, and water filtration technologies.

Environmental science projects have emerged in the last few years, but this push on climate change is the most ambitious venture yet. The WCG will be made available to five innovative climate change research projects, all to the tune of $200 million.

"We're casting the net wide," explains Sophia Tu, Director of Corporate Citizenship at IBM. "We're looking for work to get us to solutions, to show us how to adapt to climate change, how to mitigate it."

"Migration patterns, the spread of disease, changing drought patterns, crop resilience - we're open to anything on this. We know that climate change is an interdisciplinary field, so we don't want to rule anything out."

Link:

IBM Wants You To Use Its "Crowdsourced Supercomputer" To Help Fight Climate Change - IFLScience

With EPYC, AMD Offers Serious Competition to Intel in HPC – TOP500 News

While a number of commentators have written off AMD's prospects of competing against Intel in HPC, testing of the latest server silicon from each chipmaker has revealed that the EPYC chip offers some surprising performance advantages against Intel's newest "Skylake" Xeon destined for the datacenter.

Since Intel integrated the 512-bit Advanced Vector Extensions (AVX-512) feature into its new Xeon Skylake scalable processor (SP) platform, it can theoretically double floating-point performance (and integer performance) compared to its previous Broadwell-generation Xeon line. The latter chips supported vector widths of only 256 bits. With EPYC, AMD decided to forego any extra-wide vector support, implementing its floating-point unit with a modest 128-bit capability. That leaves it with a distinct disadvantage on vector-friendly codes.
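The doubling follows directly from how theoretical peak is calculated: per-core peak scales with the number of 64-bit lanes in the vector unit. The sketch below uses a generic clock and a two-FMA-unit assumption for illustration; these are not vendor-quoted figures for either chip.

# Theoretical peak double-precision FLOPS per core as a function of vector width.
# Clock speed and FMA-unit count are illustrative assumptions.
def peak_dp_gflops_per_core(vector_bits, fma_units=2, clock_ghz=2.2):
    lanes = vector_bits // 64                  # 64-bit doubles per register
    return lanes * 2 * fma_units * clock_ghz   # *2: multiply + add per FMA

for width in (128, 256, 512):
    print(f"{width}-bit vectors: {peak_dp_gflops_per_core(width):.1f} GFLOPS/core")
# Each doubling of vector width doubles the theoretical per-core peak.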

However, the majority of HPC codes don't take advantage of AVX-512 today, since prior to Skylake the only platform that supported it was Intel's Knights Landing Xeon Phi, a processor specifically designed for vector-intensive software. Many HPC applications could certainly be enhanced to use the extra-wide vectors, although for others, like sparse matrix codes, it may not be worth the trouble. In any case, adding AVX-512 support to the code base will be done one application at a time.

Without the performance boost from extra-wide vector instructions, the theoretical floating-point advantage of the new Xeon over the AMD EPYC processor disappears. At least that is what can be concluded from testing done by the gang over at Anandtech. They recently ran a series of floating-point-intensive tests, among others, pitting the EPYC 7601 (32 cores, 2.2 GHz) against the comparable Xeon Platinum 8176 (28 cores, 2.1 GHz). Both are considered high-bin server chips from their respective product lines.

The testing comprised benchmarks based on three real-world codes: C-ray, a ray-tracing code that runs out of L1 cache; POV-Ray, a ray-tracing code that runs out of L2 cache; and NAMD, a molecular dynamics code that requires consistent use of main memory. The tests were performed on dual-socket servers running Ubuntu Linux.

Somewhat surprisingly, the EPYC processor outran the Xeon in all three floating-point benchmarks. For C-ray, the 7601 delivered about 50 percent more renders than the 8176 in a given amount of time, while for POV-Ray, the 7601 scored a more modest 16 percent performance advantage. For NAMD, Anandtech used two implementations, a non-AVX version and an AVX version that uses Intel's compiler vectorization smarts (but not specifically for AVX-512). The EPYC processor prevailed in both cases, by 41 percent with the older implementation and by 22 percent with AVX turned on. Anandtech's conclusion was that even though Zen's floating-point unit might not have the highest peak FLOPS in theory, there is plenty of FP code out there that runs best on EPYC.

It's worth noting that Anandtech also performed a big data benchmark, in which the Xeon edged the EPYC by a little less than 5 percent. In this case, the test was a collection of five Spark-based codes, which measured mostly integer performance and memory accesses. In general, the EPYC processor should do better on data-demanding codes due to its superior memory bandwidth, but it was not clear how memory-intensive these particular codes were. It would be interesting to see how these two architectures match up on in-memory database benchmarks.

Execution speed aside, AMD silicon looks even more attractive when you consider price-performance. The Xeon 8176 lists for $8,719, while the EPYC 7601 is priced at $4,000. With the Xeon line, you could move up to a faster clock (2.5 GHz) with the top-of-the-line 8180 for around $10,000, or move down to the Xeon 8160 (same clock, 24 cores) for $4,700. But either way, AMD looks to be undercutting Intel on price for comparably performing server silicon.
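Putting the list prices against one of the measured results gives a crude sense of the price-performance gap; the choice of the POV-Ray number is arbitrary, and real workloads will differ.

# Crude price-performance comparison using list prices and the POV-Ray result.
xeon_price, epyc_price = 8719, 4000       # USD list prices quoted above
xeon_perf, epyc_perf = 1.00, 1.16         # normalized throughput (EPYC ~16% faster)
ratio = (epyc_perf / epyc_price) / (xeon_perf / xeon_price)
print(f"EPYC perf-per-dollar advantage: {ratio:.1f}x")   # ~2.5x on this benchmark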

Of course, if an application can take full advantage of AVX-512, the performance advantage would shift to Intel. (Perhaps not a price-performance advantage, though.) One other thing to consider is that for AVX-512-friendly codes, the Xeon Phi itself offers the best performance and price, not to mention energy efficiency. The only caveat here is that threads on the Xeon Phi execute about 1 GHz slower than on their Xeon counterparts, so if single-threaded performance is critical to some portion of your code, you're going to take a pretty significant performance hit.

In a discussion posted on Facebook earlier this week, Forrest Norrod, SVP and GM of Enterprise, Embedded & Semi-Custom Products, said he was pleased with how their new server chip is positioned against its rival. He made particular mention of the favorable floating-point performance, noting that "the results on EPYC have been tremendous, head-to-head, against the competitor."

He went on to explain that while the EPYC design team considered implementing a wide vector capability, they felt it was too expensive in terms of die area and power requirements to load down the CPU with such a capability. Instead they opted for a more general-purpose floating-point unit, plumbed with dedicated FP pipes to improve performance.

Also part of the Facebook discussion was AMD Engineering Fellow Kevin Lepak, who explained that another facet of the decision to keep the EPYC floating-point unit more generalized was AMD's GPU computing product line, which essentially fulfills the role of a dedicated vector processor. The company felt it didn't make much sense to overlap this capability in their CPU platform as long as they were offering both. As noted earlier, Intel made the exact opposite decision, vis-à-vis its Xeon and Xeon Phi lines.

Norrod and Lepak also delved into the rationale for implementing EPYC as a multi-chip module (MCM) processor, rather than as a monolithic chip, as Intel has done with its Skylake Xeons. A 32-core EPYC processor, for example, is comprised of four eight-core dies glued together with the Infinity Fabric. Intel has been critical of AMD for its MCM approach, claiming it hinders performance at various choke points. AMD counters that it's a more effective way to get its extra-large feature set (eight memory channels, 128 PCIe lanes, built-in encryption, and so on) into the processor, while also serving to lower costs via increased manufacturing yields.

None of these technical arguments amount to much for customers, who will be focused on performance, price-performance and performance-per-watt across their own applications. If AMD can deliver superior numbers on even two of these criteria, Intel will likely lose its 90 percent-plus market share in HPC for the first time in nearly ten years. And that would be a true EPYC event.

Excerpt from:

With EPYC, AMD Offers Serious Competition to Intel in HPC - TOP500 News

Supercomputer maker Cray cutting 14% of workforce, 190 jobs … – GeekWire

Seattle supercomputer maker Cray plans to cut 190 jobs, representing about 14 percent of its global workforce, as part of a restructuring plan meant to cut costs.

In a filing with the U.S. Securities and Exchange Commission Wednesday, the company said the layoffs will affect "all organizations and major geographies of the Company." A vast majority of the cuts are set to take effect by the end of the week, the company said in the filing.

Cray said it expects to save $25 million per year as a result of the job cuts. It will take a $10 million restructuring charge, mostly related to severance payments and employment taxes.

Cray says on its website it has more than 1,300 employees globally, with its headquarters in Seattle and engineering and manufacturing facilities in California, Minnesota, Texas and Wisconsin. It has sales and service offices around the world.

Seymour Cray is a legend in high-performance computing, and while the company he founded has gone through a number of iterations as servers evolved over the decades, it's still putting out some of the most powerful machines on the planet. The market for those machines is shrinking, however, as cloud services become more and more popular and powerful: Cray's revenue and net income declined sharply in 2016 compared to the previous year.

The Cray brand, along with much of its intellectual property and some Cray engineers were acquired in 2000 by Tera Computer Company, which immediately re-named itself after the iconic brand. Recently Cray has been working to reinvent itself for the cloud era with a new product that promises supercomputing as a service.

Continue reading here:

Supercomputer maker Cray cutting 14% of workforce, 190 jobs ... - GeekWire

New Supercomputer Brings Deep Learning Capabilities to CSIRO … – TOP500 News

Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) has deployed a Dell EMC supercomputer outfitted with NVIDIA's P100 GPUs. The system, known as Bracewell, will nearly double the computational power available to CSIRO researchers.

Source: CSIRO

The new machine was built by Dell EMC for $4 million, and is comprised of 114 PowerEdge C4130 servers hooked together with EDR InfiniBand. Aggregate memory across the entire system is 29 TB. Each server is equipped with four NVIDIA P100 GPUs and two 14-core Intel Xeon CPUs. The GPUs alone represent over 2.4 petaflops of peak performance.
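The "over 2.4 petaflops" GPU figure is consistent with NVIDIA's published double-precision peak for the NVLink P100, about 5.3 teraflops per GPU; that per-GPU number is an assumption drawn from the spec sheet rather than from this article.

# Sanity check on the aggregate GPU peak for Bracewell.
nodes = 114
gpus_per_node = 4
p100_dp_peak_tflops = 5.3                 # NVLink P100 double-precision peak (assumed)
print(f"{nodes * gpus_per_node * p100_dp_peak_tflops / 1000:.2f} petaflops")  # ~2.42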

From a flops perspective, that would easily make it the most powerful supercomputer in Australia. Before Bracewell came online, the most powerful supercomputer in the country was Raijin, a combined Fujitsu-Lenovo system installed at the National Computational Infrastructure National Facility (NCI-NF) in Canberra. It has a peak performance of 1.875 petaflops (1.676 Linpack petaflops), powered by a combination of Xeon CPUs, Xeon Phi processors, and NVIDIA P100 GPUs.

One of the early CSIRO users of Bracewell will be the Data61 Computer Vision research group, which is working on bionic vision. The team, led by Associate Professor Nick Barnes, has developed software designed to help restore sight for people with severe vision loss.

"When we conducted our first human trial, participants had to be fully supervised and were mostly limited to the laboratory, but for our next trial we're aiming to get participants out of the lab and into the real world, controlling the whole system themselves," Barnes said.

"This new system will provide greater scale and processing power we need to build our computer vision systems by optimization of processing over broader scenarios, represented by much larger sets of images, to help train the software to understand and represent the world. We'll be able to take our computer vision research to the next level, solving problems through leveraging large-scale image data that most labs around the world arent able to."

In addition to boosting the bionic vision work, the system will also provide computational support for a number of science and engineering efforts at CSIRO, including research in virtual screening for therapeutic treatments, traffic and logistics optimization, modeling of new material structures and compositions, and machine learning for image recognition and pattern analysis.

Bracewell was installed over a period of just five days spanning the end of May and beginning of June. The system came online in early July.

Read the original here:

New Supercomputer Brings Deep Learning Capabilities to CSIRO ... - TOP500 News

When 5G is here, a wireless supercomputer will follow you around – CNNMoney

AT&T (T, Tech30) on Tuesday detailed its plan to use "edge computing" and 5G to move data processing to the cloud, in order to better support these new technologies.

"[Edge computing] is like having a wireless supercomputer follow you wherever you go," AT&T said in a statement.

Rather than sending data to AT&T's core data centers -- which are often hundreds of miles away from customers -- it will be sent to the company's network of towers and offices, located closer to users.

Currently, data is either stored in those data centers or on the device itself.

"[Edge computing] gives the option now to put computing in more than two places," Andre Fuetsch, president of AT&T Labs and chief technology officer, told CNN Tech.

For example, let's say you're wearing VR glasses but the actual virtual reality experience is running in the cloud. There could be a delay in what you see when you move your head if the data center is far away.

Related: AT&T to consider splitting telecom, media divisions after Time Warner deal

AT&T aims to reduce lag time by sending data to locations much closer to you. (AT&T has agreed to acquire Time Warner, the parent company of CNN. The deal is pending regulatory approval.)

5G networks will be driving these efforts. Experts believe 5G will have barely any lag, which means a lot of the computing power currently in your smartphone can be shifted to the cloud. This would extend your phone's battery life and make apps and services more powerful.

In the case of augmented and virtual reality, superimposing digital images on top of the real world in a believable way requires a lot of processing power. Even if a smartphone can deliver that promise, it would eat up its battery life.

With edge computing, data crunching is moved from the device to the "edge" of the cloud, which is the physical points of the network that are closer to customers.
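The latency argument is partly just physics: signals in fibre travel at roughly two-thirds the speed of light, so distance alone sets a floor on round-trip time. The distances below are hypothetical examples, not AT&T figures.

# One-way propagation delay in fibre at ~2/3 the speed of light.
def one_way_delay_ms(km):
    signal_speed_m_per_s = 2e8            # approximate speed in optical fibre
    return km * 1000 / signal_speed_m_per_s * 1000

for label, km in [("core data center ~800 km away", 800),
                  ("edge site ~20 km away", 20)]:
    print(f"{label}: {one_way_delay_ms(km):.2f} ms one way")
# ~4 ms versus ~0.1 ms each way, before any processing or queuing time is added.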

Related: AT&T and Verizon halt Google ads over extremist videos

5G will also enable faster speeds and could even open the door to new robotic manufacturing and medical techniques.

AT&T is rolling out edge computing over the "next few years," beginning in dense urban areas.

CNNMoney (New York) First published July 18, 2017: 3:23 PM ET

Visit link:

When 5G is here, a wireless supercomputer will follow you around - CNNMoney

Trinity Supercomputer’s Haswell and KNL Partitions Are Merged – HPCwire (blog)

The Trinity supercomputer's two partitions, one based on Intel Xeon Haswell processors and the other on Xeon Phi Knights Landing, have been fully integrated and are now available for use on classified work in the National Nuclear Security Administration's (NNSA) Stockpile Stewardship Program, according to an announcement today. The KNL partition had been undergoing testing and was available for non-classified science work.

"The main benefit of doing open science was to find any remaining issues with the system hardware and software before Trinity is turned over for production computing in the classified environment," said Trinity project director Jim Lujan. "In addition, some great science results were realized," he said. Knights Landing is a multicore processor that has 68 compute cores on one piece of silicon, called a die. This design allows for the improved electrical efficiency that is vital for getting to exascale, the next frontier of supercomputing, and is three times as power-efficient as the Haswell processors, Archer noted.

The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the New Mexico Alliance for Computing at Extreme Scale (ACES) partnership. In June 2017, the ACES team took the classified Trinity-Haswell system down and merged it with the KNL partition. The full system, sited at LANL, was back up for production use the first week of July.

The Knights Landing processors were accepted for use in December 2016 and since then they have been used for open science work in the unclassified network, permitting nearly unprecedented large-scale science simulations. Presumably the merge is the last step in the Trinity contract beyond maintenance.

Trinity, based on a Cray XC40, now has 301,952 Xeon cores and 678,912 Xeon Phi cores, along with two pebibytes (PiB) of memory. Besides blending the Haswell and KNL processors, Trinity benefits from the introduction of solid state storage (burst buffers). This is changing the ratio of disk and tape necessary to satisfy bandwidth and capacity requirements, and it drastically improves the usability of the system for application input/output. With its new solid-state storage burst buffer and capacity-based campaign storage, Trinity enables users to iterate more frequently, ultimately reducing the amount of time to produce a scientific result.

"With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program," said Bill Archer, Los Alamos Advanced Simulation and Computing (ASC) program director. "Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex."

Trinity Timeline:

Link:

Trinity Supercomputer's Haswell and KNL Partitions Are Merged - HPCwire (blog)

CSIRO receives deep learning supercomputer from Dell EMC | ZDNet – ZDNet

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has welcomed a new supercomputer to its Canberra campus, with Dell EMC sending the new Bracewell system live earlier this month.

The new large-scale scientific computing system is expected to expand CSIRO's capability in deep learning, further its artificial intelligence (AI) progress, and allow for the exploration of virtual screening for therapeutic treatments, traffic and logistics optimisation, modelling of new material structures and compositions, machine learning for image recognition, and pattern analysis.

One of the first research teams to benefit from the new processing power will be Data61's Computer Vision group, which develops software for a bionic vision solution that aims to restore sight for those with profound vision loss.

Bracewell will help the research team scale their software to tackle new and more advanced challenges, and give them the ability to use much larger data sets to help train the software to recognise and process more images.

"This is a critical enabler for CSIRO science, engineering, and innovation," said Angus Macoustra, CSIRO deputy chief information officer and head of scientific computing. "As a leading global research organisation, it's important to sustain our global competitiveness by maintaining the currency and performance of our computing and data infrastructures."

Macoustra said the new system will nearly double the aggregate computational power available to CSIRO researchers, and will help transform the way the organisation conducts scientific research and development.

"The new Bacewell cluster is a key facility to power innovation and research," he added.

The Bracewell system comprises 114 PowerEdge C4130 servers with Nvidia Tesla P100 GPUs, NVLink, dual Intel Xeon processors, and a 100Gbps Mellanox EDR InfiniBand interconnect.

It boasts 1,634,304 CUDA Compute Cores, 3,192 Xeon Compute Cores, and 29TB of RAM.
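Those headline core counts follow directly from the configuration: 3,584 CUDA cores per Tesla P100 is NVIDIA's published figure, and the dual 14-core Xeons per server come from the TOP500 report above rather than from the ZDNet article.

# How the headline core counts decompose.
servers = 114
print(servers * 4 * 3584)   # 1,634,304 CUDA cores (four P100s per server)
print(servers * 2 * 14)     # 3,192 Xeon cores (two 14-core CPUs per server)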

Bracewell runs a dual operating system, supporting both Linux and Windows requirements.

With a budget of AU$4 million, CSIRO went to tender in November for the new supercomputing system to replace the existing Bragg accelerator cluster.

Speaking with ZDNet at the time, Macoustra said the Bragg system was used by the organisation to solve big data challenges in fields such as bioscience, image analysis, fluid dynamics modelling, and environmental science.

The Bracewell system replaces the Bragg accelerator cluster and is named after Ronald N Bracewell, an Australian astronomer and engineer who worked in the CSIRO Radiophysics Laboratory during World War II, and whose work led to fundamental advances in medical imaging.

Dell EMC was also awarded a AU$1.2 million contract for the expansion of CSIRO's Pearcey supercomputing system earlier this month.

Named after British-born Australian IT pioneer Dr Trevor Pearcey, who led the CSIRO project team that built one of the world's first digital computers, the Canberra-based Pearcey supercomputer is used to support the organisation's data-driven research to help combat the likes of post-childbirth complications in women.

The upgrade from Dell EMC now sees Pearcey comprise 349 PowerEdge M630 compute nodes, with the additional 119 boasting dual Intel Xeon 10-core CPUs, 128GB of RAM, and an FDR InfiniBand network connection that moves data across the supercomputer at 7GB/s per node with ultra-low latency.

The system also contains four individual PowerEdge R90 nodes, each with 3 terabytes of memory for large data workloads such as data analytics or life sciences; 7,300 Xeon compute cores; and 52TB of memory.

CSIRO received the Pearcey system in March last year, but in the space of 12 months, Dell EMC ANZ high-performance computing lead Andrew Underwood said the size and complexity of scientific workloads that CSIRO researchers are running on the system have continued to increase.

It is expected the expansion will enable CSIRO researchers to tackle even larger scientific simulations and datasets.

"High-performance computing technologies are increasingly becoming an essential part of Australian industry, as they allow enterprise, government, and academia to compete in global markets where the pace of innovation is 10-times faster than it was a decade ago," Underwood told ZDNet.

"The expanded Pearcey supercomputer will achieve faster results, enable bigger discoveries, and drive the creation of intellectual property from CSIRO's talented and experienced research and professional staff."

Monash University received an M3 high performance supercomputer upgrade last year, using Dell's super compute platform powered by GPU giant Nvidia.

Similarly, the Faculty of Science at the University of Western Australia also welcomed its own high-performance computing cluster to its Perth campus to assist with computational chemistry, biology, and physics.

The CSIRO also went to tender in September to find a new Advanced Technology Cluster to replace the decommissioned Fornax system at the Pawsey Supercomputing Centre in Perth, a national supercomputing joint venture between the CSIRO, Curtin University, Edith Cowan University, Murdoch University, and the University of Western Australia.

With a budget of AU$1.5 million, the CSIRO specified the new ATC was to meet the needs of the radio astronomy research community and high-end researchers in other areas of computational science, such as geosciences, nanotechnology, and biotechnology.

See the rest here:

CSIRO receives deep learning supercomputer from Dell EMC | ZDNet - ZDNet

When 5G is here, a wireless supercomputer will follow you around – WFMZ Allentown

NEW YORK (CNNMoney) - Next-generation tech like self-driving cars and augmented reality will need huge amounts of computing power.

AT&T on Tuesday detailed its plan to use "edge computing" and 5G to move data processing to the cloud, in order to better support these new technologies.

"[Edge computing] is like having a wireless supercomputer follow you wherever you go," AT&T said in a statement.

Rather than sending data to AT&T's core data centers -- which are often hundreds of miles away from customers -- it will be sent to the company's network of towers and offices, located closer to users.

For example, let's say you're wearing VR glasses but the actual virtual reality experience is running in the cloud. There could be a delay in what you see when you move your head if the data center is far away.

AT&T aims to reduce lag time by sending data to locations much closer to you. (AT&T has agreed to acquire Time Warner, the parent company of CNN. The deal is pending regulatory approval.)

5G networks will be driving these efforts. Experts believe 5G will have barely any lag, which means a lot of the computing power currently in your smartphone can be shifted to the cloud. This would extend your phone's battery life and make apps and services more powerful.

In the case of augmented and virtual reality, superimposing digital images on top of the real world in a believable way requires a lot of processing power. Even if a smartphone can deliver that promise, it would eat up its battery life.

"This solution moves the data crunching from the device to the cloud at the edge," an AT&T spokesman told CNN Tech. "[This is] one way to reduce latency, but it's less practical due to the effect on the battery or even hardware required."

The "edge" refers to the physical points of the network that are closer to customers.

5G will also enable faster speeds and could even open the door to new robotic manufacturing and medical techniques.

AT&T is rolling out edge computing over the "next few years," beginning in dense urban areas.

Read more here:

When 5G is here, a wireless supercomputer will follow you around - WFMZ Allentown

ANSYS Scales to 200K Cores on Shahin II Supercomputer – insideHPC

Today ANSYS, Saudi Aramco, and KAUST announced a new supercomputing milestone: scaling ANSYS Fluent to nearly 200,000 processor cores, enabling organizations to make critical and cost-effective decisions faster and increase the overall efficiency of oil and gas production facilities. The run represents a more than 5x increase over the record set just three years ago, when Fluent first reached the 36,000-core scaling milestone.

"Today's regulatory requirements and market expectations mean that manufacturers must develop products that are cleaner, safer, more efficient and more reliable," said Wim Slagter, director of HPC and cloud alliances at ANSYS. "To reach such targets, designers and engineers must understand product performance with higher accuracy than ever before, especially for separation technologies, where an improved separation performance can immediately increase the efficiency and profitability of an oil field. The supercomputing collaboration between ANSYS, Saudi Aramco and KSL enabled enhanced insight into complex gas, water and crude-oil flows inside a separation vessel, which include liquid free-surface, phase mixing and droplets settling phenomena."

The calculations were run on Shaheen II, a Cray XC40 supercomputer hosted at the KAUST Supercomputing Core Lab (KSL). By leveraging high performance computing (HPC), ANSYS, Saudi Aramco and KSL sped up a complex simulation of a separation vessel from several weeks to an overnight run. This simulation is critical to all oil and gas production facilities, empowering organizations around the world to reduce design development time and better predict equipment performance under varying operational conditions. Saudi Aramco will apply this technology to make more-informed, timely decisions to retrofit separation vessels to optimize operation throughout an oil field's lifetime.

"Our oil and gas facilities are among the largest in the world. We selected a complex representative application, a multiphase gravity separation vessel, to confirm the value of HPC in reducing turnover time, which is critical to our industry," said Ehab Elsaadawy, computational modeling specialist and oil treatment team leader at Saudi Aramco's Research and Development Center. "By working with strategic partner KAUST, we can now run these complex simulations in one day instead of weeks."

KSL's Shaheen II supercomputer is a Cray system composed of 6,174 nodes representing 197,568 processor cores, tightly integrated with a richly layered memory hierarchy and interconnection network.
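As a quick sanity check, those node and core counts are consistent with 32 cores per Cray XC40 node (dual 16-core Xeons), which is an assumption on our part since the article only quotes the totals:

```python
# Back-of-the-envelope check of the Shaheen II figures quoted above.
# The 32-cores-per-node figure (dual 16-core Xeons per Cray XC40 node)
# is an assumption; the article only gives the totals.
nodes = 6174
cores_per_node = 32
print(nodes * cores_per_node)   # 197568, matching the stated 197,568 cores

# The previous Fluent scaling record was 36,000 cores; the new run of
# nearly 200,000 cores is indeed "more than 5x" larger.
print(197568 / 36000)           # ~5.5
```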

"Multiphase problems are complex and require multiple global synchronizations, making them harder to scale than single phase laminar or turbulent flow simulation. Unstructured mesh and complex geometry add further complexity," said Jysoo Lee, director of the KAUST Supercomputing Core Lab. "Our scalability tests are not just designed for the sake of obtaining scalability at scale. This was a typical Aramco separation vessel with typical operation conditions, and larger core counts are added to reduce the time to solution. ANSYS provides a viable tool for Saudi Aramco to solve their design and analysis problems at full capacity of Shaheen. And for KAUST-Aramco R&D collaboration, this is our first development work. There are more projects in the pipeline."


Read more from the original source:

ANSYS Scales to 200K Cores on Shahin II Supercomputer - insideHPC

Dell EMC wraps up $4M CSIRO supercomputer build – ARNnet

Dell EMC has been revealed as the technology partner tasked with building the Australian national science agency's new $4 million supercomputer system, which went live in early July.

The tech company announced on 18 July it had worked with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to build the agency's new large-scale scientific computing system.

The project is aimed at expanding the CSIROs capability in deep learning, a key approach to furthering progress towards artificial intelligence (AI).

CSIRO put the call out for tenders in November 2016 to build the new system with a $4 million budget. At the time, the agency said it was searching for a technology partner to replace its existing BRAGG supercomputer with a petaflop-grade advanced accelerator compute cluster.

At the time, the new system was slated to be located in the same CSIRO data centre space where the BRAGG system resided, at the Information Management and Technology (IMT) facility in Canberra.

The procurement had a fixed budget of $4 million, which included hardware, software licensing, maintenance, and support requirements, installation, and commissioning costs.

Following Dell EMC's successful tender proposal, the new system was installed in just five days across May and June 2017. The system is now live and began production in early July 2017. It is expected to clock up speeds in excess of one petaflop.

The new system, named Bracewell after Australian astronomer and engineer Ronald N. Bracewell, is built on Dell EMC's PowerEdge platform.

The infrastructure includes other partner technology, such as GPUs for computation and InfiniBand networking, which ties all the compute nodes together in a low-latency, high-bandwidth fabric designed to be faster than traditional networking.

Dell EMC A/NZ high performance computing lead, Andrew Underwood, said that the installation process was streamlined and optimised for deep learning applications, with Bright Cluster Manager technology helping to put the frameworks in place.

"Our system removes the complexity from the installation, management and use of artificial intelligence frameworks, and has enabled CSIRO to speed up its time to results for scientific outcomes, which will in turn boost Australia's competitiveness in the global economy," Underwood said.

The new system includes 114 PowerEdge C4130 servers with NVIDIA Tesla P100 GPUs, NVLink, dual Intel Xeon processors and 100Gbps Mellanox EDR InfiniBand, totaling 1,634,304 CUDA compute cores, 3,192 Xeon compute cores and 29TB of RAM, managed with Bright Cluster Manager 8.0.
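Those headline totals are consistent with a fully populated configuration of four Tesla P100 GPUs (3,584 CUDA cores each) and two 14-core Xeons per C4130 node. The per-node figures are assumptions rather than something stated in the article, but the arithmetic lines up:

```python
# Rough sanity check of the Bracewell totals quoted above. The per-node
# figures (4 x Tesla P100 at 3,584 CUDA cores each, 2 x 14-core Xeons)
# are assumptions; only the system-wide totals appear in the article.
nodes = 114
print(nodes * 4 * 3584)   # 1634304 CUDA cores, matching 1,634,304
print(nodes * 2 * 14)     # 3192 Xeon cores, matching 3,192
```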

In addition to artificial intelligence, the new system is aimed at providing capability for research in areas as diverse as virtual screening for therapeutic treatments, traffic and logistics optimisation, modelling of new material structures and compositions, and machine learning for image recognition and pattern analysis.

CSIRO deputy CIO and head of scientific computing, Angus Macoustra, said the system is crucial to the organisation's work in identifying and solving emerging science problems.

"This is a critical enabler for CSIRO science, engineering and innovation," Macoustra said. "As a leading global research organisation, it's important to sustain our global competitiveness by maintaining the currency and performance of our computing and data infrastructures."

"The power of this new system is that it allows our researchers to tackle challenging workloads and ultimately enable CSIRO research to solve real-world issues. The system will nearly double the aggregate computational power available to CSIRO researchers, and will help transform the way we do scientific research and development," he said.

The system builds on Dell EMC's previous work in the high-performance computing space, including the CSIRO's Pearcey Cluster system, installed in early 2016.

The Pearcey system was designed by CSIRO and Dell, and delivers 230 nodes supporting data-intensive research and computational modelling.

The new system build also follows a number of other such systems Dell EMC has helped to build for Australian universities, such as the University of Melbourne's Spartan, Monash University's MASSIVE3 and the University of Sydney's Artemis system.

"We're proud to play a part in evolving the work happening at CSIRO and look forward to enabling scientific progress for years to come," Dell EMC A/NZ commercial and public sector lead, Angela Fox, said.

The call for the project's tender came just months after the CSIRO announced it was looking for a technology service provider to supply, install, and maintain a new Advanced Technology Cluster at its Pawsey Centre in Perth.

The proposed Pawsey Centre procurement was for a three-year contract with a fixed budget of $1.5 million, including hardware, software licensing, maintenance and support requirements, installation, and commissioning costs.

Dell EMC's latest work with the CSIRO comes as Dimension Data is awarded a $14 million, multi-year IT services contract by the agency.

Under the terms of the Dimension Data deal, the technology supplier will provide commercial off-the-shelf software, hardware, support and maintenance across networking, unified communications, IT security, and datacentre equipment via the IT provider's eProcurement portal system.

According to the CSIRO chief information officer, Brendan Dalton, the contract supports the agency's day-to-day information management and technology operations.



More:

Dell EMC wraps up $4M CSIRO supercomputer build - ARNnet

Financial Analyst Takes Critical Look at IBM Watson | TOP500 … – TOP500 News

A report published by James Kisner, an equity analyst at global investment banking firm Jeffries, shot a few holes in IBM's Watson and the company's cognitive computing strategy. Along the way, Kisner offered some interesting insights into the AI market and some of the major players competing in the space.

The thrust of the report was that even though Watson is currently one of the more mature cognitive computing platforms on the market, customer deployments have relied on expensive service and consulting engagements with IBM, which would limit broader adoption. The report also found that other firms were out-recruiting IBM for available AI talent, and that this would degrade the company's competitive position in the long term.

Kisner concluded that IBM is likely investing more money into Watson than it's currently recouping in sales. At least that's his best guess. As the report notes, IBM has been reticent to share financial data on Watson, both on the investment side and the revenue side. According to a recent 10-K disclosure from IBM, though, the company has spent $15 billion on its cognitive computing efforts from 2010 through 2015, which doesn't include the $5 billion in AI-related acquisitions, such as The Weather Channel and Truven Health. Watson R&D is certainly a decent chunk of this overall spending, but no one outside of IBM knows for sure.

Regarding the high price of servicing Watson, Kisner refers to it as a "Cadillac solution," writing: "Our checks suggest that IBM's Watson platform remains one of the most complete off-the-shelf platforms available on the marketplace. However, many new engagements require significant consulting work to gather and curate data. Our checks suggest that Watson is a 'finicky eater' when it comes to the data enterprises can feed it; in other words, IBM has very exacting standards for data preparation. The halt of and cost overruns in the MD Anderson engagement with Watson epitomize our concerns here."

The latter refers to the MD Anderson Cancer Center ditching its Watson pilot project in 2016 after switching to a new database, which would have entailed additional integration work. At that point MD Anderson had already sunk $62 million into the effort. As a result, the center was not able to deploy the technology for clinical use.

As Kisner notes, the irony here is that a significant portion of Watson's revenue is going to be generated from consultation, which is the very thing he believes will hinder its wider adoption. That's not to say that IBM can't make a going concern out of the business. As the AI space matures, there's likely to be an array of offerings from providers aimed at different levels of users, from consumers to Fortune 500 companies. IBM is going to be focused at the high end of that spectrum.

Another factor to consider is the relative value of the intellectual property in Watson, all of which lies in its software. According to Kisner, though, in the world of AI today it is data and talent that have the most value, not the algorithms. Moreover, much of the software in the AI space, especially the deep learning frameworks developed by Google, Microsoft, Amazon and others, is now open source, and thus widely accessible. Although Watson is available as a cloud service, complete with an API interface, it charges a fee ($0.0025) for each API query.
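A per-query fee of a quarter of a cent sounds negligible, but it compounds quickly at enterprise scale. The sketch below is purely illustrative; only the $0.0025 figure comes from the report, and the query volumes are hypothetical.

```python
# Illustrative monthly cost of Watson's per-query API fee.
# Only the $0.0025 fee comes from the report; the volumes are hypothetical.
FEE_PER_QUERY = 0.0025  # USD

for queries_per_month in (100_000, 10_000_000, 1_000_000_000):
    cost = queries_per_month * FEE_PER_QUERY
    print(f"{queries_per_month:>13,} queries/month -> ${cost:,.2f}")
```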

On the data side, IBM owns the meteorological dataset from its Weather Channel acquisition, as well as Truven Health's database. But compared to the data repositories available to the web giants like Amazon, Google and Facebook, IBM's data resources are much more limited in scope and size.

Talent is also a problem for IBM, says Kisner. For this, he used AI-related job openings as a sort of proxy for a company's ability to recruit individuals. In the data, companies like Amazon, Microsoft, and Apple had many more job openings in this area than IBM. (Amazon had 10 times as many as IBM.)

Here Kisner seems to be skating on somewhat thin ice. Looking at job openings ignores the fact that some companies may already have assembled a talent base, or are able to draw on employees working elsewhere in the company or brought in through acquisitions. It's notable that Google had even fewer job openings than IBM, even though it is widely considered one of the leaders in the AI space. Nevertheless, the analysis paints a competitive landscape where AI talent will likely gravitate toward the biggest users and providers of this technology, and those tend to be the hyperscale web companies.

Looking at the broader AI space, the report notes that analysts like IDC and Tractica project that the market is growing at a double-digit pace. IDC forecasts that cognitive software will increase at an 18 percent CAGR from 2016 to 2020, growing from $1.6 billion to $6.3 billion over this period. Tractica is even more bullish, predicting a 40-fold increase in the AI market from 2016 to 2025, at which point companies will be spending about $60 billion per year. And these numbers largely ignore the internal use of this technology at the big web companies.
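For context, Tractica's 40-fold increase between 2016 and 2025 implies a compound annual growth rate of roughly 50 per cent a year; the calculation below is simply arithmetic on the endpoints the report cites.

```python
# CAGR implied by Tractica's forecast of a 40-fold increase in AI
# spending between 2016 and 2025 (arithmetic on the stated endpoints).
growth_factor = 40
years = 2025 - 2016
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # roughly 51% per year
```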

Much of this growth is being enabled by advances in parallel computing and related high-performance computing gear. On this last count, the report calls out three hardware providers that are benefitting from the AI surge: NVIDIA (of course), but also Mellanox and Pure Storage. In the case of Mellanox, Jeffries analysts believe demand for high-performance interconnects will drive more revenue, with the caveat that this "doesn't yet outweigh our concerns around ramping competition from Intel's Omni-Path." For Pure Storage, a company providing all-flash storage solutions, the benefit will be derived from AI use cases with datasets of more than 10 terabytes.

None of those companies compete with IBM; in fact, they are partners. But IBM does face competition in the cognitive services space. Rivals include Microsoft, Oracle, and SAP, all of which, writes Kisner, "have significant machine learning efforts underway and may be more credible threats in the Enterprise near-term." IBM may also end up losing some market share to Cisco, which recently acquired MindMeld, a startup that has developed conversational AI technology for voice and chat assistants.

Products and services driving this rapid ramp-up include applications in robotics, augmented and virtual reality, autonomous vehicles, chatbots, language translation and analysis, and computer vision. Nearly every industry will be impacted, with transformational effects in areas such as healthcare, transportation, consumer electronics, and retail.

Nevertheless, it will take some time for AI applications to become widespread. Using Gartner forecasts as a guide, the report states that speech and image recognition will be adopted the fastest, over the next two years. Further out, in the two-to-five-year timeframe, will be products like virtual customer assistants and smart appliances. In five to ten years, the report expects smart robots, commercial drones, virtual personal assistants, and conversational user interfaces to become widespread. Autonomous vehicles are projected to go mainstream after 10 years.

The report also outlines some other aspects of the AI space, including a nice overview of the acquisition landscape, as well as the different APIs currently available. In addition, it provides a detailed financial analysis of Watson's revenue potential under a number of different scenarios. All in all, it's a good read for anyone interested in Watson or the broader AI market.

Original post:

Financial Analyst Takes Critical Look at IBM Watson | TOP500 ... - TOP500 News