Europe’s new weather forecasting supercomputer heads for Italy – Digital Journal

The computing system currently used to make medium-range weather forecasts across Europe is based in Reading, in the U.K. That computer has served its time and a new system is needed. Because of Brexit - the U.K.'s impending departure from the European Union - the member states of the European Centre for Medium-Range Weather Forecasts decided to relocate the new version. The site selected to house the next-generation machine is Bologna, Italy. Before installation work begins, full agreement is required from the Italian government, although this is expected to be a formality.

The computer will be housed in a disused tobacco factory in the Emilia-Romagna region. The conversion project will cost in the region of $55 million (50 million euros). The new computer will process vast quantities of data, drawn from satellite images together with coupled atmosphere-ocean-land models used for operational seasonal forecasting. To do this the centre works very closely with the European Space Agency.

The body in charge of the project, the European Centre for Medium-Range Weather Forecasts, is an independent intergovernmental organisation supported by most of the nations of Europe. The remit of the organisation is to provide accurate medium-range global weather forecasts out to 15 days and seasonal forecasts out to 12 months. The most important function of the computer is to provide European nations with early warning of potentially damaging severe weather.

Speaking with the BBC about the new computer, European Centre for Medium-Range Weather Forecasts Director-General Florence Rabier said: "As laid out in our 2025 Strategy launched last September, we believe that continuing to improve weather predictions relies heavily on our ability to support our science with proportionate computing power. Intermediary goals to 2020 already require that the Centre's next supercomputers should provide a tenfold increase in our computational capacity."

Continued here:

Europe's new weather forecasting supercomputer heads for Italy - Digital Journal

University of Texas supercomputer speeds real-time MRI analysis – Information Management

Researchers from the Texas Advanced Computing Center, the University of Texas Health Science Center and Philips Healthcare have developed a new, automated platform capable of real-time analyses of magnetic resonance imaging (MRI) scans in minutes, rather than hours or even days.

By leveraging the Stampede supercomputer at the University of Texas at Austin's TACC, the imaging capabilities of a Philips MRI scanner, and the TACC-developed Agave application programming interface, researchers were able to demonstrate the system's effectiveness using a T1 mapping process, which converts raw data into useful imagery.

The full circuit, from MRI scan to Linux-based supercomputer and back, took about five minutes to complete and was accomplished without any additional inputs or interventions, says William Allen, technical lead for the effort and research associate in TACC's Life Sciences Computing Group.

"It's really about the speed and flexibility. The whole point of this is to analyze the data faster," adds Allen, who notes that Philips Healthcare modified the MRI scanner software to accommodate the pipeline and enable fast, accurate image processing. "The platform that we developed gives us the ability to link the scanner to a remote supercomputing resource."

Funded by the National Science Foundation, the Stampede open science computing resource is one of the world's fastest supercomputers. It comprises a Dell PowerEdge cluster equipped with Intel Xeon Phi coprocessors, built to push the envelope of computational capability by enabling breakthroughs in computational biology and bioinformatics.

Also See: Fighting Zika: The global computing effort to stop the virus

Allen describes the Agave API as a science-as-a-service platform designed to capture different kinds of biomedical data in real time and turn them into actionable insights for providers. "It's the same analysis you would normally do with MRI, except now it's all automated," he says. "The way we've set it up is we've removed all need for human intervention."

According to Allen, the Agave API ensures that there is seamless communication between the MRI scanner and the Stampede supercomputer. "The real benefit here is the Agave platform, which grabs the data automatically as it comes off the scanner, pushing it and then quickly starting the job, and then pulling the data back once the analysis is complete."
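
Allen's description amounts to a watch-submit-poll-pull loop. A minimal sketch of that shape is below; the function names and paths are placeholders of ours, not the actual Agave API calls or TACC configuration.

```python
"""Minimal sketch of the scan-to-supercomputer round trip Allen describes.
The functions below are stand-ins, not the real Agave API or TACC setup."""
import time
from pathlib import Path

def submit_t1_job(scan_file: Path) -> str:
    """Stand-in for pushing the raw scan to the cluster and starting the T1-mapping job."""
    print(f"submitting {scan_file.name} to the remote supercomputer")
    return "job-0001"  # made-up job ID for the sketch

def job_is_done(job_id: str) -> bool:
    """Stand-in for polling the job status endpoint."""
    return True  # pretend the analysis has finished

def pull_results(job_id: str, dest: Path) -> None:
    """Stand-in for retrieving the processed images."""
    print(f"copying results of {job_id} into {dest}")

def process_new_scan(scan_file: Path, results_dir: Path) -> None:
    job_id = submit_t1_job(scan_file)      # data pushed as it comes off the scanner
    while not job_is_done(job_id):         # the whole round trip took about five minutes
        time.sleep(10)
    pull_results(job_id, results_dir)      # no human intervention at any step

process_new_scan(Path("scan_001.raw"), Path("./results"))
```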

At the same time, Allen acknowledges that the test cases the research team has conducted so far are relatively lightweight, using about 16 processing cores and up to 20 megabytes of RAM. "We're at the proof-of-concept stage," he concludes. "Once we get to more complicated analyses, with automated image segmentation and registration, we'll easily use up to 200 cores."

Allen is quick to make the point that the platform with the Agave API is not limited to MRI and could conceivably work with any medical device or instrument that gathers some sort of data and pushes it to a computer.

Researchers presented the platform at last week's International Conference on Biomedical and Health Informatics in Orlando, Fla., which was co-located with the HIMSS17 conference and exhibition.

Greg Slabodkin is managing editor of Health Data Management.

Continued here:

University of Texas supercomputer speeds real-time MRI analysis - Information Management

Tottenham news: Super Computer predicts Spurs’ finish – Football Insider

1st March, 2017, 6:41 PM

By Harvey Byrne

A Super Computer has predicted a third place finish for Tottenham when the season comes to an end in May.

Football website Football Web Pages, which runs a predicted final league table on its site using the technology, believes Spurs will drop down a place while picking up 29 points from their final 12 fixtures.

The website feeds data into a computer to calculate the end-of-season results, with each prediction recalculated after each goal is scored.

For now, it has Tottenham dropping points against Burnley in April, with a 1-0 defeat at Turf Moor, alongside 1-1 draws away at both Leicester and Hull, allowing Manchester City to overtake them.

The north London club also finished in third place last campaign, but the machine has predicted Mauricio Pochettino's side to finish with 12 more points than the 70 they managed last season.

Additionally, the predicted 82 points would have been enough to win the league last season, with Leicester finishing on 81.
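
Those figures imply Spurs currently sit on 53 points (the predicted 82 minus the 29 still to come). Football Web Pages does not say how its model works, but a rolling forecast of this kind is commonly done as a Monte Carlo simulation of the remaining fixtures; the sketch below uses made-up match probabilities purely for illustration.

```python
import random

# Illustrative only: assumed per-match probabilities, not Football Web Pages' model.
current_points = 82 - 29       # implied by the article's figures
remaining_fixtures = 12
p_win, p_draw = 0.55, 0.25

def simulate_rest_of_season() -> int:
    points = current_points
    for _ in range(remaining_fixtures):
        r = random.random()
        points += 3 if r < p_win else (1 if r < p_win + p_draw else 0)
    return points

runs = [simulate_rest_of_season() for _ in range(10_000)]
print(f"expected final points: {sum(runs) / len(runs):.1f}")
```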

Tottenham fans would certainly take the computer's results at this stage of the season, with their Champions League spot still at risk.

But many will also be keen for their side to improve on last season's third-place finish.

In other Tottenham news, Dele Alli has made a bold Spurs claim.


Continued here:

Tottenham news: Super Computer predicts Spurs' finish - Football Insider

Scientists reveal new super-fast form of computer that ‘grows as it computes’ – Phys.Org

March 1, 2017

Image: DNA double helix. Credit: public domain

Researchers from The University of Manchester have shown it is possible to build a new super-fast form of computer that "grows as it computes".

Professor Ross D King and his team have demonstrated for the first time the feasibility of engineering a nondeterministic universal Turing machine (NUTM), and their research is to be published in the prestigious Journal of the Royal Society Interface.

The theoretical properties of such a computing machine, including its exponential boost in speed over electronic and quantum computers, have been well understood for many years, but the Manchester breakthrough demonstrates that it is actually possible to physically create a NUTM using DNA molecules.

"Imagine a computer is searching a maze and comes to a choice point, one path leading left, the other right," explained Professor King, from Manchester's School of Computer Science. "Electronic computers need to choose which path to follow first.

"But our new computer doesn't need to choose, for it can replicate itself and follow both paths at the same time, thus finding the answer faster.

"This 'magical' property is possible because the computer's processors are made of DNA rather than silicon chips. All electronic computers have a fixed number of chips.

"Our computer's ability to grow as it computes makes it faster than any other form of computer, and enables the solution of many computational problems previously considered impossible.

"Quantum computers are an exciting other form of computer, and they can also follow both paths in a maze, but only if the maze has certain symmetries, which greatly limits their use.

"As DNA molecules are very small a desktop computer could potentially utilize more processors than all the electronic computers in the world combined - and therefore outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy."
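
On a conventional machine, "following both paths at once" can only be emulated by keeping every copy of the search explicitly, as in the toy maze search below; the point of the proposed NUTM is that its DNA processors would replicate physically instead of being simulated one copy at a time.

```python
from collections import deque

# A toy maze: each node lists the choices available from it.
MAZE = {
    "start": ["left", "right"],
    "left": ["dead end"],
    "right": ["goal"],
}

def explore_all_branches(start: str, goal: str):
    """Keep one copy of the search per choice taken, emulating nondeterministic branching."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in MAZE.get(path[-1], []):
            frontier.append(path + [nxt])   # one copy per possible choice
    return None

print(explore_all_branches("start", "goal"))   # ['start', 'right', 'goal']
```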

The University of Manchester is famous for its connection with Alan Turing - the founder of computer science - and for creating the first stored-program electronic computer.

"This new research builds on both these pioneering foundations," added Professor King.

Alan Turing's greatest achievement was inventing the concept of a universal Turing machine (UTM) - a computer that can be programmed to compute anything any other computer can compute. Electronic computers are a form of UTM, but no quantum UTM has yet been built.

DNA computing is the performing of computations using biological molecules rather than traditional silicon chips. In DNA computing, information is represented using the four-character genetic alphabet - A [adenine], G [guanine], C [cytosine], and T [thymine] - rather than the binary alphabet, which is a series of 1s and 0s used by traditional computers.
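
To make that four-letter alphabet concrete: each nucleotide can carry two bits of information, so arbitrary binary data maps onto a DNA strand as in the toy encoder below (an illustration of the encoding idea only, not the scheme used in the paper).

```python
# Illustrative only: a toy mapping from binary data to the 4-letter DNA alphabet.
# Each nucleotide carries 2 bits, since there are four symbols (2^2 = 4).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"maze")
print(strand)                     # CGTCCGACCTGGCGCC
assert decode(strand) == b"maze"  # round trip recovers the original bytes
```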


More information: Currin, A., Korovin, K., Ababi, M., Roper, K., Kell, D.B., Day, P.J., and King, R.D. (2017). Computing exponentially faster: Implementing a nondeterministic universal Turing machine using DNA. Journal of the Royal Society Interface (in press). Preprint: arxiv.org/abs/1607.08078



Just another form of efficient parallel processing

In wikipedia, you can look at "Biological computing" and "Amorphous Computing"

One can also look at the work of Pr. Andrew Adamatzky in reaction-diffusion computing & massive parallel computation

And at the work of MIT Amorphous Computing

And many others in the field of "Unconventional Computing" or "Unconventional Programming Paradigms"

Seems like we are just determined to create an all-powerful AI that sees us as either a nuisance or food.

What I get from this article is that I no longer need to tack on additional memory when the program needs more: it will do that on its own. This is absolutely wonderful!

A system like this could, theoretically, turn the entire cosmos into memory for certain algorithms and inputs, and still be nowhere near finished, but I assume we'd cut the machine's power supply long before that happened (that's also a solution to the all-powerful AI monster; Asimov's Three Laws of Robotics is another)...

Seems to me, implementing this won't be so easy, either in building the DNA or in creating an interface to it. And trying to simulate it with software using conventional computers means that you'll have to add processors and memory as it "grows".

Do they have any wetware that can actually do this, or is it just speculation on what one might be able to do if they did?

Our real universe, with quantum mechanics replicating parallel universes, is already such a growing, endless computer, exploring every possibility with doubles of us living all possible lives with the same past. DNA is not necessary; it is useful and easy because life uses it to keep a memory of the past.


Read the original post:

Scientists reveal new super-fast form of computer that 'grows as it computes' - Phys.Org

TSUBAME3.0 Set To Be Japan’s Largest Supercomputer – Asian Scientist Magazine

Equipped with over 2,000 of the latest NVIDIA GPUs, TSUBAME3.0 will give Japan an additional 47.2 petaFLOPS of supercomputing power.

Asian Scientist Newsroom | March 1, 2017 | Top News

AsianScientist (Mar. 1, 2017) - The Tokyo Institute of Technology (Tokyo Tech) Global Scientific Information and Computing Center (GSIC) has begun development and construction of a next-generation supercomputer called TSUBAME3.0. When it begins operations in the summer of 2017, TSUBAME3.0 will be Japan's most powerful supercomputer.

The theoretical performance of the TSUBAME3.0 is 47.2 petaFLOPS in 16-bit half precision mode or above, and once the new TSUBAME3.0 is operating alongside the current TSUBAME2.5, Tokyo Tech GSIC will be able to provide a total computation performance of 64.3 petaFLOPS in half precision mode or above, making it the largest supercomputer center in Japan.

The majority of scientific calculation requires 64-bit double precision. However, artificial intelligence (AI) and big data processing can be performed at 16-bit half precision, and TSUBAME3.0 is expected to be widely used in these fields, where demand continues to increase.

Since TSUBAME2.0 and 2.5 started operations in November 2010 as the fastest supercomputers in Japan, these computers have become "supercomputers for everyone" and have significantly contributed to industry-academia-government research and development both in Japan and overseas. These research results, and the experience gained through operating TSUBAME2.0 and 2.5 and the energy-saving supercomputer TSUBAME-KFC, were all applied in the design process for TSUBAME3.0.

As a result of Japanese government procurement for the development of TSUBAME3.0, SGI Japan, Ltd. (SGI) was awarded the contract to work on the project. Tokyo Tech is developing TSUBAME3.0 in partnership with SGI and NVIDIA, as well as other companies.

The TSUBAME series features the most recent NVIDIA GPUs available at the time, namely Tesla for TSUBAME1.2, Fermi for TSUBAME2.0, and Kepler for TSUBAME2.5. The upcoming TSUBAME3.0 will feature the fourth-generation Pascal GPU to ensure high compatibility. TSUBAME3.0 will contain 2,160 GPUs, making a total of 6,720 GPUs in operation at GSIC once it is running alongside TSUBAME2.5 and TSUBAME-KFC.

Using the latest GPUs enables improved performance and energy efficiency as well as higher speed and larger capacity storage. Overall computation speed and capacity have also been improved through the NVMe-compatible, high-speed 1.08 PB SSDs on the computation nodes, resulting in significant advances in high-speed processing for big data applications. TSUBAME3.0 also incorporates a variety of cloud technologies, including virtualization, and is expected to become the most advanced science cloud in Japan.

"Artificial intelligence is rapidly becoming a key application for supercomputing," said Mr. Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "NVIDIA's GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can tackle once unsolvable problems."

TSUBAME3.0 has a theoretical performance of 12.15 petaFLOPS in double precision mode (a rate of 12,150 trillion floating-point operations per second), performance that is set to exceed the K supercomputer. In single precision mode, TSUBAME3.0 performs at 24.3 petaFLOPS, and in half precision mode this increases to 47.2 petaFLOPS.
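
As a rough check on those figures (back-of-the-envelope arithmetic, not official numbers), the combined totals quoted above imply the following contribution from the systems already at GSIC:

```python
# Quick arithmetic on the figures quoted in this article.
tsubame3_half_pf = 47.2          # petaFLOPS, 16-bit half precision
combined_half_pf = 64.3          # TSUBAME3.0 plus the existing systems
tsubame3_gpus = 2160
combined_gpus = 6720             # once running alongside TSUBAME2.5 and TSUBAME-KFC

print(f"Existing systems contribute ~{combined_half_pf - tsubame3_half_pf:.1f} petaFLOPS "
      f"of the combined half-precision total")                    # ~17.1 petaFLOPS
print(f"GPUs already in operation at GSIC: {combined_gpus - tsubame3_gpus}")  # 4560
```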

The computational power of TSUBAME3.0 will not only be used for education and cutting-edge research within Tokyo Tech, but will continue to serve "supercomputing for everyone" through the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) and the High Performance Computing Infrastructure (HPCI), two leading information infrastructures for Japan's top universities, and GSIC's own TSUBAME Joint Usage Service.

Source: Tokyo Institute of Technology. Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.

View post:

TSUBAME3.0 Set To Be Japan's Largest Supercomputer - Asian Scientist Magazine

University of Texas supercomputer speeds real-time MRI analysis – Health Data Management


Link:

University of Texas supercomputer speeds real-time MRI analysis - Health Data Management

India Is Getting Its Most Powerful Supercomputer yet, and It’s Faster Than You Can Imagine! – The Better India (blog)

QUICK BYTES: Curated positive news from across the web.

If you thought the successful launch of a record 104 satellites by ISRO's Polar Satellite Launch Vehicle (PSLV) in a single mission would be the sole scientific highlight for India in 2017, prepare yourself for yet another chapter for the history books. Soon, the country will be unveiling its most powerful supercomputer.

According to a report in The Hindu, the supercomputer will be a million times faster than even the fastest consumer laptops. The project has been sanctioned by the government to the tune of ₹400 crore, and it will most likely come to fruition in a matter of just months.


Speaking to the publication, Madhavan Rajeevan, Secretary, Ministry of Earth Sciences, said, "The tender [to select the company that will build the machine] is ready and we hope to have it [the computer] by June."

The system will be mostly used for weather updates, specifically when it comes to forecasting the monsoons. The supercomputer, which has not been given a moniker yet, will be able to forecast the likely outcomes months in advance with accuracy.

What makes this supercomputer so special is that while India has been home to some terribly fast computers, none has gone much beyond cracking the top 200 or top 100 of the world's supercomputer rankings.

For instance, the fastest supercomputer in India, Aaditya (an iDataPlex DX360M4), is currently at the Indian Institute of Tropical Meteorology and is ranked 139th. There is also an HP Apollo 6000 XL230/250 at the Indian Institute of Technology, Delhi, ranked 217th.

While details are still being finalised, it has been decided that this particular system will be hosted by both the National Centre for Medium Range Weather Forecasting at Noida in Uttar Pradesh and the Indian Institute of Tropical Meteorology in Pune.


See original here:

India Is Getting Its Most Powerful Supercomputer yet, and It's Faster Than You Can Imagine! - The Better India (blog)

India Planning to Deploy 10-Petaflop Supercomputer – TOP500 News

India is getting ready to field the country's most powerful supercomputer to date. According to a report in The Hindu, the 10-petaflop system will be installed this June, returning India to the upper echelons of supercomputing.

The machine is to be jointly hosted by the Indian Institute of Tropical Meteorology in Pune and the National Centre for Medium Range Weather Forecasting at Noida in Uttar Pradesh. Not surprisingly, the new system will be used mostly for weather modeling, but according to the report, also for non-meteorological research such as protein folding.

The Hindu quotes Madhavan Rajeevan, Secretary, Ministry of Earth Sciences, who said the bid to select the vendor that will build the machine is ready to go, and that they hope to have the computer in place by June. The Indian government has allocated ₹400 crore, or about $60 million, for the project.

The upcoming 10-petaflop system promises to propel the nation back into the elite ranks of supercomputing stardom, something it has not enjoyed for a decade. The last time India had a top 10 system on the TOP500 list was 2007, when EKA, an HPC cluster from Hewlett Packard (now HPE) captured the number 4 spot.

SahasraT, Supercomputer Education and Research Centre

The most powerful Indian supercomputer today is SahasraT, a 1.2-petaflop (peak) system that can run Linpack at 901 teraflops. SahasraT is a Cray XC40 installed at the Supercomputer Education and Research Centre, part of the Indian Institute of Science. It is currently ranked number 133 on the TOP500 and is one of just four Indian supercomputers on the current list. From 2012 to 2015, India made a more substantial showing, claiming between 9 and 12 such systems.

The new machine may get India back into the top 10, but it's not a given. The current 10th-ranked system on the TOP500 list is Trinity, an 11-petaflop (peak) supercomputer that eked out 8.1 petaflops on Linpack. Even if no new top systems show up on the June list, the Indian machine would need a very efficient Linpack run to land a top 10 spot.
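
To put "a very efficient Linpack run" into numbers, here is simple arithmetic on the figures quoted in this article:

```python
# Rough check on what it would take to crack the top 10 (figures from the article).
trinity_rmax_pf = 8.1        # Trinity's Linpack result, petaflops
india_rpeak_pf = 10.0        # planned Indian system, peak petaflops

required_efficiency = trinity_rmax_pf / india_rpeak_pf
print(f"Linpack efficiency needed just to match Trinity: {required_efficiency:.0%}")  # 81%

# For comparison, SahasraT runs Linpack at 0.901 PF against a 1.2 PF peak:
print(f"SahasraT efficiency: {0.901 / 1.2:.0%}")   # ~75%
```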

Last year, the Indian government enacted a plan to build as many as 80 new supercomputers over the next seven years, allocating ₹4,500 crore for the effort. Many of those future systems are supposed to be domestically produced. It's not clear whether this upcoming 10-petaflop system is part of that plan or funding allocation.

Continued here:

India Planning to Deploy 10-Petaflop Supercomputer - TOP500 News

15 ton supercomputer provides access to water, energy and the internet – Phys.Org

February 27, 2017

Image credit: Shutterstock

A new machine called the Watly offers solutions to three of society's most important challenges: ensuring access to clean water, generating sustainable energy and reaping the benefits of the evolving digital revolution. Supported by funds from the Horizon 2020 project, the innovative SME behind the project is now nearly ready to unveil its first full-scale Watly machine.

1.1 billion people worldwide still do not have access to a safe source of drinking water, leading to more than 4,200 deaths from water-related diseases every day. 1.3 billion people lack access to electricity (more than a fifth of the world's population) and 5 billion worldwide still have no access to the internet. Water and energy are highly interdependent and crucial to human well-being and sustainable socio-economic development. Watly, a trailblazing SME based in Spain and Italy, has devised a truly revolutionary way to tackle all three of these challenges with one machine.

The Watly machine comprises a central array of solar panels connected to four wing units, each of which houses a bank of vapour compression distillation tubes that can boil unsafe water from sources such as rivers and produce safe, clean water fit for human consumption.

But a crucial factor is that the energy used to drive that water purification process is not the electricity generated by the panels. Instead, the process is driven by waste heat harvested from the panels by an air circulation system, an ingenious technique that Watly's founder and CEO, Marco Attisani, describes as effectively self-powering. 'It does not use any energy,' he confirmed.

In turn, this generates a number of associated benefits. These include optimisation of the solar panels, which are kept at their most effective operating temperature of 25°C irrespective of ambient conditions, and the delivery of all the generated electric power to other, more appropriate applications. These can vary from mobile phone recharging, through 'cloud' connection to the internet, to conventional electricity supply via an internal inverter that carries out DC-AC conversion.

Since March 2013, Attisani and his team have been devoted to the project and have since developed two prototypes, one of which was tested in Ghana with the support of a mixture of private funding, a crowdsourcing initiative and nearly EUR 1.5 million of funding from Horizon 2020.

The amount of power that a machine could generate rests on several important factors, though Attisani believes that 150 kWh per day could be achievable. The output of purified water from a fully functional machine operating at peak efficiency would potentially be around 5,000 litres per day. Finally, the machine's IT capabilities promise to go beyond simply supporting personal communications such as email: the company reckons that each machine could provide a wireless connectivity zone with a radius of up to 1 kilometre.
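
A back-of-the-envelope reading of those figures is below; the two-litres-per-person drinking water assumption is ours, not Watly's:

```python
import math

# Rough reading of the quoted figures; 2 L/person/day is an assumed drinking-water need.
water_litres_per_day = 5_000
wifi_radius_km = 1.0

people_served = water_litres_per_day / 2                 # assumption: 2 L/person/day
coverage_km2 = math.pi * wifi_radius_km ** 2
print(f"~{people_served:.0f} people's daily drinking water")       # ~2500
print(f"~{coverage_km2:.1f} km^2 of wireless coverage")            # ~3.1 km^2
```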

Because the purification process is one of distillation, the machine can also eradicate any type of contamination in the input water: bacterial, chemical and physical. In fact, Attisani claims the output of the purification process is so pure that its mineral content is effectively zero, something Watly has addressed by providing space for rocks to be packed into the machine so that the water can be 'remineralised'.

However, don't expect to be able to install a Watly 3.0 machine in your own home anytime soon. Given the volumes Attisani is talking about, the machine is currently 40 metres long from end to end and could cost around EUR 600,000 to 1 million, depending on the technologies built into it. Looking ahead, Attisani recently announced that the company is working with the European Space Agency to create an application that would allow the machine to guide in a drone aircraft to deliver urgent supplies in crisis zones.

The full 3.0 Watly machine is due to be unveiled in May 2017 (also the official end of the self-named Horizon 2020 WATLY project) and the company currently has the capacity to manufacture 50 machines per year, with the first five units going to customers by the end of 2017.


More information: Project page: cordis.europa.eu/project/rcn/198937_en.html



Maybe they described it badly, but it sounds like an overly complicated and expensive piece of equipment that would need a lot of servicing that no one could afford.


Original post:

15 ton supercomputer provides access to water, energy and the internet - Phys.Org

Chinese Supercomputer Tianhe-3 Set to Shatter World Speed … – 24/7 Wall St.

China is looking to break through in 2017 by harnessing high-performance processors and other key technologies to build the world's first prototype exascale supercomputer, the Tianhe-3. Ultimately, this next-generation computer is expected to be 10 times faster than the current world champion. The prototype is expected to be completed in early 2018.

Exascale refers to a computer's capability of making a quintillion (1 followed by 18 zeros) calculations per second. For a reference point, this is at least 10 times faster than the current world champion, the Sunway TaihuLight.

The current world champion is China's first supercomputer to use domestically designed processors; it has a peak speed of 125 quadrillion (1 followed by 15 zeros) calculations per second.
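
A quick scale check on those figures (simple arithmetic, not from the report):

```python
# Scale check on the figures above.
exaflop = 1e18              # one quintillion calculations per second
sunway_peak = 125e15        # 125 quadrillion calculations per second

print(f"1 exaFLOPS is {exaflop / sunway_peak:.0f}x the Sunway TaihuLight's peak")          # 8x
print(f"'10 times faster' implies a peak of about {10 * sunway_peak / 1e18:.2f} exaFLOPS")  # 1.25
```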

Meng Xiangfei, director of application at the National Super Computer Tianjin Center, commented:

"Its computing power is on the next level, cementing China as the world leader in supercomputer hardware. [It would be available for public use and] help us tackle some of the world's toughest scientific challenges with greater speed, precision and scope."

The Tianhe-3 will be entirely produced in China, from its processors to the operating system. The plan is for it to be stationed in Tianjin and fully operational by 2020, potentially earlier than the U.S. plan for its exascale supercomputer.

It's not out of the question that, while this machine is being built, another exascale computer is already in the works. These projects generally take a few years to complete and are then retired after about six to eight years.

The Tianhe-1, China's first quadrillion-level supercomputer, developed in 2009, is currently operating at full capacity and taking on more than 1,400 assignments each day.

Overall the team working with the Tianhe-3 supercomputer expects that it will generate over 10 billion yuan ($1.49 billion) in economic benefits per year.

Meng finished by saying:

"The exascale supercomputer will be able to analyze smog distribution on a national level, while current models can only handle a district. Tianhe-3 also could simulate earthquakes and epidemic outbreaks in more detail, allowing swifter and more effective government responses.

"The new machine also will be able to analyze gene sequences and protein structures in unprecedented scale and speed. That may lead to new discoveries and more potent medicine."

By Chris Lange

See more here:

Chinese Supercomputer Tianhe-3 Set to Shatter World Speed ... - 24/7 Wall St.

NASA Saves Energy, Water with Modular Supercomputer – Energy Manager Today

The supercomputer at NASA's Ames Research Center at Moffett Field, CA, is using an innovative modular approach designed to get researchers the answers they need while reducing the high levels of energy and water traditionally required by these cutting-edge machines.

Scientific Computing lays out the issue:

"All of today's modern supercomputers must be optimised in some way for energy efficiency because of the huge power consumption of large supercomputers. The Top500 is a prime example of this. Each of the top 10 systems consumes megawatts of power, with the very largest consuming in excess of 15 megawatts."

The story quotes William Thigpen, chief of NASA's Advanced Computing Branch, as saying that supercomputers use multiple megawatts of power, with 33 percent to 50 percent of it going to cooling.

The NASA system, called Electra, is expected to save 1 million kWh and 1.3 million gallons of water annually by virtue of its modular construction: computing assets are added, and thus need to be cooled, only as necessary. The system, according to the story at Scientific Computing, is designed to work within a power usage effectiveness (PUE) range of 1.03 to 1.05. The current lead supercomputer for NASA, Pleiades, runs a PUE of about 1.3.
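
PUE is total facility power divided by IT equipment power, so the quoted figures translate into overhead roughly as follows (a simplified reading, since the overhead covers all non-IT power, not cooling alone):

```python
# PUE = total facility power / IT equipment power, so non-IT overhead is PUE - 1.
def overhead_fraction(pue: float) -> float:
    """Fraction of extra (non-IT) power implied by a given PUE."""
    return pue - 1.0

for name, pue in [("Electra (upper end of design range)", 1.05), ("Pleiades", 1.3)]:
    print(f"{name}: {overhead_fraction(pue):.0%} overhead on top of IT power")
# Electra: 5% overhead; Pleiades: 30% overhead
```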

Space Daily describes Electra's flexibility. The story says that NASA is considering an expansion to 16 times its current capacity. Some of the energy benefits are indirect: since researchers can log in remotely to use Electra, pressure will be taken off the supercomputers those scientists and engineers would otherwise access. The overall benefit to the environment is thus a bit hidden, but there nonetheless.

Electra is expected to provide 280 million hours of computing time annually and is currently 39th on the U.S. TOP500 list of computer systems, according to Space Daily (Scientific Computing says Pleiades is 13th). The modular supercomputer center at Ames was built and installed by SGI/CommScope and is managed by the NASA Advanced Supercomputing Division.

Modular datacenters use the same basic approach to reduce energy use.

Read more from the original source:

NASA Saves Energy, Water with Modular Supercomputer - Energy Manager Today

Supercomputer-Powered Portal Provides Data, Simulations to Geology and Engineering Community – HPCwire (blog)

Feb. 23 - As with many fields, computing is changing how geologists conduct their research. One example: the emergence of digital rock physics, where tiny fragments of rock are scanned at high resolution, their 3-D structures are reconstructed, and this data is used as the basis for virtual simulations and experiments.

Digital rock physics complements the laboratory and field work that geologists, petroleum engineers, hydrologists, environmental scientists, and others traditionally rely on. In specific cases, it provides important insights into the interaction of porous rocks and the fluids that flow through them that would be impossible to glean in the lab.

In 2015, the National Science Foundation (NSF) awarded a team of researchers from The University of Texas at Austin and the Texas Advanced Computing Center (TACC) a two-year, $600,000 grant to build the Digital Rocks Portal, where researchers can store, share, organize and analyze the structures of porous media using the latest technologies in data management and computation.

"The project lets researchers organize and preserve images and related experimental measurements of different porous materials," said Maša Prodanović, associate professor of petroleum and geosystems engineering at The University of Texas at Austin (UT Austin). "It improves access to them for a wider geosciences and engineering community and thus enables scientific inquiry and engineering decisions founded on a data-driven basis."

The grant is part of EarthCube, a large NSF-supported initiative that aims to create an infrastructure for all available Earth system data, making the data easily accessible and useable.

Small pores, big impacts

The small-scale material properties of rocks play a major role in their large-scale behavior, whether it is how the Earth retains water after a storm or where oil might be discovered and how best to get it out of the ground.

As an example, Prodanović points to the limestone rock above the Edwards Aquifer, which underlies central Texas and provides water for the region. Fractures occupy about five percent of the aquifer rock volume, but these fractures tend to dominate the flow of water through the rock.

"All of the rain goes through the fractures without accessing the rest of the rock. Consequently, there's a lot of flooding and the water doesn't get stored," she explained. "That's a problem in water management."

Digital rock physicists typically perform computed tomography (CT) scans of rock samples and then reconstruct the material's internal structure using computer software. Alternatively, a branch of the field creates synthetic, virtual rocks to test theories of how porous rock structures might affect fluid flow.

In both cases, the three-dimensional datasets that are created are quite large, frequently several gigabytes in size. This leads to significant challenges when researchers seek to store, share and analyze their data. Even when datasets are made available, they typically only live online for a matter of months before they are erased due to space issues. This impedes scientific cross-validation.
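
To see why these volumes run to several gigabytes, note that a micro-CT dataset grows with the cube of its side length; the 2,000-voxel figure below is an illustrative size of ours, not one quoted by the researchers.

```python
# Why "several gigabytes": a CT volume's size grows with the cube of its side length.
side_voxels = 2000             # illustrative scan dimension, not from the article
bytes_per_voxel = 2            # 16-bit grayscale
volume_bytes = side_voxels ** 3 * bytes_per_voxel
print(f"{side_voxels}^3 voxels at 16 bits each = {volume_bytes / 1e9:.0f} GB")   # 16 GB
```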

Furthermore, scientists often want to conduct studies that span multiple length scales connecting what occurs at the micrometer scale (a millionth of a meter: the size of individual pores and grains making up a rock) to the kilometer scale (the level of a petroleum reservoir, geological basin or aquifer), but cannot do so without available data.

The Digital Rocks Portal helps solve many of these problems.

James McClure, a computational scientist at Virginia Tech uses the Digital Rocks Portal to access the data he needs to perform large-scale fluid flow simulations and to share data directly with collaborators.

"The Digital Rocks Portal is essential to share and curate experimentally generated data, both of which are essential to allow for re-analyses and reproducibility," said McClure. "It also provides a mechanism to enable analyses that span multiple datasets, which researchers cannot perform individually."

The Portal is still young, but its creators hope that, over time, material studies at all scales can be linked together and results can be confirmed by multiple studies.

"When you have a lot of research revolving around a five-millimeter cube, how do I really say what the properties of this are on a kilometer scale?" Prodanović said. "There's a big gap in scales and bridging that gap is where we want to go."

A framework for knowledge sharing

When the research team was preparing the Portal, they visited the labs of numerous research teams to better understand the types of data researchers collected and how they naturally organized their work.

Though there was no domain-wide standard, there were enough commonalities to enable them to develop a framework that researchers could use to input their data and make it accessible to others.

"We developed a data model that ended up being quite intuitive for the end-user," said Maria Esteva, a digital archivist at TACC. "It captures features that illustrate the individual projects but also provides an organizational schema for the data."

The entire article can be found here.

Source: Aaron Dubrow, TACC

Read this article:

Supercomputer-Powered Portal Provides Data, Simulations to Geology and Engineering Community - HPCwire (blog)

Supercomputer tests ways to divert blood from aneurysm – Futurity: Research News

Engineers have used high-performance computing to examine the best way to treat an aneurysm.

To reduce blood flow into aneurysms, surgeons often insert a flow diverter (a tiny tube of woven metal, like a stent) across the opening. The reduced blood flow into the aneurysm minimizes the risk of a rupture, researchers say.

But, if the opening, or neck, of an aneurysm is large, surgeons will sometimes overlap two diverters, to increase the density of the mesh over the opening. Another technique is to compress the diverter to increase the mesh density and block more blood flow.


A computational study published in the American Journal of Neuroradiology shows the best option is a single, compressed diverter, provided it produces a mesh denser than two overlapped diverters and covers at least half of the aneurysm opening.

"When doctors see the simulated blood flow in our models, they're able to visualize it. They see that they need to put more of the dense mesh here or there to diffuse the jets (of blood), because the jets are dangerous," says lead author Hui Meng, a mechanical engineering professor at the University at Buffalo.

Working with the university's supercomputing facility, the Center for Computational Research, Robert Damiano and Nikhil Paliwal, both PhD candidates in Meng's lab, used virtual models of three types of aneurysms: fusiform (ballooning out on all sides), and medium and large saccular (ballooning on one side). They applied engineering principles to model the pressure and speed of blood flowing through the vessels.

The engineers modeled three different diverter treatment methods: single non-compacted, two overlapped, and single compacted, and ran tests using computational fluid dynamics to determine how each would affect blood flow in and out of the aneurysm.

"We used equations from fluid mechanics to model the blood flow, and we used structural mechanics to model the devices," Damiano says. "We're working with partial differential equations that are complex and typically unsolvable by hand."

These equations are converted to millions of algebraic equations and are solved using the supercomputer. The very small size of the mesh added to the need for massive computing power.

"The diverter mesh wires are 30 microns in diameter," Paliwal says. "To accurately capture the physics, we needed to have a maximum of 10 to 15 micron grid sizes. That's why it is computationally very expensive."
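
As a toy illustration of "partial differential equations converted to algebraic equations", the one-dimensional Poisson problem below is discretized onto a grid and solved as a linear system; the team's real simulations do the analogous thing for 3-D blood flow on millions of cells.

```python
# Toy example: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretized on n interior
# points, becomes an n x n linear system -- one algebraic equation per grid point.
import numpy as np

n = 50                               # interior grid points (millions in the real runs)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)                # an arbitrary right-hand side for the demo

# Standard second-derivative finite-difference matrix.
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)            # solve the algebraic system

exact = -np.sin(np.pi * x) / np.pi**2
print("max error vs exact solution:", np.abs(u - exact).max())
```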

The models showed that compressing a diverter produced a dense mesh that covered 57 percent of a fusiform-shaped aneurysm. That proved more effective than overlapping two diverters.

The compacted diverter was less effective in saccular aneurysms. As diverters are compressed, they become wider and bump into the sides of the vessel, so they could not be compressed enough to cover a small opening of an aneurysm. Compression was more effective in a large necked saccular aneurysm, producing a dense mesh that covered 47 percent of the opening.

"Because a porous scaffold is needed to allow cell and tissue growth around the neck of the aneurysm, complete coverage using a solid diverter isn't the best option," Paliwal says. "Further, solid diverters could risk blocking off smaller arteries."

The team next would like to look back over hundreds of previous cases, to determine how blood flow was affected by the use of diverters. The idea is to build a database so that more definitive conclusions can be drawn.

"We're going to look at and model previous cases, and hopefully we'll have a way to determine the best treatment to cause the best outcome for new aneurysm cases," Damiano says.

Source: University at Buffalo

See original here:

Supercomputer tests ways to divert blood from aneurysm - Futurity: Research News

IBM super-computer will overhaul NYC 311 – New York’s PIX11 / WPIX-TV

IBM super-computer will overhaul NYC 311
New York's PIX11 / WPIX-TV
NYC 311 handled nearly 36 million reports in 2016. The first overhaul of the system is now in the works and it should create a smarter system. Over the next 18 months, IBM will install networks and systems. It includes "Watson," which is the company's ...

Originally posted here:

IBM super-computer will overhaul NYC 311 - New York's PIX11 / WPIX-TV

Wyoming Starts its Largest Climate Change Supercomputer, Cheyenne – The Green Optimistic (blog)

In Cheyenne, Wyoming, a supercomputer has started extensive climate-change research, notwithstanding those who doubt global warming. Now there is concern among scientists that the research might lose its funding under the Trump administration.

The federally funded supercomputer is worth $30 million. It began operating just a few weeks ago, modeling air currents at wind farms and predicting the weather months in advance.

Cheyenne is the 20th fastest supercomputer in the world, replacing the supercomputer Yellowstone. Additionally, Cheyenne is 240,000 times faster than a brand new laptop, and it makes 5.34 quadrillion calculations per second.
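
Dividing those two figures gives a sense of what "a brand new laptop" means here (simple arithmetic on the numbers above):

```python
# What "240,000 times faster than a brand new laptop" implies for the laptop.
cheyenne_flops = 5.34e15          # 5.34 quadrillion calculations per second
speedup = 240_000
laptop_flops = cheyenne_flops / speedup
print(f"implied laptop speed: ~{laptop_flops / 1e9:.0f} gigaFLOPS")   # ~22 gigaFLOPS
```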

Although it has been supported by the state, Cheyenne is a source of concern for global-warming doubters, and it is not the only one. In 2012, the fossil fuel industry asked the University of Wyoming to remove an artwork that raised awareness of climate change, and the state has also debated whether K-12 students should be taught about climate change.

Gov. Matt Mead, a self-described climate change skeptic, supports the supercomputer as a way to improve Wyoming's technology sector. Scientists still fear that Trump might cut funding for such projects, which matters for the supercomputer because 70% of its funding comes from the National Science Foundation. In response, 800 U.S. scientists, including some from the University of Wyoming, signed a letter urging Trump to take climate change seriously.

The supercomputer's role is very important in predicting weather and analyzing climate change. Regarding Cheyenne's importance, Rich Loft, a supercomputing specialist at the National Center for Atmospheric Research, said:

"We believe that doing better predictions of those things has apolitical benefits: saving lives and saving money, and improving outcomes for businesses and farmers."

Supplying approximately 40% of the U.S.'s coal, Wyoming depends on its coal, oil and natural gas resources. The state has accordingly put $15 million into a power plant project to study carbon capture.

Nevertheless, the supercomputer consumes 1.5 megawatts, equivalent to the electricity used by 75 homes, though some of its power comes from a wind farm 7 miles away.

No matter what, the scientists in Wyoming aim to get the most out of the supercomputer to better analyze climate change.

[via AP]

See original here:

Wyoming Starts its Largest Climate Change Supercomputer, Cheyenne - The Green Optimistic (blog)

China’s new supercomputer to be 10 times faster – The Hindu

China has started to build a new-generation supercomputer that is expected to be 10 times faster than the current world champion, a media report said.

This year, China is aiming for breakthroughs in high-performance processors and other key technologies to build the worlds first prototype exascale supercomputer, the Tianhe-3, said Meng Xiangfei, the director of application at the National Super Computer Tianjin Centre, on Monday.

The prototype is expected to be completed by 2018, the China Daily reported.

"Its computing power is on the next level, cementing China as the world leader in supercomputer hardware," Meng said.

"It would be available for public use and help us tackle some of the world's toughest scientific challenges with greater speed, precision and scope," he added.

Tianhe-3 will be made entirely in China, from processors to operating system. It will be stationed in Tianjin and fully operational by 2020, earlier than the US plan for its exascale supercomputer, he said.

Tianhe-1, Chinas first quadrillion-level supercomputer developed in 2009, is now working at full capacity, undertaking more than 1,400 assignments each day, solving problems from stars to cells.

The exascale supercomputer will be able to analyse smog distribution on a national level, while current models can only handle a district, the daily said. Tianhe-3 could also simulate earthquakes and epidemic outbreaks in more detail, allowing swifter and more effective government responses, Meng said. The new machine will also be able to analyse gene sequences and protein structures at unprecedented scale and speed, which may lead to new discoveries and more potent medicine, he said. - IANS

Exascale means it will be capable of making a quintillion (1 followed by 18 zeros) calculations per second. That is at least 10 times faster than the world's current speed champ, the Sunway TaihuLight, China's first supercomputer to use domestically designed processors. That computer has a peak speed of 125 quadrillion (1 followed by 15 zeros) calculations per second.

Read more here:

China's new supercomputer to be 10 times faster - The Hindu

Google Rolls Out GPU Cloud Service – TOP500 News

The largest Internet company on the planet has made GPU computing available in its public cloud. Google announced this week that it has added the NVIDIA Tesla K80 to its cloud offering, with more graphics processor options on the way. The search giant follows Amazon, Microsoft and others into the GPU rental business.

According to a blog post published Tuesday, a user can attach up to four K80 boards, each of which houses two Kepler-generation GK210 GPUs and a total of 24GB of GDDR5 memory. The K80 delivers 2.9 teraflops of double precision performance or 8.73 teraflops of single precision performance, the latter being the more relevant metric for deep learning applications. Since we're talking about a utility computing environment, a user may choose to rent just a single GPU (half a K80 board) for their application.

The initial service is mainly aimed at AI customers, but other HPC users should take note as well. Although Google has singled out deep learning as a key application category, the company is also targeting other high performance computing applications, including computational chemistry, seismic analysis, fluid dynamics, molecular modeling, genomics, computational finance, physics simulations, high performance data analysis, video rendering, and visualization.

Google's interest in positioning its GPU offering toward deep learning is partially the result of the in-house expertise and software the company has built in this area over the last several years. The new cloud-based GPU instances have been integrated with Google's Cloud Machine Learning (Cloud ML), a set of tools for building and managing deep learning codes. Cloud ML uses the TensorFlow deep learning framework, another Google invention, which is now maintained as an open source project. Cloud ML helps users employ multiple GPUs in a distributed manner so that applications can be scaled up, the idea being to speed execution.

The Tesla K80 instance is initially available as a public beta release in the Eastern US, Eastern Asia and Western Europe. Initial pricing is $0.70 per GPU/hour in the US, and $0.77 elsewhere. However, that doesn't include any host processors or memory. Depending on what you want, that can add as little as $0.05 per hour (for one core and 3.75 GB of memory), all the way up to more than $2 per hour (for 32 cores and 208 GB of memory). For a more reasonable configuration, say four host cores and 15 GB of memory, an additional $0.20 per hour would be charged.

That would make it roughly equivalent to the GPU instance pricing on Amazon EC2 and Microsoft Azure, which include a handful of CPU cores and memory by default. Both of those companies, which announced GPU instances for their respective clouds in Q4 2016, have set their pricing at $0.90 per GPU/hour. For users willing to make a three-year commitment, Amazon will cut the cost to $0.425 per GPU/hour via its reserved instance pricing.

IBM's SoftLayer cloud also has a number of GPU options, but it rents out complete servers rather than individual graphics processors. A server with a dual-GPU Tesla K80, two eight-core Intel Xeon CPUs, 128 GB of RAM, and a couple of 800GB SSDs will cost $5.30/hour. Other K80 server configurations are available for longer terms, starting at $1,359/month.

At this point, HPC cloud specialist Nimbix has what is probably the best pricing for renting GPU cycles. It offers a K80-equipped server, so two GPUs, with four host cores and 32 GB of main memory for $1.06/hour. That's substantially less expensive than any of the other cloud providers mentioned, assuming your application can utilize more than a single GPU. Nimbix is also the only cloud provider that currently offers a Tesla P100 server configuration, although that will cost you $4.95 per hour.
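
Putting the quoted prices on a per-GPU-hour basis makes the comparison easier to see (a rough comparison only, since host CPU, memory and storage differ across the offers):

```python
# Per-GPU hourly cost for the configurations quoted above (prices from the article).
offers = {
    "Google (1 GPU + 4 cores/15 GB host)": 0.70 + 0.20,
    "Amazon EC2 / Microsoft Azure":        0.90,
    "Amazon EC2 (3-yr reserved)":          0.425,
    "IBM SoftLayer (K80 server, 2 GPUs)":  5.30 / 2,
    "Nimbix (K80 server, 2 GPUs)":         1.06 / 2,
}
for name, per_gpu_hour in sorted(offers.items(), key=lambda kv: kv[1]):
    print(f"{name:40s} ${per_gpu_hour:.3f} per GPU-hour")
```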

Even though the initial GPU offering from Google is confined to the Tesla K80 board, the company is promising that NVIDIA Tesla P100 and AMD FirePro configurations are coming soon. The specific AMD device is likely to be the FirePro S9300 x2, a dual-GPU board that offers up to 13.9 teraflops of single precision performance. When Google previewed its accelerator rollout last November, it implied the FirePro S9300 x2 would be aimed at cloud customers interested in GPU-based remote workstations. The P100 is NVIDIA's flagship Tesla GPU, delivering 5.3 or 10.6 teraflops of double or single precision performance, respectively.

At this point, Google is in third place in the fast-growing public cloud space, trailing Amazon and Microsoft, in that order. Adding a GPU option is not likely to change that, but it does illustrate that graphics processor-based acceleration is continuing to spread across the IT datacenter landscape. Whereas GPU acceleration was once confined to HPC, with the advent of hyperscale-based machine learning it quickly became standard equipment for web companies training neural networks. Now that more enterprise customers are looking to mine their own data for machine learning purposes, the GPU is getting additional attention. And for traditional HPC, many of the more popular software packages have already been ported to GPUs.

This all might be good news for Google, but it's even better news for NVIDIA, and to a lesser extent AMD, which still stands to benefit from the GPU computing boom despite its less cohesive strategy. NVIDIA just announced record revenue of $6.9 billion for fiscal 2017, driven in part by the Tesla datacenter business. That can only get better as GPU availability in the cloud becomes more widespread.

Follow this link:

Google Rolls Out GPU Cloud Service - TOP500 News

Tianhe-3: China says its world-first exascale supercomputer will be ready by 2020 – Deutsche Welle

Fast isn't the word. If China's Tianhe-3 supercomputer manages to hit the exascale mark, it will handle one quintillion calculations per second.

NB: 1 quintillion = 1,000,000,000,000,000,000 (yep, that's 18 zeros)

Meng Xiangfei, a director at the National Supercomputer Center in Tianjin, told the China Daily newspaper that his institute aims to have a prototype of its Tianhe-3 ready by 2018. For that, it will need breakthroughs in high-performance processors. But Meng is confident.

If they succeed, Tianhe-3 will be 10 times faster than the current fastest supercomputer in the world, the Sunway TaihuLight.

The Sunway runs at 93 petaFLOPS, with a reported peak speed of 125 quadrillion calculations per second.

1 quadrillion = 1,000,000,000,000,000 (15 zeros)

"Its computing power is on the next level," Meng told the newspaper. "It will help us tackle some of the world's toughest scientific challenges with greater speed, precision and scope."

The Tianhe-3 will be measured in exaFLOPS.

Its sibling, the Tianhe-2, runs at 34 petaFLOPS, while the USA's next best, Titan, creaks in at 18 petaFLOPS.

If the Tianhe-3 breaks the exa-barrier, its processing speed will leave the rest for dead - which is probably a good thing, as supercomputers don't have the longest life expectancy.

Super, but compared to what?

How can we even begin to imagine the Tianhe-3's processing speeds?

Well, one of the world's first computers (as we know them) was the Zuse Z3. It was a programmable, digital computer. Based on the same Boolean theory that gave us the zeros and ones of modern computing, the Z3 was the first solid implementation of so-called "flip-flops" and what became "floating point" arithmetic.

A computer's processing speed is measured (in part) by the number of floating-point operations it can handle per second - which is why we refer to a FLOP or FLOPS.

In 1941, the Z3's average single calculation speeds were 0.8 seconds for addition and 3 seconds for multiplication.

Fast-forward 70 years or so and the average smartphone will perform addition and multiplication almost before we've finished entering the numbers. Imagine that: predictive math!

Smartphone speeds tend to be measured in gigaFLOPS (1 GFLOPS = 1,000,000,000 FLOPS), but it's hard to get a good read on the latest models as the manufacturers are so competitive and, as a result, secretive. It is said, however, that Apple's A-series chips, which use graphics technology from Imagination Technologies, are years ahead of Qualcomm's Snapdragon chips, which Samsung and Google use in their phones.

Gaming consoles are a lot faster than smartphones, but are still nothing compared to a supercomputer. It would take more than 18,000 PlayStation 4s to match the Tianhe-2 - which, to remind us, is half as fast as China's Sunway supercomputer, and that is 10 times slower than the Tianhe-3 will be.
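
Stacking up the speeds mentioned in this piece (the PS4 peak of roughly 1.84 teraFLOPS is our assumption; the other figures are quoted above):

```python
# Comparing the speeds mentioned in this article; PS4 peak is an assumed figure.
z3_adds_per_second = 1 / 0.8                 # one addition every 0.8 s in 1941
ps4_flops = 1.84e12                          # assumption: PS4 peak, ~1.84 teraFLOPS
tianhe2_flops = 34e15                        # 34 petaFLOPS
sunway_flops = 93e15                         # 93 petaFLOPS
exaflop = 1e18

print(f"PS4s needed to match Tianhe-2: ~{tianhe2_flops / ps4_flops:,.0f}")   # ~18,478
print(f"Sunway vs a 1-gigaFLOPS phone:  {sunway_flops / 1e9:,.0f}x faster")
print(f"An exaFLOPS machine vs the Z3:  {exaflop / z3_adds_per_second:.1e}x faster")
```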

Like I said, fast just isn't the word. But then, the Tianhe-3 won't be a toy. Chinese scientists hope to use it to analyze smog distribution, gene sequences and protein structures to help develop new medicines. They also say it will simulate earthquakes and epidemic outbreaks in greater detail, "allowing swifter and more effective government responses."

See the article here:

Tianhe-3: China says its world-first exascale supercomputer will be ready by 2020 - Deutsche Welle

Supercomputer CIO wins Fed Govt Benchmarks gong – Strategy … – iT News

The Bureau of Meteorology's replacement of its critical supercomputer, on time and on budget, has earned its CIO the top Federal Government prize in the 2017 iTnews Benchmark Awards.

BoM tech chief Lesley Seebeck was named the federal government CIO of the year for managing to deliver a world-class supercomputer - a foundation stone of her agency's data crunching abilities - without any overspend or delays.

Seebeck's team correctly predicted several years ago that the agency's high performance computing system would no longer be up to the task of processing BoM's complex climate modelling by 2016.

The switch to a new Cray XC40 facility last September was seamless, and the power offered by the kit cements the bureau within the top ten meteorological agencies in the world.

Seebeck rose to the top of a competitive category that included the establishment ofGovCMS by John Sheridan at the Department of Finance, and the creation of myTax by the Australian Taxation Office and CIO Ramez Katf.

Winners were announced at the iTnews Benchmark Awards held as part of Adapt Venture'sCIO Edge Experienceat the Grand Hyatt Melbourne.

Excerpt from:

Supercomputer CIO wins Fed Govt Benchmarks gong - Strategy ... - iT News