Supercomputer Sales Drove 2016 HPC Market Up to Record $11.2 Billion – HPCwire (blog)

A 26.2 percent jump in supercomputer spending helped lift the overall 2016 HPC server segment by 4.4 percent, according to a brief report released by Hyperion Research yesterday. The big drag on growth was a 19.3 percent decline in sales of departmental HPC servers. Nevertheless, the overall HPC server market set a new record at $11.2 billion, up from $10.7 billion in 2015 and surpassing 2012's high-water mark of $11.1 billion.

Hyperion may provide more complete numbers for the full year at the HPC User Forum being held in Santa Fe, NM, in two weeks. Hyperion is the former IDC HPC group, which was spun out of IDC as part of its acquisition by companies based in China (see the HPCwire article, "IDC's HPC Group Spun Out to Temporary Trusteeship").

The 2016 year-over-year market gain was driven by strong revenue growth in high-end and midrange HPC server systems, partially offset by declines in sales of lower-priced systems, according to the Hyperion release. Brief summary:

"HPC servers have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness. For this reason, nations and regions across the world, as well as businesses and universities of all sizes, are increasing their investments in high performance computing," said Earl Joseph, CEO of Hyperion Research. "In addition, the global race to achieve exascale performance will drive growth in high-end supercomputer sales."

"Another important factor driving growth is the market for big data needing HPC, which we call high performance data analysis, or HPDA," according to Steve Conway, Hyperion Research senior vice president for research. "HPDA challenges have moved HPC to the forefront of R&D for machine learning, deep learning, artificial intelligence, and the Internet of Things."

Getting used to the new Hyperion name may take a while, but its senior members, all from IDC, say there should be little change, at least in the near term. Below is the company's self-description.

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide. As Hyperion Research, the team continues all the worldwide activities that have made it the world's most respected HPC industry analyst group for more than 25 years, including HPC and HPDA market sizing and tracking, subscription services, custom studies and papers, and operating the HPC User Forum. For more information, see http://www.hpcuserforum.com.

See the original post:

Supercomputer Sales Drove 2016 HPC Market Up to Record $11.2 Billion - HPCwire (blog)

TalkSPORT Super Computer predicts Tottenham end of season finish – Football Insider

6th April, 2017, 10:51 PM

By Harvey Byrne

A Super Computer tasked with the job of predicting the final Premier League table has placed Tottenham in second place.

Popular radio station talkSPORT have provided an April update on their regular feature in which they aim to predict the end-of-season results by feeding data into their prediction machine.

Now, the computer has tipped Spurs to achieve a second-place finish in this season's Premier League, which is the position they are currently occupying.

Mauricio Pochettino's side still have to travel to Chelsea's Stamford Bridge, alongside home fixtures against Arsenal and Manchester United, before the campaign's end in eight matches' time.

A second-place finish would mark progression for Tottenham after they finished third the previous season.

However, many associated with the north London club will still have their eyes set on a title challenge with just seven points currently separating them from league leaders Chelsea.

Other noteworthy placings made by the talkSPORT machine include Tottenham's fierce rivals Arsenal in fourth, which would see them continue their 20-year streak of finishing in the top four.

Meanwhile, current third placed team Liverpool have been predicted to finish as low as sixth.

In other Tottenham news, here is Spurs' best possible line-up to face Watford at the weekend.

Continued here:

TalkSPORT Super Computer predicts Tottenham end of season finish - Football Insider

LOLCODE: I Can Has Supercomputer? – HPCwire (blog)

What programming model refers to threads as friends and uses types like NUMBR (integer), NUMBAR (floating point), YARN (string), and TROOF (Boolean)? That would be the internet-meme-based procedural programming language, known as LOLCODE. Inspired by lolspeak and the LOLCAT meme, the esoteric programming language was created in 2007 by Adam Lindsay at the Computing Department of Lancaster University.

Now a new research effort is looking to use the meme-based language as a tool to teach parallel and distributed computing concepts of synchronization and remote memory access.

It's a common complaint in high-performance computing circles: computer science curricula don't give sufficient attention to parallel computing, especially at the undergraduate level. In this age of multicore ubiquity, the need for parallel programming expertise is even more urgent. Is there a way to make teaching parallel and distributed computing more approachable? Fun, even?

That's the focus of the new research paper from David A. Richie (Brown Deer Technology) and James A. Ross (U.S. Army Research Laboratory), which documents the duo's efforts to implement parallel extensions to LOLCODE within a source-to-source compiler, sufficient for the development of parallel and distributed algorithms normally implemented using conventional high-performance computing languages and APIs.

From the introduction:

The modern undergraduate demographic has been born into an internet culture where poking fun at otherwise serious issues is considered cool. Internet memes are the cultural currency by which ideas are transmitted through younger audiences. This reductionist approach using humor is very effective at simplifying often complex ideas. Internet memes have a tendency to rise and fall in cycles, and as with most things placed on the public internet, they never really go away. In 2007, the general-purpose programming language LOLCODE was developed and resembled the language used in the LOLCAT meme which includes photos of cats with comical captions, and with deliberate pattern-driven misspellings and common abbreviations found in texting and instant messenger communications.

The researchers have developed a LOLCODE compiler and propose minor syntax extensions to LOLCODE that create parallel programming semantics, enabling the compilation of parallel and distributed LOLCODE applications on virtually any platform with a C compiler and an OpenSHMEM library.

They are targeting the inexpensive Parallella board, as it is an ideal educational and developmental platform for introducing parallel programming concepts.

"We demonstrate parallel LOLCODE applications running on the $99 Parallella board, with the 16-core Adapteva Epiphany coprocessor, as well as (a portion of) the $30 million US Army Research Laboratory supercomputer," they write.

Since its 2007 launch, LOLCODE development has occurred in spurts, with activity tending to occur in early April. See also: "I can has MPI," a joint Cisco and Microsoft Cross-Animal Technology Project (CATP) that introduced LOLCODE language bindings for the Message Passing Interface (MPI) in 2013.

Learn to LOLCODE at http://lolcode.codeschool.com/levels/1/challenges/1

See the original post:

LOLCODE: I Can Has Supercomputer? - HPCwire (blog)

In the early 80s, CIA showed little interest in "supercomputer" craze – MuckRock

April 6, 2017

As you can see from my letter, CIA has no use for a supercomputer now or in the immediate future.

In 1983, cybermania gripped the nation: the movie WarGames was released over the summer, becoming a blockbuster hit for its time and intriguing President Ronald Reagan enough to summon his closest advisors to study emerging cyberthreats and ultimately pass the first directive on cybersecurity. But according to declassified documents, made fully public thanks to MuckRock's lawsuit, one intelligence agency took a hard pass on the computer craze.

Other agencies entreated William J. Casey's Central Intelligence Agency to get involved. The National Security Agency was convening some of the nation's best and brightest to develop a strategy for staying on top of the processing arms race.

In fact, the year before, the White House itself had sent Casey a memo asking that he designate someone to weigh in on supercomputer R&D policies:

But while Casey acknowledged that supercomputers were really important, and he was flattered that other agencies picked the CIA to be on their team, it just wasn't something he felt comfortable devoting agency resources to:

In fact, not only did the agency not want to be part of the federal supercomputer club, but in a 1983 survey it said it didn't own or have access to a supercomputer, nor did it have plans to start using one:

Why the apparent lack of concern? Maybe it had to do with an undated, unsourced (and possibly culled from public sources) report which found that U.S. cybercapabilities were still years ahead of the real threat: Russia.

But at some point, the director appears to have conceded that, for better or worse, supercomputers were not just another fad and he'd best start figuring out what exactly they were all about. Two memos from 1984 show his vigorous interest in getting up to speed on the subject.

The first response came regarding a memo on the increasing Japanese advantage when it came to building out Fifth Generation supercomputers.

The second memo came after he requested a copy of a staffer's Spectrum Magazine, which is IEEE's monthly publication. Apparently, the director had a legendary, perhaps even alarming, appetite for reading materials.

The NSA's presentation on supercomputers is embedded below.

Image via 20th Century FOX

See original here:

In the early 80s, CIA showed little interest in "supercomputer" craze - MuckRock

Your chance to become a supercomputer superuser for free – The Register

Accurate depiction of you after attending the lectures. Pic: agsandrew/Shutterstock

HPC Blog The upcoming HPC Advisory Council conference in Lugano will be much more than just a bunch of smart folks presenting PowerPoints to a crowd. It will feature a number of sessions designed to teach HPCers how to better use their gear, do better science, and generally humble all those around them with their vast knowledge and perspicacity.

The first "best practices" session will feature Maxime Martinasso from the Swiss National Supercomputing Center discussing how MeteoSwiss (the national weather forecasting institute) uses densely populated accelerated servers to compute its weather forecast simulations.

However, when you have a lot of accelerators attached to the PCI bus of a system, you're going to generate some congestion. How much congestion will you get and how do you deal with it? They've come up with an algorithm for computing congestion that characterises the dynamic usage of network resources by an application. Their model has been validated as 97 per cent accurate on two different eight-GPU topologies. Not too shabby, eh?

Another best practice session also deals with accelerators, discussing a dCUDA programming model that implements device-side RMA access. What's interesting is how they hide pipeline latencies by over-decomposing the problem and then over-subscribing the device by running many more threads than there are hardware execution units. The result is that when a thread stalls, the scheduler immediately proceeds with the execution of another thread. This fully utilises the hardware and leads to higher throughput.
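
To make the over-subscription idea concrete, here is a toy Python sketch (not the dCUDA model itself, and with no GPU involved): cooperative asyncio tasks stand in for device threads, and a simulated remote-memory stall lets the scheduler switch to another runnable task, so total wall time stays close to a single latency period rather than growing with the number of tasks.

```python
import asyncio
import time

REMOTE_LATENCY = 0.01   # pretend each remote memory access stalls for 10 ms
N_TASKS = 256           # far more "threads" than execution units (over-subscription)

async def worker(i):
    # a tiny bit of local compute ...
    _ = sum(range(1000))
    # ... then a stall on a "remote access"; the scheduler switches to another task
    await asyncio.sleep(REMOTE_LATENCY)
    return i

async def main():
    start = time.perf_counter()
    await asyncio.gather(*(worker(i) for i in range(N_TASKS)))
    # wall time stays close to one latency period, not N_TASKS * latency,
    # because stalled tasks overlap with runnable ones
    print(f"elapsed: {time.perf_counter() - start:.3f}s")

asyncio.run(main())
```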

We will also see a best practices session covering SPACK, an open-source package manager for HPC applications. Intel will present a session on how to do deep learning on their Xeon Phi processor. Dr Gilles Fourestey will discuss how Big Data can be, and should be, processed on HPC clusters.

Pak Lui from Mellanox will lead a discussion on how to best profile HPC applications and wring the utmost scalability and performance out of them. Other session topics include how to best deploy HPC workloads using containers, how to use the Fabriscale Monitoring System, and how to build a more efficient HPC system.

Tutorials include a twilight session on how to get started with deep learning (you'll need to bring your own laptop to this one), using EasyBuild and Continuous Integration tools, and using SWITCHengines to scale horizontally campus wide.

Phew, that's a lot of stuff... and it's all free, provided you register for the event and get yourself to Lugano by 10 April. I'll be there covering the event, so be sure to say hi if you happen to see me.

View post:

Your chance to become a supercomputer superuser for free - The Register

Credit Card-Sized Super Computer That Powers AI Such As Robots And Drones Unveiled By Nvidia – Forbes


A supercomputer the size of a credit card that can power artificial intelligence (AI) such as robots, drones and smart cameras has been unveiled by computer graphics firm Nvidia. Revealed at an event in San Francisco, the super intelligent yet tiny ...
Related coverage: "Nvidia Jetson TX2: Credit card-sized supercomputer looks to fuel AI development" (The INQUIRER); "Nvidia shows off Jetson supercomputer" (Fudzilla); "Nvidia Unveils Jetson TX2: Pocket-Sized Supercomputer Doubles TX1 Performance, Powers Drones With AI, And More" (Tech Times).

Read more from the original source:

Credit Card-Sized Super Computer That Powers AI Such As Robots And Drones Unveiled By Nvidia - Forbes

Watson supercomputer working to keep you healthier – Utica Observer Dispatch

Amy Neff Roth

Watson, the Jeopardy-winning celebrity supercomputer, is bringing his considerable computing capability to bear in Central New York.

Watson and the folks at IBM Watson Health will be working with regional health care providers to help keep area residents healthier.

The providers are all part of the Central New York Care Collaborative, which includes more than 2,000 providers in six counties, including Oneida and Madison.

"What we're doing is working with partners and all different types of health care providers: hospitals, physicians, primary care physicians in particular, long-term-care facilities, behavioral health and substance abuse-type facilities, community benefit organizations, every type of health care organization," said Executive Director Virginia Opipare. "We are working to build and connect a seamless system for health care delivery that moves this region and helps to prepare this region for a value-based pay environment."

That pay system is one in which providers are paid for health outcomes and the quality of care provided, not a set fee for each service delivered. It's forcing providers to work together to create a more seamless system of care to keep patients healthier.

That's where Watson comes in. The collaborative has partnered with IBM Watson Health to work on population health management, a huge buzz concept in health care in which providers work to keep patients from needing their services. That's good for patients and good for health care costs.

To do that, IBM and Watson will gather data from providers' 44 different kinds of electronic health records and state Medicaid claims data, normalize and standardize the data, and analyze it. That way providers can see all the care their patients have received and can figure out how to best help each patient, and over time, the collaborative can learn about how to keep patients healthy.
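
As a rough illustration of what "normalize and standardize" can mean for heterogeneous records, here is a minimal Python sketch; the field names and aliases are invented for the example and are not drawn from the actual EHR systems involved.

```python
# Hypothetical sketch: map records from different EHR exports onto one common schema.
# Field names ("pt_id", "PatientID", ...) are invented for illustration only.
CANONICAL_FIELDS = {
    "patient_id": ["pt_id", "PatientID", "member_id"],
    "encounter_date": ["visit_dt", "EncounterDate", "svc_date"],
    "diagnosis_code": ["dx", "ICD10", "diag_code"],
}

def normalize(record: dict) -> dict:
    """Return a record keyed by canonical field names, dropping unknown keys."""
    out = {}
    for canon, aliases in CANONICAL_FIELDS.items():
        for key in [canon, *aliases]:
            if key in record:
                out[canon] = record[key]
                break
    return out

print(normalize({"PatientID": "A123", "visit_dt": "2017-04-01", "dx": "C43.9"}))
```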

"This is about identifying high-risk individuals and using Watson-based tools and services to help providers engage with patients to improve health," said Dr. Anil Jain, vice president and chief health informatics officer, value-based care, at IBM Watson Health, in a release. "As the health care industry shifts away from fee-for-service to a value-based system, care providers need integrated solutions that help them gain a holistic view of each individual within a population of patients."

The first wave of implementation should come within the next six months, Opipare said.

The CNY Care Collaborative is one of 25 regional performing-provider organizations in the state organized under the state's Delivery System Reform Incentive Payment program to reshape health care, with a goal of cutting unnecessary hospital readmissions by 25 percent in five years. Organizations apply for state funding for projects chosen from a list of possibilities. The program is funded by $6.42 billion set aside under a federal waiver that allowed the state to keep $8 billion of federal money saved by the state's redesign of Medicaid.

Follow @OD_Roth on Twitter or call her at 315-792-5166.

Read the rest here:

Watson supercomputer working to keep you healthier - Utica Observer Dispatch

Weather bureau’s $12m super computer yet to deliver – Chinchilla News

THE Bureau of Meteorology is upgrading its weather forecasting models as its multi-million dollar "supercomputer" comes under fire over its accuracy.

It follows a number of failed short-term forecasts in Queensland, where heavy rainfalls failed to eventuate and gale-force gusts were not predicted during a storm last June.

The bureau's $12 million-a-year XC40 supercomputer was installed last year to "successfully support the Bureau's capacity to predict".

A spokesman said the benefits of the computer were yet to be seen.

"As we continue to implement the program, the increase to computing power and storage capability will allow the Bureau to run more detailed numerical models more often, run forecasts more frequently, issue warnings more often and provide greater certainty and precision in our forecasting," he said.

American supercomputer manufacturer, Cray Inc, signed a six-year contract with the Bureau of Meteorology in 2015, totalling around $77 million.

The supercomputer's capabilities and accuracy came under fire last week after the bureau forecast a week of heavy rain in Brisbane, which failed to eventuate.

Fellow meteorologists have defended the bureau's "challenging" job.

Weatherzone senior meteorologist Jacob Cronje said despite the amount of technology around, predicting the weather was always tricky.

"Uncertainty will always be part of weather forecasting," he said.

"We may predict one thing but the slightest change in weather conditions will change the outcome exponentially."

A 40 per cent chance of showers and a possible storm is predicted tomorrow, with a maximum of 31C.

The rest is here:

Weather bureau's $12m super computer yet to deliver - Chinchilla News

IBM’s Watson supercomputer leading charge into early melanoma detection – The Australian Financial Review

IBM melanoma research manager Rahil Garnavi (L) and MoleMap Australia diagnosing dermatologist Dr Martin Haskett. The two firms are collaborating on early detection of skin cancer.

IBM is breaking ground in the early detection of skin cancer using its supercomputer Watson, potentially saving the federal government hundreds of millions of dollars.

The tech giant has partnered with skin cancer detection program MoleMap and the Melanoma Institute of Australia to teach the computer how to recognise cancerous skin lesions.

The initial focus is on the early detection of melanomas, which are the rarest but most deadly type of skin cancer, and make up just 2 per cent of diagnoses but 75 per cent of skin cancer deaths.

IBM vice-president and lab director of IBM Research Australia, Joanna Batstone, told AFR Weekend her colleagues, including melanoma research manager Rahil Garnavi, had so far fed 41,000 melanoma images into the system with accompanying clinician notes and it had a 91 per cent accuracy at detecting skin cancers.

The Watson supercomputer uses machine learning algorithms in conjunction with image recognition technology to detect patterns in the moles.
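
For readers curious about the general approach, the sketch below trains a small convolutional image classifier on labeled lesion photos using Keras. It is a simplified stand-in, not IBM's Watson pipeline, and the dataset directory is a placeholder.

```python
import tensorflow as tf

# Minimal sketch of the general approach: a small CNN trained on labeled lesion
# images. This is NOT IBM's Watson pipeline; directory names are placeholders
# (expected layout: lesions/train/benign/*.jpg and lesions/train/malignant/*.jpg).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/train", image_size=(128, 128), batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted P(malignant)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```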

"Today, if you have a skin lesion, a clinician's accuracy is about 60 per cent. If you use a high-powered DermaScope [a digital microscope], a trained clinician can identify with 80 per cent accuracy," said Ms Batstone.

"We want to achieve 90 per cent accuracy for all data and we also want it to do more than just say yes or no in regards to whether or not it's cancerous, we want it to be able to identify what type of skin cancer it is, or if it's another type of skin disease."

Australian and New Zealanders have the highest rates of skin cancer in the world. In 2016 there were more than 13,000 new cases of melanoma skin cancer.

Of those with melanoma, there were almost 1800 deaths, making up 3.8 per cent of all cancer deaths in Australia in 2016.

Non-melanoma skin cancers alone are estimated to cost the government more than $703 million a year, according to 2010 research using Medicare data.

Martin Haskett from MoleMap said diagnosis rates of skin cancer were relatively stable, with a slight decrease in younger people thanks to sun avoidance education campaigns.

"It occurs more frequently in older people and we're in a situation where the population is ageing. The older population has not been exposed to the sun protection campaigns and as you get older your immune system performs differently and is less capable," Dr Haskett said.

But even in younger generations, awareness does not always equate to action. Olympic swimmer Mack Horton had a skin cancer scare last year after a doctor watching TV saw an odd looking mole on him while he was racing and alerted the team doctor.

"With young people in general a tan is still seen as cool," he said.

"When they pulled me aside and notified me I didn't think much of it... but eight weeks later I got it checked and they said they'd have to take it out that day and then they said they would rush the results and that's when it dawned on me how serious it was."

IBM is setting up a free skin check event at Sydney's Bondi Beach over the weekend.

Beachgoers will be able to stand in front of a smart mirror created by IBM that takes in their visual appearance and asks them questions about their age, family history and behavioural patterns. Within minutes it then generates a report on that individual's skin cancer risk. MoleMap will also be checking people's moles for free.

More here:

IBM's Watson supercomputer leading charge into early melanoma detection - The Australian Financial Review

Titan Supercomputer Assists With Polymer Nanocomposites Study – HPCwire (blog)

OAK RIDGE, Tenn., March 8. Polymer nanocomposites mix particles billionths of a meter (nanometers, nm) in diameter with polymers, which are long molecular chains. Often used to make injection-molded products, they are common in automobiles, fire retardants, packaging materials, drug-delivery systems, medical devices, coatings, adhesives, sensors, membranes and consumer goods. When a team led by the Department of Energy's Oak Ridge National Laboratory tried to verify that shrinking the nanoparticle size would adversely affect the mechanical properties of polymer nanocomposites, they got a big surprise.

"We found an unexpectedly large effect of small nanoparticles," said Shiwang Cheng of ORNL. The team of scientists at ORNL, the University of Illinois at Urbana-Champaign (Illinois) and the University of Tennessee, Knoxville (UTK) reported their findings in the journal ACS Nano.

Blending nanoparticles and polymers enables dramatic improvements in the properties of polymer materials. Nanoparticle size, spatial organization and interactions with polymer chains are critical in determining behavior of composites. Understanding these effects will allow for the improved design of new composite polymers, as scientists can tune mechanical, chemical, electrical, optical and thermal properties.

Until recently, scientists believed an optimal nanoparticle size must exist. Decreasing the size would be good only to a point, as the smallest particles tend to plasticize at low loadings and aggregate at high loadings, both of which harm macroscopic properties of polymer nanocomposites.

The ORNL-led study compared polymer nanocomposites containing particles 1.8 nm in diameter and those with particles 25 nm in diameter. Most conventional polymer nanocomposites contain particles 10 to 50 nm in diameter. Tomorrow, novel polymer nanocomposites may contain nanoparticles far less than 10 nm in diameter, enabling new properties not achievable with larger nanoparticles.

Well-dispersed small sticky nanoparticles improved properties, one of which broke records: raising the material's temperature less than 10 degrees Celsius caused a fast, million-fold drop in viscosity. A pure polymer (without nanoparticles) or a composite with large nanoparticles would need a temperature increase of at least 30 degrees Celsius for a comparable effect.

"We see a shift in paradigm where going to really small nanoparticles enables accessing totally new properties," said Alexei Sokolov of ORNL and UTK. That increased access to new properties happens because small particles move faster than large ones and interact with fewer polymer segments on the same chain. Many more polymer segments stick to a large nanoparticle, making dissociation of a chain from that nanoparticle difficult.

"Now we realize that we can tune the mobility of the particles: how fast they can move, by changing particle size, and how strongly they will interact with the polymer, by changing their surface," Sokolov said. "We can tune properties of composite materials over a much larger range than we could ever achieve with larger nanoparticles."

Better together

The ORNL-led study required expertise in materials science, chemistry, physics, computational science and theory. "The main advantage of Oak Ridge National Lab is that we can form a big, collaborative team," Sokolov said.

Cheng and UTK's Bobby Carroll carried out experiments they designed with Sokolov. Broadband dielectric spectroscopy tracked the movement of polymer segments associated with nanoparticles. Calorimetry revealed the temperature at which solid composites transitioned to liquids. Using small-angle X-ray scattering, Halie Martin (UTK) and Mark Dadmun (UTK and ORNL) characterized nanoparticle dispersion in the polymer.

To better understand the experimental results and correlate them to fundamental interactions, dynamics and structure, the team turned to large-scale modeling and simulation (by ORNL's Bobby Sumpter and Jan-Michael Carrillo) enabled by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

"It takes us a lot of time to figure out how these particles affect segmental motion of the polymer chain," Cheng said. "These things cannot be visualized from experiments that are macroscopic. The beauty of computer simulations is they can show you how the chain moves and how the particles move, so the theory can be used to predict temperature dependence."

Shi-Jie Xie and Kenneth Schweizer, both of Illinois, created a new fundamental theoretical description of the collective activated dynamics in such nanocomposites and quantitatively applied it to understand novel experimental phenomena. The theory enables predictions of physical behavior that can be used to formulate design rules for optimizing material properties.

Carrillo and Sumpter developed and ran simulations on Titan, America's most powerful supercomputer, and wrote codes to analyze the data on the Rhea cluster. The LAMMPS molecular-dynamics code calculated how fast nanoparticles moved relative to polymer segments and how long polymer segments stuck to nanoparticles.

"We needed Titan for fast turn-around of results for a relatively large system (200,000 to 400,000 particles) running for a very long time (100 million steps). These simulations allow for the accounting of polymer and nanoparticle dynamics over relatively long times," Carrillo said. "These polymers are entangled. Imagine pulling a strand of spaghetti in a bowl. The longer the chain, the more entangled it is. So its motion is much slower. Molecular dynamics simulations of long, entangled polymer chains were needed to calculate time-correlation functions similar to experimental conditions and find connections or agreements between the experiments and theories proposed by colleagues at Illinois."
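
A flavor of the kind of time-correlation analysis Carrillo describes can be captured in a few lines of NumPy; the sketch below computes a mean-squared displacement from a toy trajectory and is not the team's actual Titan or LAMMPS analysis code.

```python
import numpy as np

def mean_squared_displacement(traj):
    """traj: array of shape (n_frames, n_particles, 3) of unwrapped positions.
    Returns MSD(dt) averaged over particles and time origins."""
    n_frames = traj.shape[0]
    msd = np.zeros(n_frames)
    for dt in range(1, n_frames):
        disp = traj[dt:] - traj[:-dt]          # displacements over lag dt
        msd[dt] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# toy random-walk trajectory standing in for nanoparticle positions
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(500, 100, 3)), axis=0)
print(mean_squared_displacement(traj)[:5])
```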

The simulations also visualized how nanoparticles moved relative to a polymer chain. Corroborating experiment and theory moves scientists closer to verifying predictions and creates a clearer understanding of how nanoparticles change behavior, such as how altering nanoparticle size or nanoparticle-polymer interactions will affect the temperature at which a polymer loses enough viscosity to become liquid and start to flow. Large particles are relatively immobile on the time scale of polymer motion, whereas small particles are more mobile and tend to detach from the polymer much faster.

The title of the paper is "Big Effect of Small Nanoparticles: A Shift in Paradigm for Polymer Nanocomposites."

Source: ORNL

View post:

Titan Supercomputer Assists With Polymer Nanocomposites Study - HPCwire (blog)

Compressing Software Development Cycles with Supercomputer-based Spark – insideHPC

Anthony DiBiase, Cray

In this video, Anthony DiBiase from Cray presents "Compressing Software Development Cycles with Supercomputer-based Spark."

Do you need to compress your software development cycles for services deployed at scale and accelerate your data-driven insights? Are you delivering solutions that automate decision making and model complexity using analytics and machine learning on Spark? Find out how a pre-integrated analytics platform that's tuned for memory-intensive workloads and powered by the industry-leading interconnect will empower your data science and software development teams to deliver amazing results for your business. Learn how Cray's supercomputing approach in an enterprise package can help you excel at scale.
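
For context, a minimal Spark ML pipeline of the sort such talks usually demonstrate looks like the sketch below; the input path and column names are placeholders rather than anything from Cray's actual setup, and it assumes a working Spark installation.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Minimal sketch of a Spark ML workflow: load data, assemble features, fit a model.
# The input path and the columns f1/f2/f3/label are invented for illustration.
spark = SparkSession.builder.appName("sketch").getOrCreate()
df = spark.read.parquet("hdfs:///data/events.parquet")

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LogisticRegression(maxIter=20).fit(train)
print(model.summary.accuracy)   # training accuracy of the fitted model
```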

Anthony DiBiase is an analytics infrastructure specialist at Cray, based in Boston, with over 25 years of program and project management experience in software development and systems integration. He matches life sciences software groups to computing technology for leading pharma and research organizations. Previously, he helped Novartis on NGS (next-generation sequencing) workflows and large genomics projects, and later assisted Children's Hospital of Boston on systems and translational biology, multi-modal omics, disease models (especially oncology and hematology), and stem cell biology. Earlier in his career, he delivered high-throughput inspection systems featuring image processing and machine learning algorithms while at Eastman Kodak, multi-protocol gateway solutions for Lucent Technologies, and mobile telephone solutions for Harris Corporation.

Sign up for our insideHPC Newsletter

Read this article:

Compressing Software Development Cycles with Supercomputer-based Spark - insideHPC

Microsoft, Facebook Build Dualing Open Standard GPU Servers for Cloud – TOP500 News

It was only a matter of time until someone came up with an Open Compute Project (OCP) design for a GPU-only accelerator box for the datacenter. That time has come.

In this case though, it was two someones: Microsoft and Facebook. This week at the Open Compute Summit in Santa Clara, California, both hyperscalers announced different OCP designs for putting eight of NVIDIA's Tesla P100 GPUs into a single chassis. Both fill the role of a GPU expansion box that can be paired with CPU-based servers in need of compute acceleration. The idea is to disaggregate the GPUs and CPUs in cloud environments so that users may flexibly mix these processors in different ratios, depending upon the demands of the particular workload.

The principal application target is machine learning, one of the P100's major areas of expertise. An eight-GPU configuration of these devices will yield over 80 teraflops at single precision and over 160 teraflops at half precision.
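
A quick back-of-envelope check of those aggregate figures, assuming roughly 10.6 teraflops single precision and 21.2 teraflops half precision per NVLink-class Tesla P100 (the published peak numbers):

```python
# Back-of-envelope check of the aggregate peak numbers quoted above,
# assuming roughly 10.6 TF single precision and 21.2 TF half precision
# per NVLink-class Tesla P100.
gpus = 8
sp_per_gpu, hp_per_gpu = 10.6, 21.2   # teraflops
print(f"single precision: ~{gpus * sp_per_gpu:.0f} TF")  # ~85 TF  (over 80)
print(f"half precision:   ~{gpus * hp_per_gpu:.0f} TF")  # ~170 TF (over 160)
```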

Source: Microsoft

Microsoft's OCP contribution is known as HGX-1. Its principal innovation is that it can dynamically serve up as many GPUs to a CPU-based host as it may need (well, up to eight, at least). It does this via four PCIe switches and an internal NVLink mesh network, plus a fabric manager to route the data through the appropriate connections. Up to four of the HGX-1 expansion boxes can be glued together for a total of 32 GPUs. Ingrasys, a Foxconn subsidiary, will be the initial manufacturer of the HGX-1 chassis.

The Facebook version, which is called Big Basin, looks quite similar. Again, P100 devices are glued together via an internal mesh, which they describe as similar to the design of the DGX-1, NVIDIA's in-house server designed for AI research. A CPU server can be connected to the Big Basin chassis via one or more PCIe cables. Quanta Cloud Technology will initially manufacture the Big Basin servers.

Source: Facebook

Facebook said they were able to achieve a 100 percent performance improvement on ResNet50, an image classification model, using Big Basin, compared to its older Big Sur server, which uses the Maxwell-generation Tesla M40 GPUs. Besides image classification, Facebook will use the new boxes for other sorts of deep learning training, such as text translation, speech recognition, and video classification, to name a few.

In Microsoft's case, the HGX-1 appears to be the first of multiple OCP designs that will fall under its Project Olympus initiative, which the company unveiled last October. Essentially, Project Olympus is a related set of OCP building blocks for cloud hardware. Although HGX-1 is suitable for many compute-intensive workloads, Microsoft is promoting it for artificial intelligence work, calling it "the Project Olympus hyperscale GPU accelerator chassis for AI," according to a blog post by Azure Hardware Infrastructure GM Kushagra Vaid.

Vaid also set the stage for what will probably become other Project Olympus OCP designs, hinting at future platforms that will include the upcoming Intel Skylake Xeon and AMD Naples processors. He also left open the possibility that Intel FPGAs or Nervana accelerators could work their way into some of these designs.

In addition, Vaid brought up the possibility of an ARM-based OCP server via the company's engagement with chipmaker Cavium. The software maker has already announced it's using Qualcomm's new ARM chip, the Centriq 2400, in Azure instances. Clearly, Microsoft is keeping its cloud options open.

See more here:

Microsoft, Facebook Build Dualing Open Standard GPU Servers for Cloud - TOP500 News

Supercomputer – Simple English Wikipedia, the free encyclopedia

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include: shared memory, distributed memory and array. Supercomputers with shared memory are developed using parallel computing and pipelining concepts. Supercomputers with distributed memory consist of many (about 100 to 10,000) nodes. The CRAY series from CRAY RESEARCH, the VP 2400/40, and the NEC SX-3 of HUCIS are shared-memory types. The nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S, and CM-5 are distributed-memory types.

An array-type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (which is also an array type) were developed. Supercomputers that use physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.
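
As a small illustration of the distributed-memory model described above, the following sketch uses mpi4py: each process (rank) owns its own data and cooperates only by passing messages. It assumes mpi4py and an MPI implementation are installed.

```python
# Tiny illustration of the distributed-memory model: each process (rank) owns
# its own memory and cooperates only by passing messages. Run with an MPI
# launcher, e.g.:  mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = rank + 1                                  # data that lives only on this rank
total = comm.reduce(local, op=MPI.SUM, root=0)    # combined via message passing
if rank == 0:
    print(f"{size} ranks, sum of local values = {total}")
```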

See the original post here:

Supercomputer - Simple English Wikipedia, the free encyclopedia

Final Premier League table predicted: Super computer reveals where each side should finish – Daily Star

A SUPER computer has predicted how the final Premier League table will look as we approach the finish line. (Data from talkSPORT.)

Although it looks as though Chelsea are running away with the Premier League title, there's plenty left to be decided among the teams below them.

The five clubs directly behind the Blues are all battling for Champions League places.

And at the other end of the table, an exciting scrap to stay in the English top flight is taking place.

So where will your side finish?

TalkSPORT have fed the data into their super computer - and they may have the answer.

Click through the gallery above to see the final predicted Premier League table.

Read this article:

Final Premier League table predicted: Super computer reveals where each side should finish - Daily Star

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori … – insideHPC

Researchers at the San Diego Supercomputer Center (SDSC) at the University of California San Diego have developed a new seismic software package with Intel Corporation that has enabled the fastest seismic simulation to-date, as the two organizations collaborate on ways to better predict ground motions to save lives and minimize property damage.

The latest simulations, which mimic possible large-scale seismic activity in the southern California region, were done using a new software system called EDGE, for Extreme-Scale Discontinuous Galerkin Environment. The largest simulation used 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy.

SDSC's ground-breaking performance of 10.4 petaflops surpassed the previous seismic record of 8.6 petaflops, set on China's Tianhe-2 supercomputer. Through efficient utilization of the latest and largest supercomputers, seismologists are now able to increase the frequency content of the simulated seismic wave field.

Obtaining higher frequencies is key to predicting ground motions relevant to common dwellings in earthquake research. SDSC and Intel researchers also used the DOE's Theta supercomputer at the Argonne National Laboratory as part of the year-long project.

"In addition to using the entire Cori Phase II supercomputer, our research also showed a substantial gain in efficiency in using the new software," said Alex Breuer, a postdoctoral researcher from SDSC's High Performance Geocomputing Laboratory (HPGeoC) and lead author of the paper, to be presented in June at the ISC 2017 conference in Frankfurt, Germany. "Researchers will be able to run about two to almost five times the number of simulations using EDGE, saving time and reducing cost."

Example of hypothetical seismic wave propagation with mountain topography using the new EDGE software. Shown is the surface of the computational domain covering the San Jacinto fault zone between Anza and Borrego Springs in California. Colors denote the amplitude of the particle velocity, where warmer colors correspond to higher amplitudes. Image courtesy of Alex Breuer, SDSC.

A second HPGeoC paper submitted and accepted for the ISC High Performance conference covers a new study of the AWP-ODC software that has been used by the Southern California Earthquake Center (SCEC) for years. The software was optimized to run in large-scale for the first time on the latest generation of Intel data center processors, called Intel Xeon Phi x200.

These simulations, also using NERSC's Cori Phase II supercomputer, attained performance competitive with an equivalent simulation on the entire GPU-accelerated Titan supercomputer. Titan is located at the DOE's Oak Ridge National Laboratory and has been the resource used for the largest AWP-ODC simulations in recent years. Additionally, the software obtained high performance on Stampede-KNL at the Texas Advanced Computing Center at The University of Texas at Austin.

Both research projects are part of a collaboration announced in early 2016 under which Intel opened a computing center at SDSC to focus on seismic research, including the ongoing development of computer-based simulations that can be used to better inform and assist disaster recovery and relief efforts.

The Intel Parallel Computing Center (Intel PCC) continues an interdisciplinary collaboration between Intel, SDSC, and SCEC, one of the largest open research collaborations in geoscience. In addition to UC San Diego, the Intel PCC at SDSC includes researchers from the University of Southern California (USC), San Diego State University (SDSU), and the University of California Riverside (UCR).

The Intel PCC program provides funding to universities, institutions, and research labs to modernize key community codes used across a wide range of disciplines to run on current state-of-the-art parallel architectures. The primary focus is to modernize applications to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors.

"Research and results such as the massive seismic simulation demonstrated by the SDSC/Intel team are tremendous for their contributions to science and society," said Joe Curley, senior director of the Code Modernization Organization at Intel Corporation. "Equally, this work also demonstrates the benefit to society of developing modern applications to exploit power-efficient and highly parallel CPU technology."

Such detailed computer simulations allow researchers to study earthquake mechanisms in a virtual laboratory. "These two studies open the door for the next generation of seismic simulations using the latest and most sophisticated software," said Yifeng Cui, founder of the HPGeoC at SDSC and director of the Intel PCC at SDSC. "Going forward, we will use the new codes widely for some of the most challenging tasks at SCEC."

The multi-institution study which led to the record results includes Breuer and Cui, as well as Josh Tobin, a Ph.D. student in UC San Diego's Department of Mathematics; Alexander Heinecke, a research scientist at Intel Labs; and Charles Yount, a principal engineer at Intel Corporation.

The titles of the respective presentations and publications are "EDGE: Extreme Scale Fused Seismic Simulations with the Discontinuous Galerkin Method" and "Accelerating Seismic Simulations using the Intel Xeon Phi Knights Landing Processor."

Sign up for our insideHPC Newsletter

Read more from the original source:

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori ... - insideHPC

Fujitsu to Deliver Deep Learning Supercomputer to RIKEN – TOP500 News

Japanese computer maker Fujitsu has announced it will build a deep learning supercomputer for RIKEN that will be used to spur research and development of AI technology. The new machine, which is scheduled to go into operation in April, will be a blend of NVIDIA DGX-1 and Fujitsu PRIMERGY RX2530 M2 servers.

The yet-to-be-named system will be used at the Center for Advanced Intelligence Project (AIP), a group established by RIKEN in 2016 that specializes in R&D related to AI, big data, IoT and cybersecurity. The mission statement summarizes their work as follows: "Our center aims to achieve scientific breakthrough and to contribute to the welfare of society and humanity through developing innovative technologies. We also conduct research on ethical, legal and social issues caused by the spread of AI technology and develop human resources."

Its intended user base will be AI researchers at universities and other institutions in Japan, as well as practitioners in the field in healthcare, manufacturing, and other commercial domains. Of particular interest are AI technologies that can help solve domestic issues of particular relevance to the Japanese, such as healthcare in aging populations, response strategies to natural disasters, regenerative medicine, and robotics-based manufacturing.

According to Fujitsu, the new system will deliver four half-precision (16-bit floating point) petaflops, essentially all of which are derived from the DGX-1 servers. Each DGX-1 houses eight Tesla P100 GPUs, representing 170 peak teraflops at half precision. Fujitsu's contribution will be integrating the 24 DGX-1 boxes with 32 of its own PRIMERGY RX2530 M2 servers, along with a Fujitsu-made storage system. The latter is fronted by six PRIMERGY RX2540 M2 PC servers, which will run FEFS, a parallel file system developed by Fujitsu. The storage itself will consist of eight ETERNUS DX200 S3 units and one ETERNUS DX100 S3 unit.

This is the third supercomputer unveiled in Japan within the last six months that has been significantly influenced by AI requirements. The first, known as the AI Bridging Cloud Infrastructure (ABCI), was announced by the National Institute of Advanced Industrial Science and Technology (AIST) at SC16 in November. When completed in late 2017, this 130-petaflop (half-precision) system will be used to help support commercial AI deployment in Japan. The second system, TSUBAME 3.0, will be Tokyo Tech's attempt to bring a lot of AI capability into the next generation of this lineage. This system is expected to deliver 47 half-precision petaflops when installed later this summer.

Both ABCI and TSUBAME 3.0 will fulfill the role of a general-purpose supercomputer, running conventional HPC applications alongside deep learning workloads. Unlike those two systems, the RIKEN machine, besides being quite a bit smaller, also looks to be completely devoted to running deep learning applications.

Originally posted here:

Fujitsu to Deliver Deep Learning Supercomputer to RIKEN - TOP500 News

Researchers Say It’s Possible to Build a Self-Replicating DNA … – Sputnik International

Tech

01:24 07.03.2017 (updated 06:23 07.03.2017)

All existing computers, from the building-sized Sunway TaihuLight supercomputer in China to the device you are using to read this article, are based on the principles of a Turing machine. Named for Manchester's own Dr. Alan Turing, Turing machines are theoretical devices that run on a set of strict instructions. A typical deterministic Turing machine (DTM) might have a direction: "If my state is A, then perform task 1."

The DNA computer would, however, be a non-deterministic universal Turing machine (NUTM). An NUTM is a Turing machine that can solve multiple tasks at once where a DTM can only solve one at a time. In the example above, an NUTM might have the direction: "If my state is A, then perform tasks 1 through 1,000,000,000," thus performing a billion tasks simultaneously.

Imagine a computer program designed to solve a maze. The program comes to a fork in the road. An ordinary electronic computer chooses one path and sees where it leads, trying another if that first path fails to get it out of the maze. An NUTM can go down every path simultaneously by replicating itself, thus solving the maze far more quickly.
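
The branch-everywhere idea can be mimicked (slowly) on an ordinary computer by keeping every candidate path alive at once. The toy Python sketch below does this for a small maze with a breadth-first search; it illustrates the spirit of the NUTM description rather than anything a DNA computer would actually execute.

```python
from collections import deque

# Toy maze: 0 = open, 1 = wall. A DTM-style solver tries one branch at a time
# and backtracks; this sketch instead "replicates" at every fork by keeping
# every frontier path alive at once (breadth-first search).
MAZE = [[0, 0, 1, 0],
        [1, 0, 1, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 0]]
START, GOAL = (0, 0), (3, 3)

def solve_all_branches(maze, start, goal):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # every fork
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

print(solve_all_branches(MAZE, START, GOAL))
```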

The problem is, of course, how to build a computer that can rapidly replicate itself. Manchester's solution is to build a processor out of DNA molecules, which "is an excellent medium for information processing and storage."

AP Photo/ Li Xiang/Xinhua

"It is very stable, as the sequencing of ancient DNA demonstrates. It can also reliably be copied, and many genes have remained virtually unchanged for billions of years," the study said, adding that, "As DNA molecules are very small, a desktop computer could potentially utilise more processors than all the electronic computers in the world combined and therefore outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy."

Team member Ross King said that while DNA computers were first proposed in the 1990s, the Manchester group is the first to demonstrate that such a machine is feasible. They claim that Thue, a theoretical programming language written in 2000 by John Colagioia, can convert existing computers into NUTMs.

NUTMs should not be confused with quantum computers. Quantum computers exploit quantum mechanics to process at a much faster rate than electronic computers. Quantum computers are probabilistic Turing machines (PTM), which might say: "If my state is A, then perform task 1 90 percent of the time and task 2 10 percent of the time." Quantum computers would be much faster than electronic computers, but while theoretical quantum computers are in the works in laboratories all over the world, no one has found a way to build one that functions in the real world.

The University of Manchester team claims that their NUTM model would be superior to quantum computing. "Quantum computers are an exciting other form of computer, and they can also follow both paths in a maze, but only if the maze has certain symmetries, which greatly limits their use," said King.

Flickr/ Wellcome Images

More importantly, quantum computers would still rely on silicon chips, just like electronic computers. As small as those chips can get, they are unlikely to become smaller than a single DNA molecule. The less space a processor takes up, the more you can fit into one computer.

Humanity is closer to quantum computers than to those that are DNA-based. But whether the notion of a computer made from DNA excites or terrifies you, it is worth remembering that we humans run on a biocomputer. It's called a brain.

Read more:

Researchers Say It's Possible to Build a Self-Replicating DNA ... - Sputnik International

IBM Sets Stages for Quantum Computing Business – TOP500 News

IBM has revealed its intention to commercialize the quantum computing technology being developed under its research division. Although the company didn't offer a definitive timeline or even a roadmap for the product set, it set down some markers on what such an endeavor would entail.

In a nutshell, IBM plans to build systems on the order of 50 qubits in the next few years and make them commercially available as part of its cloud offering. These IBM Q machines will be universal quantum computers, rather than the kinds of quantum annealing systems that D-Wave offers today. As such, they promise to be much more powerful and have a wider application scope. At 50 qubits, they should be able to perform some types of computation that would be impossible to do on a classical system of any size. In general, those are problems where the solution space encompasses so many possibilities that good old binary digits don't offer much help. Some of the most notable commercial application areas include drug discovery, financial services, artificial intelligence, computer security, materials discovery, and supply chain logistics.

For IBM, this represents the second step in an effort that began last May, when the company made its five-qubit platform freely available to the public via the company's cloud. Such accessibility attracted more than 40,000 users, who, in aggregate, have run over 275,000 quantum computing experiments on the device. A number of courses and research studies have been developed around the platform at various institutions, including the Massachusetts Institute of Technology (MIT) in the US, the University of Waterloo in Canada, and the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

That early system has relatively few qubits and is not able to beat a classical computer at anything meaningful, but it has been able to demonstrate the potential of the technology. It's based on IBM's current quantum computing platform, which uses superconducting circuitry developed in-house and manufactured in their own fabs. The technology is silicon-based, but incorporates niobium as well.

According to Dave Turek, Vice President of Exascale Systems at IBM, the research division is toying with at least six more variants of the technology, and most of these experiments are already sporting more than five qubits. Turek says they are trying to get a feel for the interactions between the various underlying materials, interconnect topologies and other features that make such a system useful and stable.

"There are several factors that you have to manage simultaneously to increase the size of the system. Just producing qubits is actually a pretty trivial thing to do at this stage for us," explained Turek. "But to produce them in a context where you are demonstrating entanglement and preserving coherence and scalability, therein lies the trick."

A big part of that trick is ensuring the universality of the platform. IBM is intent on this aspect and wants to make sure everyone knows they are not offering something akin to a D-Wave quantum annealer. To drive that point home, they've developed the metric of "quantum volume," which is essentially a way of measuring the quantum-ness of a computing system. It incorporates not just the number of qubits, but also the interconnectivity in the device, the number of operations that can be run in parallel, and the number of gates that can be applied before errors make the device behave essentially classically. Whether this will catch on as the Linpack of quantum computing remains to be seen.
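
To see why such a metric rewards more than raw qubit count, here is a toy Python figure of merit in the same spirit; it is a simplified stand-in for illustration, not IBM's exact definition of quantum volume.

```python
# Toy figure of merit in the spirit of "quantum volume": it rewards both the
# number of qubits and the circuit depth achievable before errors dominate.
# This is a simplified stand-in for illustration, not IBM's exact definition.
def toy_quantum_volume(n_qubits: int, error_per_gate: float) -> float:
    achievable_depth = 1.0 / (n_qubits * error_per_gate)   # rough depth limit
    return min(n_qubits, achievable_depth) ** 2

# more qubits help only if the error rate lets you actually use them
print(toy_quantum_volume(5, 0.01))    # small device, decent gates   -> 25
print(toy_quantum_volume(50, 0.01))   # more qubits, same gates      -> 4 (depth-limited)
print(toy_quantum_volume(50, 0.001))  # more qubits AND better gates -> 400
```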

Setting aside the business roadmap for a moment, the company also released a new API for the initial cloud-based system. It promises to help developers more easily exploit the technology without having to know the intricacies of quantum physics. In concert with the new API is an upgraded simulator, which can model a device with up to 20 qubits. Although you won't get the performance of real hardware, it will allow developers to play with problems that don't fit in a five-qubit system. Those two additions just touch on how IBM has been building out the software ecosystem over the past year or so. You can get a more complete picture by visiting the company's quantum computing programming webpage.

Although IBM made no mention of how its IBM Q products would be positioned relative to its traditional system offerings, Turek did speculate that early versions could be employed as accelerators to classical systems, where offloading certain algorithms onto the qubits made sense. Certainly, IBM has some experience with this model, inasmuch as it employs NVIDIA GPUs as floating point accelerators in its own Power servers today. At least in the short run, Turek thinks it's likely that these quantum systems will serve as an adjunct to conventional HPC machines to do quantum computations.

The analogy breaks down a bit when you realize the GPUs are just faster than their host processors, by one to three orders of magnitude at most and only for certain types of computations, whereas quantum processors will be able to execute algorithms that will not run on a classical host in any reasonable amount of time. That's motivation enough for IBM to keep this technology in-house.

In fact, the company sees quantum computing as one of the major technology pillars of its future, alongside its Watson and blockchain products in terms of strategic importance. The biggest challenge for IBM, as always, will be the competition. Setting aside D-Wave, there are perhaps 10 to 20 quantum projects that could be fairly close to a commercial release. They come from rivals as diverse as Google, Microsoft and Intel.

IBM is as well positioned as any of these. It's been working on the problem for nearly four decades and has accumulated expertise in all the adjacent areas along the way: chip technology, superconductivity, application domains, and system software. It's narrowed its focus to the most promising technologies and thrown the less promising ones over the side. And now they are at the point where, as Turek says, "we can see the horizon."

Images: Cloud-based experimental system; Five-qubit chip. Source: IBM.

Go here to read the rest:

IBM Sets Stages for Quantum Computing Business - TOP500 News

Fujitsu to build Riken’s new deep learning supercomputer – ZDNet

The Riken Center for Advanced Intelligence Project in Japan has announced it will be receiving a new deep learning supercomputer next month, which will be used to accelerate research and development into the "real-world" application of artificial intelligence (AI) technology.

The system will be provided by Japanese IT giant Fujitsu, with the supercomputer's total theoretical processing performance expected to reach 4 petaflops.

The system is comprised of two server architectures, with 24 Nvidia DGX-1 servers -- each including eight of the latest NVIDIA Tesla P100 accelerators and integrated deep learning software -- and 32 Fujitsu Server PRIMERGY RX2530 M2 servers, along with a high-performance storage system, Fujitsu explained.

Fujitsu said the file system is Fujitsu Software FEFS on six Fujitsu Server PRIMERGY RX2540 M2 PC servers; eight Fujitsu Storage ETERNUS DX200 S3 storage systems; and one Fujitsu Storage ETERNUS DX100 S3 storage system to provide the IO processing demanded by deep learning analysis.

Along with the standard DGX-1 deep learning software environment provided by Nvidia in a public cloud, Fujitsu said it will also integrate a customised software environment for use in a secure on-site network.

"Nvidia DGX-1, the world's first all-in-one AI supercomputer, is designed to meet the enormous computational needs of AI researchers," said Jim McHugh, VP and general manager at Nvidia. "Powered by 24 DGX-1s, the Riken Center for Advanced Intelligence Project's system will be the most powerful DGX-1 customer installation in the world. Its breakthrough performance will dramatically speed up deep learning research in Japan, and become a platform for solving complex problems in healthcare, manufacturing, and public safety."

The new supercomputer will be installed in Fujitsu's Yokohama datacentre, with Fujitsu to also provide Riken with R&D support when using the system.

Founded in 1917, Riken is a large research institute that boasts about 3,000 scientists across seven campuses in Japan.

The new system will be used by the Riken centre to accelerate research into AI and the development of technologies to support fields such as regenerative medicine and manufacturing, in addition to "real-world" implementation of solutions to social issues including healthcare for the elderly, management of aging infrastructure, and response to natural disasters.

Fujitsu's K computer, currently installed at the Riken Advanced Institute for Computational Science in Kobe, Japan, is in the top 10 of 2016's TOP500 list of the fastest computers in the world.

Visit link:

Fujitsu to build Riken's new deep learning supercomputer - ZDNet