

Hyperion Research: Supercomputer Growth Drives Record HPC Revenue in 2016 – HPCwire (blog)

FRAMINGHAM, Mass., April 7, 2017 – Worldwide factory revenue for the high-performance computing (HPC) technical server market grew 4.4% in full-year 2016 to a record $11.2 billion, up from $10.7 billion in 2015 and from the previous record of $11.1 billion set in the exceptionally strong year of 2012, according to the newly released Hyperion Research Worldwide High Performance Technical Server QView. Hyperion Research is the new name for the former IDC HPC group.

Each quarter for the last 27 years, Hyperion Research analysts have conducted interviews with major hardware original equipment manufacturers (OEMs) in the technical computing space to gather information on their quarterly sales. Specifically, Hyperion collects data on the number of HPC systems sold, system revenue, system average selling price (ASP), the price band segment that a system falls into, architecture of the system, average number of processor packages per system, average number of nodes for each system sold, system revenue distribution by geographical regions, the use of coprocessors, and system revenue distribution by operating systems. We complement this supply-side data with extensive and intensive worldwide demand-side surveys of HPC user organizations to verify their HPC resources and purchasing plans in detail.

The 2016 year-over-year market gain was driven by strong revenue growth in high-end and midrange HPC server systems, partially offset by declines in sales of lower-priced systems.

Fourth Quarter 2016

2016 fourth-quarter revenues for the whole market grew 7.4% over the prior-year fourth quarter to reach $3.1 billion, while fourth-quarter revenues for the Supercomputers segment were up 45.6% over the same period in 2015. Hyperion Research expects the worldwide HPC server market to grow at a healthy 7.8% rate (CAGR) to reach $15.1 billion in 2020.

“HPC servers have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness. For this reason, nations and regions across the world, as well as businesses and universities of all sizes, are increasing their investments in high performance computing,” said Earl Joseph, CEO of Hyperion Research. “In addition, the global race to achieve exascale performance will drive growth in high-end supercomputer sales.”

“Another important factor driving growth is the market for big data needing HPC, which we call high performance data analysis, or HPDA,” according to Steve Conway, Hyperion Research senior vice president for research. “HPDA challenges have moved HPC to the forefront of R&D for machine learning, deep learning, artificial intelligence, and the Internet of Things.”

Vendor Highlights

The Hyperion Research Worldwide High-Performance Technical Server QView presents the HPC market from various perspectives, including by competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC models.

For more information about the Hyperion Research Worldwide High Performance Technical Server QView, contact Kevin Monroe at kmonroe@hyperionres.com.

About Hyperion Research

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide.

Source: Hyperion Research

Read the rest here:

Hyperion Research: Supercomputer Growth Drives Record HPC Revenue in 2016 – HPCwire (blog)

Smallest Dutch supercomputer – Phys.org

April 6, 2017 – A team of scientists from the Netherlands has built a supercomputer the size of four pizza boxes. The Little Green Machine II has the computing power of more than 10,000 ordinary PCs. Credit: Simon Portegies Zwart (Leiden University)

A team of Dutch scientists has built a supercomputer the size of four pizza boxes. The Little Green Machine II has the computing power of 10,000 PCs and will be used by researchers in oceanography, computer science, artificial intelligence, financial modeling and astronomy. The computer is based at Leiden University (the Netherlands) and developed with help from IBM.

The supercomputer has a computing power of more than 0.2 petaflops, or 200,000,000,000,000 calculations per second. That makes it roughly equal in computing power to more than 10,000 ordinary PCs.
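
As a rough back-of-the-envelope check (not from the article), the short C sketch below reproduces that arithmetic; the roughly 20 gigaflops assumed for an ordinary desktop PC is our own illustrative figure, so the 10,000-PC ratio is only indicative.

    /* Back-of-the-envelope flops arithmetic for the Little Green Machine II.
       The ~20 GFLOPS per ordinary PC is an assumed figure, not from the article. */
    #include <stdio.h>

    int main(void) {
        double lgm2_flops = 0.2e15;   /* 0.2 petaflops = 200 trillion calculations/s */
        double pc_flops   = 20.0e9;   /* assumed sustained rate of one ordinary PC */

        printf("LGM-II: %.0f calculations per second\n", lgm2_flops);
        printf("Equivalent ordinary PCs: about %.0f\n", lgm2_flops / pc_flops);
        return 0;
    }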

The researchers constructed their supercomputer from four servers with four special graphics cards each, connected via a high-speed network. Project leader Simon Portegies Zwart (Leiden University): “Our design is very compact. You could transport it with a carrier bicycle. Besides that, we use only about 1% of the electricity of a comparable large supercomputer.”

Unlike its predecessor, Little Green Machine I, the new supercomputer uses professional graphics cards made for big scientific calculations rather than the standard video cards found in gaming computers. The machine is also no longer based on Intel's x86 architecture; instead it uses the much faster OpenPOWER architecture developed by IBM.

Astronomer Jeroen Bédorf (Leiden University): “We greatly improved the communication between the graphics cards in the last six months. That allowed us to connect several cards together to form a whole. This technology is essential for the construction of a supercomputer, but not very useful for playing video games.”

To test the little supercomputer, the researchers simulated the collision between the Milky Way and the Andromeda galaxy, which will occur about four billion years from now. Just a few years ago the researchers performed the same simulation on the huge Titan supercomputer (17.6 petaflops) at Oak Ridge National Laboratory (USA). “Now we can do this calculation at home,” Jeroen Bédorf says. “That's so convenient.”

Little Green Machine II is the successor of Little Green Machine I, which was built in 2010. The new small supercomputer is about ten times faster than its predecessor, which retires as of today. The name Little Green Machine was chosen because of the machine's small size and low power consumption. It is also a nod to Jocelyn Bell Burnell, who discovered the first radio pulsar in 1967; that pulsar was nicknamed LGM-1, where LGM stands for Little Green Men.

Go here to see the original:

Smallest Dutch supercomputer – Phys.org

Supercomputer Sales Drove 2016 HPC Market Up to Record $11.2 Billion – HPCwire (blog)

A 26.2 percent jump in supercomputer spending helped lift the overall 2016 HPC server segment by 4.4 percent, according to a brief report released by Hyperion Research yesterday. The big drag on growth was a 19.3 percent decline in sales of departmental HPC servers. Nevertheless, the overall HPC server market set a new record at $11.2 billion, up from $10.7 billion in 2015, and surpassing 2012's high-water mark of $11.1 billion.

Hyperion may provide more complete numbers for the full year at the HPC User Forum being held in Santa Fe, NM, in two weeks. Hyperion is the former IDC HPC group, which has been spun out of IDC as part of its acquisition by companies based in China (see the HPCwire article, IDC's HPC Group Spun Out to Temporary Trusteeship).

“The 2016 year-over-year market gain was driven by strong revenue growth in high-end and midrange HPC server systems, partially offset by declines in sales of lower-priced systems,” according to the Hyperion release. Brief summary:

“HPC servers have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness. For this reason, nations and regions across the world, as well as businesses and universities of all sizes, are increasing their investments in high performance computing,” said Earl Joseph, CEO of Hyperion Research. “In addition, the global race to achieve exascale performance will drive growth in high-end supercomputer sales.”

“Another important factor driving growth is the market for big data needing HPC, which we call high performance data analysis, or HPDA,” according to Steve Conway, Hyperion Research senior vice president for research. “HPDA challenges have moved HPC to the forefront of R&D for machine learning, deep learning, artificial intelligence, and the Internet of Things.”

Getting used to the new Hyperion name may take a while, but its senior members, all from IDC, say there should be little change, at least in the near term. Below is the company's self-description.

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide. As Hyperion Research, the team continues all the worldwide activities that have made it the world's most respected HPC industry analyst group for more than 25 years, including HPC and HPDA market sizing and tracking, subscription services, custom studies and papers, and operating the HPC User Forum. For more information, see http://www.hpcuserforum.com.

See the original post:

Supercomputer Sales Drove 2016 HPC Market Up to Record $11.2 Billion – HPCwire (blog)

Premier League: Super computer predicts the results this weekend … – Daily Star

THE stats geeks over at Football Web Pages have developed a super computer which predicts the most likely scores for every match.

The super computer works using an algorithm based on every single goal scored in every match so far this campaign.
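
The article does not explain the algorithm beyond saying it is based on goals scored, but a common way such goal-based predictors work is to turn each team's scoring record into an expected-goals rate and treat goals as independent Poisson variables. The C sketch below is purely illustrative: the expected-goals values are made up, and this is not Football Web Pages' actual model.

    /* Illustrative Poisson scoreline predictor (not Football Web Pages' algorithm).
       Goals-scored data would normally set the expected-goals rates below. */
    #include <stdio.h>
    #include <math.h>

    static double poisson(double lambda, int k) {
        double p = exp(-lambda);          /* P(k) = lambda^k * e^(-lambda) / k! */
        for (int i = 1; i <= k; i++)
            p *= lambda / i;
        return p;
    }

    int main(void) {
        double lambda_home = 1.6;         /* assumed expected goals for the home side */
        double lambda_away = 1.1;         /* assumed expected goals for the away side */
        int best_h = 0, best_a = 0;
        double best_p = 0.0;

        for (int h = 0; h <= 6; h++)
            for (int a = 0; a <= 6; a++) {
                double p = poisson(lambda_home, h) * poisson(lambda_away, a);
                if (p > best_p) { best_p = p; best_h = h; best_a = a; }
            }

        printf("Most likely score: %d-%d (probability %.3f)\n", best_h, best_a, best_p);
        return 0;
    }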

So does it think Liverpool will win at Stoke? Can West Ham beat Swansea? What about Everton v Leicester?

See original here:

Premier League: Super computer predicts the results this weekend … – Daily Star

LOLCODE: I Can Has Supercomputer? – HPCwire (blog)

What programming model refers to threads as friends and uses types like NUMBR (integer), NUMBAR (floating point), YARN (string), and TROOF (Boolean)? That would be the internet-meme-based procedural programming language, known as LOLCODE. Inspired by lolspeak and the LOLCAT meme, the esoteric programming language was created in 2007 by Adam Lindsay at the Computing Department of Lancaster University.

Now a new research effort is looking to use the meme-based language as a tool to teach parallel and distributed computing concepts of synchronization and remote memory access.

It's a common complaint in high-performance computing circles: computer science curricula don't give sufficient attention to parallel computing, especially at the undergraduate level. In this age of multicore ubiquity, the need for parallel programming expertise is even more urgent. Is there a way to make teaching parallel and distributed computing more approachable? Fun, even?

That's the focus of the new research paper from David A. Richie (Brown Deer Technology) and James A. Ross (U.S. Army Research Laboratory), which documents the duo's efforts to implement parallel extensions to LOLCODE within a source-to-source compiler sufficient for the development of parallel and distributed algorithms normally implemented using conventional high-performance computing languages and APIs.

From the introduction:

The modern undergraduate demographic has been born into an internet culture where poking fun at otherwise serious issues is considered cool. Internet memes are the cultural currency by which ideas are transmitted through younger audiences. This reductionist approach using humor is very effective at simplifying often complex ideas. Internet memes have a tendency to rise and fall in cycles, and as with most things placed on the public internet, they never really go away. In 2007, the general-purpose programming language LOLCODE was developed and resembled the language used in the LOLCAT meme which includes photos of cats with comical captions, and with deliberate pattern-driven misspellings and common abbreviations found in texting and instant messenger communications.

The researchers have developed a LOLCODE compiler and propose minor syntax extensions to LOLCODE that create parallel programming semantics, enabling the compilation of parallel and distributed LOLCODE applications on virtually any platform with a C compiler and an OpenSHMEM library.
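
The paper's generated code is not reproduced here, but a minimal sketch of the kind of C-plus-OpenSHMEM program such a source-to-source compiler would target might look like the following; the neighbour-exchange pattern and variable names are illustrative assumptions rather than the authors' actual output.

    /* Minimal OpenSHMEM sketch of the remote-memory-access style that parallel
       LOLCODE would compile down to. Each processing element (PE) writes its
       rank into a symmetric variable on its right-hand neighbour. */
    #include <stdio.h>
    #include <shmem.h>

    int main(void) {
        static int neighbour_rank = -1;   /* symmetric: exists on every PE */

        shmem_init();
        int me    = shmem_my_pe();
        int npes  = shmem_n_pes();
        int right = (me + 1) % npes;

        /* Remote write: store my rank into 'neighbour_rank' on PE 'right'. */
        shmem_int_put(&neighbour_rank, &me, 1, right);
        shmem_barrier_all();              /* wait until all puts are complete */

        printf("PE %d of %d received rank %d from its left neighbour\n",
               me, npes, neighbour_rank);

        shmem_finalize();
        return 0;
    }

With an OpenSHMEM implementation that provides the usual oshcc/oshrun wrappers, this would be compiled and launched across several PEs; the Epiphany-targeted toolchain described in the paper presumably handles that step in its own way.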

They are targeting the inexpensive Parallella board, as it is an ideal educational and development platform for introducing parallel programming concepts.

“We demonstrate parallel LOLCODE applications running on the $99 Parallella board, with the 16-core Adapteva Epiphany coprocessor, as well as (a portion of) the $30 million US Army Research Laboratory's supercomputer,” they write.

Since its 2007 launch, LOLCODE development has occurred in spurts, with activity tending to occur in early April. See also: “I can has MPI,” a joint Cisco and Microsoft Cross-Animal Technology Project (CATP) that introduced LOLCODE language bindings for the Message Passing Interface (MPI) in 2013.

Learn to LOLCODE at http://lolcode.codeschool.com/levels/1/challenges/1

See the original post:

LOLCODE: I Can Has Supercomputer? – HPCwire (blog)

TalkSPORT Super Computer predicts Tottenham end of season finish – Football Insider

6th April, 2017, 10:51 PM

By Harvey Byrne

A Super Computer tasked with predicting the final Premier League table has placed Tottenham in second place.

Popular radio station TalkSPORT have provided an April update on their regular feature, in which they aim to discover the end-of-season results by feeding data into their prediction machine.

Now, the computer has tipped Spurs to achieve a second-place finish in this season's Premier League, which is the position they are currently occupying.

Mauricio Pochettino's side still have to travel to Chelsea's Stamford Bridge, alongside home fixtures against Arsenal and Manchester United, before the campaign's end in eight matches' time.

A second-place finish would mark progression for the Tottenham team after they finished in third place in the previous season.

However, many associated with the north London club will still have their eyes set on a title challenge with just seven points currently separating them from league leaders Chelsea.

Other noteworthy placings made by the TalkSPORT machine include Tottenham's fierce rivals Arsenal in fourth, which would see them continue their 20-year streak of consecutive top-four finishes.

Meanwhile, current third-placed team Liverpool have been predicted to finish as low as sixth.

In other Tottenham news, here is Spurs' best possible line-up to face Watford at the weekend.

Continued here:

TalkSPORT Super Computer predicts Tottenham end of season finish – Football Insider

In the early 80s, CIA showed little interest in "supercomputer" craze – MuckRock

April 6, 2017

As you can see from my letter, CIA has no use for a supercomputer now or in the immediate future.

In 1983, cybermania gripped the nation: the movie WarGames was released over the summer, becoming a blockbuster hit for the time and intriguing President Ronald Reagan enough to summon his closest advisors to help study emerging cyberthreats and ultimately pass the first directive on cybersecurity. But according to declassified documents, made fully public thanks to MuckRock's lawsuit, one intelligence agency made a hard pass on the computer craze.

Other agencies entreated William J. Casey's Central Intelligence Agency to get involved. The National Security Agency was convening some of the nation's best and brightest to develop a strategy for staying on top of the processing arms race.

In fact, the year before, the White House itself had sent Casey a memo asking that he designate someone to weigh in on supercomputer R&D policies.

But while Casey acknowledged that supercomputers were really important and he was flattered that other agencies picked the CIA to be on their team, it just wasn't something he felt comfortable devoting agency resources to.

In fact, not only did the agency not want to be part of the federal supercomputer club, but in a 1983 survey it said it didn't own or have access to a supercomputer, nor did it have plans to start using one.

Why the apparent lack of concern? Maybe it had to do with an undated, unsourced (and possibly culled from public sources) report that found U.S. cybercapabilities were still years ahead of the real threat: Russia.

But at some point, the director appears to have conceded that, for better or worse, supercomputers were not yet another fad and that he'd better start figuring out what exactly they were all about. Two memos from 1984 show his vigorous interest in getting up to speed on the subject.

The first response came regarding a memo on the increasing Japanese advantage when it came to building out Fifth Generation supercomputers.

The second memo came after he requested a copy of a staffer's Spectrum Magazine, which is IEEE's monthly magazine. Apparently, the director had a legendary, perhaps even alarming, appetite for reading materials.

The NSA's presentation on supercomputers is embedded below.

Image via 20th Century FOX

See original here:

In the early 80s, CIA showed little interest in "supercomputer" craze – MuckRock

Your chance to become a supercomputer superuser for free – The Register

Accurate depiction of you after attending the lectures. Pic: agsandrew/Shutterstock

HPC Blog The upcoming HPC Advisory Council conference in Lugano will be much more than just a bunch of smart folks presenting PowerPoints to a crowd. It will feature a number of sessions designed to teach HPCers how to better use their gear, do better science, and generally humble all those around them with their vast knowledge and perspicacity.

The first “best practices” session will feature Maxime Martinasso from the Swiss National Supercomputing Center discussing how MeteoSwiss (the national weather forecasting institute) uses densely populated accelerated servers as their basic server to compute weather forecast simulations.

However, when you have a lot of accelerators attached to the PCI bus of a system, you’re going to generate some congestion. How much congestion will you get and how do you deal with it? They’ve come up with an algorithm for computing congestion that characterises the dynamic usage of network resources by an application. Their model has been validated as 97 per cent accurate on two different eight-GPU topologies. Not too shabby, eh?

Another best practice session also deals with accelerators, discussing a dCUDA programming model that implements device-side RMA access. What’s interesting is how they hide pipeline latencies by over-decomposing the problem and then over-subscribing the device by running many more threads than there are hardware execution units. The result is that when a thread stalls, the scheduler immediately proceeds with the execution of another thread. This fully utilises the hardware and leads to higher throughput.

We will also see a best practices session covering SPACK, an open-source package manager for HPC applications. Intel will present a session on how to do deep learning on their Xeon Phi processor. Dr Gilles Fourestey will discuss how Big Data can be, and should be, processed on HPC clusters.

Pak Lui from Mellanox will lead a discussion on how to best profile HPC applications and wring the utmost scalability and performance out of them. Other session topics include how to best deploy HPC workloads using containers, how to use the Fabriscale Monitoring System, and how to build a more efficient HPC system.

Tutorials include a twilight session on how to get started with deep learning (you’ll need to bring your own laptop to this one), using EasyBuild and Continuous Integration tools, and using SWITCHengines to scale horizontally campus wide.

Phew, that’s a lot of stuff… and it’s all free, provided you register for the event and get yourself to Lugano by 10 April. I’ll be there covering the event, so be sure to say hi if you happen to see me.

View post:

Your chance to become a supercomputer superuser for free – The Register

Watson supercomputer working to keep you healthier – Utica Observer Dispatch

Amy Neff Roth

Watson, the Jeopardy-winning celebrity supercomputer, is bringing his considerable computing capability to bear in Central New York.

Watson and the folks at IBM Watson Health will be working with regional health care providers to help keep area residents healthier.

The providers are all part of the Central New York Care Collaborative, which includes more than 2,000 providers in six counties, including Oneida and Madison.

“What we're doing is working with partners and all different types of health care providers: hospitals, physicians, primary care physicians in particular, long-term-care facilities, behavioral health and substance abuse-type facilities, community benefit organizations, every type of health care organization,” said Executive Director Virginia Opipare. “We are working to build and connect a seamless system for health care delivery that moves this region and helps to prepare this region for a value-based pay environment.”

That pay system is one in which providers are paid for health outcomes and the quality of care provided, not a set fee for each service delivered. It's forcing providers to work together to create a more seamless system of care to keep patients healthier.

That's where Watson comes in. The collaborative has partnered with IBM Watson Health to work on population health management, a huge buzz concept in health care in which providers work to keep patients from needing their services. That's good for patients and good for health care costs.

To do that, IBM and Watson will gather data from providers' 44 different kinds of electronic health records and state Medicaid claims data, normalize and standardize the data, and analyze it. That way providers can see all the care their patients have received and can figure out how to best help each patient, and over time, the collaborative can learn about how to keep patients healthy.

“This is about identifying high-risk individuals and using Watson-based tools and services to help providers engage with patients to improve health,” said Dr. Anil Jain, vice president and chief health informatics officer, value-based care at IBM Watson Health, in a release. “As the health care industry shifts away from fee-for-service to a value-based system, care providers need integrated solutions that help them gain a holistic view of each individual within a population of patients.”

The first wave of implementation should come within the next six months, Opipare said.

The CNY Care Collaborative is one of 25 regional performing-provider organizations in the state organized under the state's Delivery System Reform Incentive Payment program to reshape health care, with a goal of cutting unnecessary hospital readmissions by 25 percent in five years. Organizations apply for state funding for projects chosen from a list of possibilities. The program is funded by $6.42 billion set aside from a federal waiver that allowed the state to keep $8 billion of federal money saved by the state's redesign of Medicaid.

Follow @OD_Roth on Twitter or call her at 315-792-5166.

Read the rest here:

Watson supercomputer working to keep you healthier – Utica Observer Dispatch

Credit Card-Sized Super Computer That Powers AI Such As Robots And Drones Unveiled By Nvidia – Forbes


A supercomputer the size of a credit card that can power artificial intelligence (AI) such as robots, drones and smart cameras has been unveiled by computer graphics firm Nvidia. Revealed at an event in San Francisco, the super intelligent yet tiny …

Read more from the original source:

Credit Card-Sized Super Computer That Powers AI Such As Robots And Drones Unveiled By Nvidia – Forbes

Weather bureau’s $12m super computer yet to deliver – Chinchilla News

THE Bureau of Meteorology is upgrading its weather forecasting models as its multi-million dollar “supercomputer” comes under fire over its accuracy.

It follows a number of failed short-term forecasts in Queensland, where heavy rainfalls failed to eventuate and gale-force gusts were not predicted during a storm last June.

The bureau’s $12 million-a-year XC40 supercomputer was installed last year to “successfully support the Bureau’s capacity to predict”.

A spokesman said the benefits of the computer were yet to be seen.

“As we continue to implement the program, the increase to computing power and storage capability will allow the Bureau to run more detailed numerical models more often, run forecasts more frequently, issue warnings more often and provide greater certainty and precision in our forecasting,” he said.

American supercomputer manufacturer Cray Inc signed a six-year contract with the Bureau of Meteorology in 2015, totalling around $77 million.

The supercomputer’s capabilities and accuracy came under fire last week after the bureau forecast a week of heavy rain in Brisbane, which failed to eventuate.

Fellow meteorologists have defended the bureau’s “challenging” job.

Weatherzone senior meteorologist Jacob Cronje said despite the amount of technology around, predicting the weather was always tricky.

“Uncertainty will always be part of weather forecasting,” he said.

“We may predict one thing but the slightest change in weather conditions will change the outcome exponentially.”

A 40 per cent chance of showers and a possible storm is predicted tomorrow, with a maximum of 31C.

The rest is here:

Weather bureau’s $12m super computer yet to deliver – Chinchilla News

IISER Pune to acquire 25 crore supercomputer by September – Times of India

PUNE: Indian Institute of Science Education and Research (IISER), Pune, will be the first among all the IISERs to get a supercomputer – a 500-teraflop High Performance Computer (HPC) system.

“Supercomputer helps in extremely fast and accurate computing of complex calculations and it will help us in all kinds of research. Both the students and teachers will be able to use this facility,” said A A Natu, senior faculty member from IISER, adding that while this year there won’t be any courses on HPC, it can be looked into next year. The ministry of electronics and information technology (MeitY) has asked the Centre for Development of Advanced Computing for support, assembly and installation of the High Performance Computer system at IISER.

The computer is estimated to cost about Rs 25 crore and is expected to be ready by September 2017.

Read more here:

IISER Pune to acquire 25 crore supercomputer by September – Times of India

super computer | eBay

View post:

super computer | eBay

IBM’s Watson supercomputer leading charge into early melanoma detection – The Australian Financial Review

IBM melanoma research manager Rahil Garnavi (L) and MoleMap Australia diagnosing dermatologist Dr Martin Haskett. The two firms are collaborating on early detection of skin cancer.

IBM is breaking ground in the early detection of skin cancer using its supercomputer Watson, potentially saving the federal government hundreds of millions of dollars.

The tech giant has partnered with skin cancer detection program MoleMap and the Melanoma Institute of Australia to teach the computer how to recognise cancerous skin lesions.

The initial focus is on the early detection of melanomas, which are the rarest but most deadly type of skin cancer, and make up just 2 per cent of diagnoses but 75 per cent of skin cancer deaths.

IBM vice-president and lab director of IBM Research Australia, Joanna Batstone, told AFR Weekend her colleagues, including melanoma research manager Rahil Garnavi, had so far fed 41,000 melanoma images into the system with accompanying clinician notes and it had a 91 per cent accuracy at detecting skin cancers.

The Watson supercomputer uses machine learning algorithms in conjunction with image recognition technology to detect patterns in the moles.

“Today, if you have a skin lesion, a clinician’s accuracy is about 60 per cent. If you use a high-powered DermaScope [a digital microscope], a trained clinician can identify with 80 per cent accuracy,” said Ms Batstone.

“We want to achieve 90 per cent accuracy for all data and we also want it to do more than just say yes or no in regards to whether or not it’s cancerous, we want it to be able to identify what type of skin cancer it is, or if it’s another type of skin disease.”

Australian and New Zealanders have the highest rates of skin cancer in the world. In 2016 there were more than 13,000 new cases of melanoma skin cancer.

Of those with melanoma, there were almost 1800 deaths, making up 3.8 per cent of all cancer deaths in Australia in 2016.

Non-melanoma skin cancers alone are estimated to cost the government more than $703 million a year, according to 2010 research using Medicare data.

Martin Haskett from MoleMap said diagnosis rates of skin cancer were relatively stable, with a slight decrease in younger people thanks to sun avoidance education campaigns.

“It occurs more frequently in older people and we’re in a situation where the population is ageing. The older population has not been exposed to the sun protection campaigns and as you get older your immune system performs differently and is less capable,” Dr Haskett said.

But even in younger generations, awareness does not always equate to action. Olympic swimmer Mack Horton had a skin cancer scare last year after a doctor watching TV saw an odd looking mole on him while he was racing and alerted the team doctor.

“With young people in general a tan is still seen as cool,” he said.

“When they pulled me aside and notified me I didn’t think much of it… but eight weeks later I got it checked and they said they’d have to take it out that day and then they said they would rush the results and that’s when it dawned on me how serious it was.”

IBM is setting up a free skin check event at Sydney’s Bondi Beach over the weekend.

Beach goers will be able to stand in front of a smart mirror created by IBM that takes in their visual appearance and asks them questions about their age, family history and behavioural patterns. Within minutes it then generates a report on that individual’s skin cancer risk. MoleMap will also be checking people’s moles for free.

More here:

IBM’s Watson supercomputer leading charge into early melanoma detection – The Australian Financial Review

Titan Supercomputer Assists With Polymer Nanocomposites Study – HPCwire (blog)

OAK RIDGE, Tenn., March 8 – Polymer nanocomposites mix particles billionths of a meter (nanometers, nm) in diameter with polymers, which are long molecular chains. Often used to make injection-molded products, they are common in automobiles, fire retardants, packaging materials, drug-delivery systems, medical devices, coatings, adhesives, sensors, membranes and consumer goods. When a team led by the Department of Energy's Oak Ridge National Laboratory tried to verify that shrinking the nanoparticle size would adversely affect the mechanical properties of polymer nanocomposites, they got a big surprise.

“We found an unexpectedly large effect of small nanoparticles,” said Shiwang Cheng of ORNL. The team of scientists at ORNL, the University of Illinois at Urbana-Champaign (Illinois) and the University of Tennessee, Knoxville (UTK) reported their findings in the journal ACS Nano.

Blending nanoparticles and polymers enables dramatic improvements in the properties of polymer materials. Nanoparticle size, spatial organization and interactions with polymer chains are critical in determining behavior of composites. Understanding these effects will allow for the improved design of new composite polymers, as scientists can tune mechanical, chemical, electrical, optical and thermal properties.

Until recently, scientists believed an optimal nanoparticle size must exist. Decreasing the size would be good only to a point, as the smallest particles tend to plasticize at low loadings and aggregate at high loadings, both of which harm macroscopic properties of polymer nanocomposites.

The ORNL-led study compared polymer nanocomposites containing particles 1.8 nm in diameter and those with particles 25 nm in diameter. Most conventional polymer nanocomposites contain particles 10–50 nm in diameter. Tomorrow, novel polymer nanocomposites may contain nanoparticles far less than 10 nm in diameter, enabling new properties not achievable with larger nanoparticles.

Well-dispersed small sticky nanoparticles improved properties, one of which broke records: raising the material's temperature by less than 10 degrees Celsius caused a fast, million-fold drop in viscosity. A pure polymer (without nanoparticles) or a composite with large nanoparticles would need a temperature increase of at least 30 degrees Celsius for a comparable effect.

“We see a shift in paradigm where going to really small nanoparticles enables accessing totally new properties,” said Alexei Sokolov of ORNL and UTK. That increased access to new properties happens because small particles move faster than large ones and interact with fewer polymer segments on the same chain. Many more polymer segments stick to a large nanoparticle, making dissociation of a chain from that nanoparticle difficult.

“Now we realize that we can tune the mobility of the particles: how fast they can move, by changing particle size, and how strongly they will interact with the polymer, by changing their surface,” Sokolov said. “We can tune properties of composite materials over a much larger range than we could ever achieve with larger nanoparticles.”

Better together

The ORNL-led study required expertise in materials science, chemistry, physics, computational science and theory. “The main advantage of Oak Ridge National Lab is that we can form a big, collaborative team,” Sokolov said.

Cheng and UTK's Bobby Carroll carried out experiments they designed with Sokolov. Broadband dielectric spectroscopy tracked the movement of polymer segments associated with nanoparticles. Calorimetry revealed the temperature at which solid composites transitioned to liquids. Using small-angle X-ray scattering, Halie Martin (UTK) and Mark Dadmun (UTK and ORNL) characterized nanoparticle dispersion in the polymer.

To better understand the experimental results and correlate them to fundamental interactions, dynamics and structure, the team turned to large-scale modeling and simulation (by ORNL's Bobby Sumpter and Jan-Michael Carrillo) enabled by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

“It takes us a lot of time to figure out how these particles affect segmental motion of the polymer chain,” Cheng said. “These things cannot be visualized from experiments that are macroscopic. The beauty of computer simulations is they can show you how the chain moves and how the particles move, so the theory can be used to predict temperature dependence.”

Shi-Jie Xie and Kenneth Schweizer, both of Illinois, created a new fundamental theoretical description of the collective activated dynamics in such nanocomposites and quantitatively applied it to understand novel experimental phenomena. The theory enables predictions of physical behavior that can be used to formulate design rules for optimizing material properties.

Carrillo and Sumpter developed and ran simulations on Titan, America's most powerful supercomputer, and wrote codes to analyze the data on the Rhea cluster. The LAMMPS molecular-dynamics code calculated how fast nanoparticles moved relative to polymer segments and how long polymer segments stuck to nanoparticles.

“We needed Titan for fast turn-around of results for a relatively large system (200,000 to 400,000 particles) running for a very long time (100 million steps). These simulations allow for the accounting of polymer and nanoparticle dynamics over relatively long times,” Carrillo said. “These polymers are entangled. Imagine pulling a strand of spaghetti in a bowl. The longer the chain, the more entangled it is. So its motion is much slower. Molecular dynamics simulations of long, entangled polymer chains were needed to calculate time-correlation functions similar to experimental conditions and find connections or agreements between the experiments and theories proposed by colleagues at Illinois.”
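
As a purely illustrative aside (not the team's actual LAMMPS analysis), a time-correlation function of the kind mentioned here is, at its simplest, a normalized autocorrelation over a trajectory; the toy C sketch below uses a made-up scalar time series and lag window just to show the shape of the calculation.

    /* Toy normalized autocorrelation C(t) = <x(0)x(t)> / <x(0)x(0)> for a scalar
       series: a stand-in for time-correlation functions computed from
       molecular-dynamics trajectories. The "trajectory" here is synthetic. */
    #include <stdio.h>
    #include <math.h>

    #define NSTEPS 1000   /* hypothetical trajectory length */
    #define MAXLAG 100    /* hypothetical longest lag evaluated */

    int main(void) {
        double x[NSTEPS];

        /* Synthetic data: a decaying oscillation standing in for real output. */
        for (int i = 0; i < NSTEPS; i++)
            x[i] = exp(-i / 200.0) * cos(0.1 * i);

        for (int lag = 0; lag <= MAXLAG; lag += 10) {
            double num = 0.0, den = 0.0;
            for (int i = 0; i + lag < NSTEPS; i++) {
                num += x[i] * x[i + lag];   /* <x(0)x(t)> accumulator */
                den += x[i] * x[i];         /* <x(0)x(0)> accumulator */
            }
            printf("lag %3d  C = %+.4f\n", lag, num / den);
        }
        return 0;
    }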

The simulations also visualized how nanoparticles moved relative to a polymer chain. Corroborating experiment and theory moves scientists closer to verifying predictions and creates a clearer understanding of how nanoparticles change behavior, such as how altering nanoparticle size or nanoparticle-polymer interactions will affect the temperature at which a polymer loses enough viscosity to become liquid and start to flow. Large particles are relatively immobile on the time scale of polymer motion, whereas small particles are more mobile and tend to detach from the polymer much faster.

The title of the paper is “Big Effect of Small Nanoparticles: A Shift in Paradigm for Polymer Nanocomposites.”

Source: ORNL

View post:

Titan Supercomputer Assists With Polymer Nanocomposites Study – HPCwire (blog)

Compressing Software Development Cycles with Supercomputer-based Spark – insideHPC

Anthony DiBiase, Cray

In this video, Anthony DiBiase from Cray presents: Compress Software Development Cycles with supercomputer based Spark.

Do you need to compress your software development cycles for services deployed at scale and accelerate your data-driven insights? Are you delivering solutions that automate decision making & model complexity using analytics and machine learning on Spark? Find out how a pre-integrated analytics platform that's tuned for memory-intensive workloads and powered by the industry-leading interconnect will empower your data science and software development teams to deliver amazing results for your business. Learn how Cray's supercomputing approach in an enterprise package can help you excel at scale.

Anthony DiBiase is an analytics infrastructure specialist at Cray, based in Boston, with over 25 years of program & project management experience in software development & systems integration. He matches life sciences software groups to computing technology for leading pharma & research organizations. Previously, he helped Novartis on NGS (next generation sequencing) workflows and large genomics projects, and later assisted Children's Hospital of Boston on: systems & translational biology, multi-modal omics, disease models (esp. oncology & hematology), and stem cell biology. Earlier in his career, he delivered high-throughput inspection systems featuring image processing & machine learning algorithms while at Eastman Kodak, multi-protocol gateway solutions for Lucent Technologies, and mobile telephone solutions for Harris Corporation.

Sign up for our insideHPC Newsletter

Read this article:

Compressing Software Development Cycles with Supercomputer-based Spark – insideHPC

Microsoft, Facebook Build Dualing Open Standard GPU Servers for Cloud – TOP500 News

It was only a matter of time until someone came up with an Open Compute Project (OCP) design for a GPU-only accelerator box for the datacenter. That time has come.

In this case though, it was two someones: Microsoft and Facebook. This week at the Open Compute Summit in Santa Clara, California, both hyperscalers announced different OCP designs for putting eight of NVIDIA's Tesla P100 GPUs into a single chassis. Both fill the role of a GPU expansion box that can be paired with CPU-based servers in need of compute acceleration. The idea is to disaggregate the GPUs and CPUs in cloud environments so that users may flexibly mix these processors in different ratios, depending upon the demands of the particular workload.

The principal application target is machine learning, one of the P100's major areas of expertise. An eight-GPU configuration of these devices will yield over 80 teraflops at single precision and over 160 teraflops at half precision.

Source: Microsoft

Microsoft's OCP contribution is known as HGX-1. Its principal innovation is that it can dynamically serve up as many GPUs to a CPU-based host as it may need (well, up to eight, at least). It does this via four PCIe switches, an internal NVLink mesh network, plus a fabric manager to route the data through the appropriate connections. Up to four of the HGX-1 expansion boxes can be glued together for a total of 32 GPUs. Ingrasys, a Foxconn subsidiary, will be the initial manufacturer of the HGX-1 chassis.

The Facebook version, which is called Big Basin, looks quite similar. Again, P100 devices are glued together via an internal mesh, which they describe as similar to the design of the DGX-1, NVIDIA's in-house server designed for AI research. A CPU server can be connected to the Big Basin chassis via one or more PCIe cables. Quanta Cloud Technology will initially manufacture the Big Basin servers.

Source: Facebook

Facebook said they were able to achieve a 100 percent performance improvement on ResNet50, an image classification model, using Big Basin, compared to its older Big Sur server, which uses the Maxwell-generation Tesla M40 GPUs. Besides image classification, Facebook will use the new boxes for other sorts of deep learning training, such as text translation, speech recognition, and video classification, to name a few.

In Microsoft's case, the HGX-1 appears to be the first of multiple OCP designs that will fall under its Project Olympus initiative, which the company unveiled last October. Essentially, Project Olympus is a related set of OCP hardware building blocks for cloud hardware. Although HGX-1 is suitable for many compute-intensive workloads, Microsoft is promoting it for artificial intelligence work, calling it “the Project Olympus hyperscale GPU accelerator chassis for AI,” according to a blog posted by Azure Hardware Infrastructure GM Kushagra Vaid.

Vaid also set the stage for what will probably become other Project Olympus OCP designs, hinting at future platforms that will include the upcoming Intel Skylake Xeon and AMD Naples processors. He also left open the possibility that Intel FPGAs or Nervana accelerators could work their way into some of these designs.

In addition, Vaid brought up the possibility of an ARM-based OCP server via the company's engagement with chipmaker Cavium. The software maker has already announced it is using Qualcomm's new ARM chip, the Centriq 2400, in Azure instances. Clearly, Microsoft is keeping its cloud options open.

See more here:

Microsoft, Facebook Build Dualing Open Standard GPU Servers for Cloud – TOP500 News

Supercomputer – Simple English Wikipedia, the free encyclopedia

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include shared memory, distributed memory and array. Supercomputers with shared memory are developed using parallel computing and pipelining concepts. Supercomputers with distributed memory consist of many (about 100 to 10,000) nodes. The CRAY series from Cray Research, the VP 2400/40 and the NEC SX-3 are shared-memory types, while the nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S and CM-5 are distributed-memory types.

An array-type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (also an array type) were developed. Supercomputers that use physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.
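
As a small illustration of the shared-memory idea described above (not taken from the article), the C sketch below uses OpenMP so that several threads update one array in a single shared address space; the array size and the summed quantity are arbitrary choices.

    /* Minimal shared-memory parallelism sketch using OpenMP: all threads work
       on the same array in one address space and combine a sum via reduction. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double a[N];   /* one array, visible to every thread */

    int main(void) {
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * (double)i;   /* each thread fills its share of the array */
            sum += a[i];
        }

        printf("threads available: %d, sum = %.1f\n", omp_get_max_threads(), sum);
        return 0;
    }

A distributed-memory machine would instead split the array across nodes and exchange data with explicit messages or remote puts and gets (MPI or SHMEM style), rather than sharing one address space.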

See the original post here:

Supercomputer – Simple English Wikipedia, the free encyclopedia

Credit Card-Sized Super Computer That Powers AI Such As Robots … – Forbes


Read more here:

Credit Card-Sized Super Computer That Powers AI Such As Robots … – Forbes

Final Premier League table predicted: Super computer reveals where each side should finish – Daily Star

A SUPER computer has predicted how the final Premier League table will look as we approach the finish line. *Data from talkSPORT.

Although it looks as though Chelsea are running away with the Premier League title, there’s plenty left to be decided among the teams below them.

The five clubs directly behind the Blues are all battling for Champions League places.

And at the other end of the table, an exciting scrap to stay in the English top flight is taking place.

So where will your side finish?

TalkSPORT have fed the data into their super computer – and they may have the answer.

Read this article:

Final Premier League table predicted: Super computer reveals where each side should finish – Daily Star

