

China plans new supercomputer for June 2018 – Neowin

To put it frankly, China has been absolutely killing the competition in the TOP500 supercomputer rankings for the last couple of years. Its Sunway TaihuLight has been sitting at the top of the pile well ahead of the Tianhe-2, another Chinese supercomputer. Now, the country is saying that it has a new computer which will be ready in 12 months.

The new Sunway exascale computer being developed by the National Supercomputer Center (NSC) and the National Research Center of Parallel Computer Engineering and Technology (NRCPC) will be able to execute a quintillion calculations per second, making it eight times faster than the Sunway TaihuLight, which scored 93 petaflops to take first place on the TOP500 list.
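
As a quick sanity check on those figures (a back-of-the-envelope sketch in Python; the roughly 125-petaflop theoretical peak for TaihuLight is an assumption drawn from public TOP500 data, not a figure from this article):

```python
# Unit check: one quintillion calculations per second = 1 exaflop.
exaflop = 1e18
taihulight_linpack = 93e15    # Linpack score quoted above
taihulight_peak = 125e15      # assumed theoretical peak (public TOP500 data)

print(exaflop / taihulight_linpack)  # ~10.8x the Linpack score
print(exaflop / taihulight_peak)     # ~8x the peak, matching "eight times faster"
```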

The new supercomputer has already gone into production in Jinan. Once completed, the new computer will support further research and scientific applications in fields including marine environments, biological information, aviation, and aerospace. As we get better supercomputers, computationally intensive tasks such as predicting the weather and climate change will become easier to perform and the results will be more accurate.

In November, we reported that Japan would be gunning for the top spot on the supercomputer rankings. The Japanese Ministry of Economy, Trade and Industry had planned to spend $173 million on developing a new supercomputer called ABCI or AI Bridging Cloud Infrastructure. The country hopes that it can edge in front of China by scoring 130 petaflops on the Linpack benchmark. With the latest news from China, Japan might not have the opportunity to move ahead after all.

Source: CGTN


IBM, Air Force to collaborate on brainy supercomputer – Washington Technology


IBM and the Air Force Research Laboratory have partnered to develop an artificial intelligence-based supercomputer with what they call a brain-inspired, neural network design.

Based on a 64-chip array, the company and AFRL are designing the new IBM TrueNorth Neurosynaptic System to recognize patterns and carry out integrated sensory processing functions. IBM first developed a TrueNorth platform for a Defense Advanced Research Projects Agency program in partnership with Cornell University.

Both IBM and AFRL envision TrueNorth as able to convert data such as images, video, audio and text from multiple, distributed sensors into symbols in real time. AFRL seeks to combine that so-called “right-brain” function with “left-brain” symbol processing capabilities in conventional computer systems.

The goal is to enable multiple data sources to run in parallel against the same neural network and help independent neural networks form an ensemble to also run in parallel on the same data.
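
A minimal sketch of those two modes, with plain Python callables standing in for neural networks (illustrative only; this is not TrueNorth's actual programming interface):

```python
# Data parallelism vs. model (ensemble) parallelism, in miniature.
from concurrent.futures import ThreadPoolExecutor

def data_parallel(network, data_sources):
    """One network, many data sources evaluated in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(network, data_sources))

def ensemble_parallel(networks, data):
    """Independent networks form an ensemble run in parallel on the same data."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda net: net(data), networks))

# Toy usage, with trivial functions standing in for networks:
print(data_parallel(len, ["audio", "video", "text"]))       # [5, 5, 4]
print(ensemble_parallel([str.upper, str.title], "sensor"))  # ['SENSOR', 'Sensor']
```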

Once complete, the new TrueNorth platform’s processing power is expected to equal that of 64 million neurons and 16 billion synapses, while the processor component consumes energy equal to that of a 10-watt light bulb.

AFRL is investigating potential uses of the system in embedded, mobile and autonomous settings where limitations exist on the size, weight and power of platforms.

About the Author

Ross Wilkers is a senior staff writer for Washington Technology. He can be reached at rwilkers@washingtontechnology.com. Follow him on Twitter: @rosswilkers. Also find and connect with him on LinkedIn.


AFRL Taps IBM to Build Brain-Inspired AI Supercomputer – insideHPC

Today IBM announced they are collaborating with the U.S. Air Force Research Laboratory (AFRL) on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb: a mere 10 watts.

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both data parallelism, where multiple data sources can be run in parallel against the same neural network, and model parallelism, where independent neural networks form an ensemble that can be run in parallel on the same data.

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million, an 800 percent annual increase over six years.”

The system fits in a 4U-high (7-inch) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores, creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per watt), orders of magnitude less energy than a conventional computer running inference on the same neural network.
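
The per-watt and per-rack figures follow directly from the numbers quoted (a quick arithmetic check):

```python
frames_per_s = 1500
power_w = 0.200                  # 200 mW
print(frames_per_s / power_w)    # 7500 frames/s per watt, i.e. ">7,000"

neurons_per_system = 64_000_000
print(8 * neurons_per_system)    # 512,000,000 neurons in an eight-system rack
```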

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum. Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.



Computing Technology, Data Analytics Products – Cray

It takes many years and many generations of technology developments to be a successful supercomputing provider. With every Cray system, you get the benefit of decades of supercomputing experience. We offer a comprehensive portfolio of supercomputer, storage and data analytics solutions for a range of budgets. All hardware and software is integrated, and every solution comes with the assurance of Cray support.


Super Bowl 51 Super Computer Picks | Odds Shark

If the OddsShark Super Computer becomes a sentient being, we’re all doomed. It’s gone 9-1 against the spread and 8-2 straight up during the 2017 NFL postseason and will more than likely become our robot overlord sooner rather than later if it keeps improving at the rate it has. All Matrix theories aside, the computer is cleaning up and is back one more time this season for its Super Bowl 51 pick, and it’s siding with the underdog Atlanta Falcons.

The computer has been riding the Falcons all through the playoffs and correctly predicted that they’d blow out both the Seahawks and Packers. This pick is a little different, however, as the Dirty Birds will now have to face the consensus best team in football for the right to lift the Lombardi Trophy.

With a predicted score line of 29.3-21.1 for the Falcons, the computer is very confident in Atlanta and I have to agree. I wrote about the three reasons why the Falcons are going to win the Super Bowl, so it appears the computer and I are wired quite similarly. We’ve been on the same page on just about everything this postseason, so maybe when the machines rise up, they’ll keep me around as a pet or something.

Another significant note regarding that projected score is that it would not come anywhere close to breaching the record-setting total that opened at 58.5. The public is heavily on the side of the OVER and, given how these two offenses have been playing, it’s hard to disagree.
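
For what it's worth, the arithmetic backs the UNDER case (a one-line check):

```python
falcons, patriots = 29.3, 21.1
print(falcons + patriots)   # ~50.4 projected points, well under the 58.5 total
```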

A loss and failure to cover for the Patriots here would be just their fourth ATS loss of the season. Win or lose, that’s an incredible record and if you’ve been backing them, congratulations: you probably don’t even need a win here.

Although the Falcons’ specific trends don’t exactly shine a great light on Atlanta’s SB odds, the underdog has won and covered the last five years at the Super Bowl. I know that’s not incredibly specific to either of these teams but it definitely paints a telling picture. Vegas sets lines very carefully to get the most money on the side they think will lose, and opening the Patriots as a small favorite has definitely done that.

Over 60% of the public is on the Patriots and, if the computer’s right, the majority of people betting on a side will be very disappointed when the dust settles on Super Bowl Sunday.

The computer has completely disregarded the Patriots’ 4-0 SU and ATS record against the Falcons in their last four meetings and the Dirty Birds’ 0-5 SU and ATS record in their last five games as underdogs in the playoffs.



The Air Force and IBM are building an AI supercomputer – Engadget

IBM and the USAF announced on Friday that the machine will run on an array of 64 TrueNorth Neurosynaptic chips. The TrueNorth chips are wired together like, and operate in a similar fashion to, the synapses within a biological brain. Each core is part of a distributed network and operates in parallel with the others on an event-driven basis. That is, these chips don’t require a clock, as conventional CPUs do, to function.
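
For readers unfamiliar with event-driven operation, here is a toy illustration of the idea: work happens only when an input event (a spike) arrives, not on every tick of a global clock. This is a conceptual sketch only, not TrueNorth's actual programming model:

```python
import heapq

def integrate_and_fire(spikes, threshold=1.0, decay=0.9):
    """spikes: (time, weight) input events for a single digital neuron."""
    heapq.heapify(spikes)
    potential, last_t, fired_at = 0.0, 0.0, []
    while spikes:
        t, w = heapq.heappop(spikes)
        potential *= decay ** (t - last_t)   # passive decay between events
        potential += w                       # integrate the incoming spike
        last_t = t
        if potential >= threshold:           # fire and reset
            fired_at.append(t)
            potential = 0.0
    return fired_at

print(integrate_and_fire([(0.0, 0.6), (1.0, 0.6), (5.0, 0.4)]))  # fires at t=1.0
```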

What’s more, because of the distributed nature of the system, even if one core fails, the rest of the array will continue to work. This 64-chip array will contain the processing equivalent of 64 million neurons and 16 billion synapses, yet absolutely sips energy — each processor consumes just 10 watts of electricity.

Like other neural networks, this system will be put to use in pattern recognition and sensory processing roles. The Air Force wants to combine the TrueNorth’s ability to convert multiple data feeds — whether it’s audio, video or text — into machine readable symbols with a conventional supercomputer’s ability to crunch data.

This isn’t the first time that IBM’s neural chip system has been integrated into cutting-edge technology. Last August, Samsung installed the chips in its Dynamic Vision Sensors, enabling cameras to capture images at up to 2,000 fps while burning through just 300 milliwatts of power.


What is the most powerful supercomputer in Ireland? – Siliconrepublic.com

Six of the seven most powerful computers in Ireland are owned by one company, with new entries on the list more than doubling the country’s HPC capacity.

Investment in high-performance computers (HPCs) in Ireland is continuing apace, with two new machines in recent months storming into the worldwide top 200.

Known only as “Company M”, the owner (a software company rather than a research centre) has seen its latest toys enter the global ranking of supercomputers at 196 and 197, respectively.

These supercomputers represent the second- and third-highest positions ever recorded by Irish computers on the global Top500 list; both posted a Linpack Rmax of 819.16 teraflops.

In 2008, a Xeon quad core machine operated by the Irish Centre for High-End Computing (ICHEC) reached 117th on the list, falling out of the top 100 within two years.

ICHEC still has one of Ireland’s most powerful machines, though, with Fionn the only computer outside of Company M’s array that makes it into the top seven domestically (sixth).

This more than doubles Irish HPC capacity, which is up from 1.46 petaflops in November 2016 to 3.01 petaflops today.
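
The arithmetic roughly lines up with the two new entries (a sketch; the small gap between the sum and the quoted 3.01 petaflops presumably reflects other changes to the list):

```python
prev_capacity_pf = 1.46           # November 2016 figure quoted above
new_entries_pf = 2 * 0.81916      # the two Company M machines
print(prev_capacity_pf + new_entries_pf)   # ~3.10 PF vs the 3.01 PF quoted
```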

Ireland has ranked on the Top500 list 29 times over a history of 23 years, with a total of 18 machines. More than half of these machines (11) and rankings (18) have come in the last six years, representing Ireland’s increasing pace of HPC investment.

“The continued growth of the Irish Supercomputer List reflects an exciting period of high-performance computing expansion,” said Dr Brett Becker of the School of Computer Science, University College Dublin.

“With emerging technologies in data analytics, AI and machine learning driving the proliferation of high-performance computing globally, it is important that Ireland continues to invest in high-performance computing,” said Becker, who also maintains the Irish Supercomputer List.

Participating as close to the top of the overall global computing list is important, he said, in order to remain globally competitive in today’s emerging technologies that promise to drive the future economy and to improve the quality of people’s lives.

Ireland’s history in the global Top500 supercomputer ranking. Image: Irish Supercomputer List

Two Chinese supercomputers and an upgraded supercomputer in Switzerland now rank ahead of the US in the overall global list, released earlier this week.

China last year revealed the most powerful machine in the world, the Sunway TaihuLight, with 93 petaflops of processing power. It is this machine that still reigns supreme.

Now, the supercomputer arms race is heating up once again, with news that the US Department of Energy is pumping $258m into research in this field across six American tech companies: IBM, Intel, HP Enterprise, Nvidia, Cray and AMD.

The purpose of the PathForward programme, the department said, is to maximise the energy efficiency and overall performance of future large-scale supercomputers.


Makers of TaihuLight Supercomputer Offer Commercial Version – TOP500 News

One of the more unusual pieces of news at this year’s ISC High Performance conference was the announcement by the National Supercomputing Center in Wuxi that it will be offering a cut-down version of the Sunway TaihuLight supercomputer for more mainstream HPC users.

TaihuLight is the reigning champ on the TOP500 list, delivering a whopping 93 petaflops on the Linpack benchmark. Besides being the number one system, its other big claim to fame is that it is constructed almost entirely from Chinese-made componentry. In particular, the system is powered by the 260-core ShenWei processor, known as the SW26010. Each of TaihuLight’s 40,960 ShenWei chips delivers three teraflops of peak performance.
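
Those per-chip figures imply the system's peak, which is comfortably above its Linpack score (a quick check):

```python
chips = 40960
peak_tf_per_chip = 3.0
print(chips * peak_tf_per_chip / 1000)   # ~122.9 PF peak, vs 93 PF on Linpack
```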

The commercial version they announced at ISC is called the Sunway Micro and is based on a dual-socket SW26010 server node. The system is aimed at a broad spectrum of industrial and research applications, including deep learning, oil & gas exploration and climate modeling.

Source: National Supercomputing Center in Wuxi

The two-processor design means each node delivers a very respectable six peak teraflops. Unlike the TaihuLight supercomputer, whose single-socket nodes were outfitted with a scant 32 GB of memory, the Sunway Micro can be equipped with 64 GB to 256 GB. That gives Micro buyers the option to have a lot more local memory to feed these high-flying ShenWei chips. Each node is also equipped with 12 GB of local storage of undefined type and origin.

Folks at the Wuxi booth during the ISC exhibition revealed that the Micro nodes can be clustered together via a network based on InfiniBand technology, which apparently is similar, but not identical, to the TaihuLight network implementation. Given that these servers will be used in relatively small clusters, they didn’t have to develop a network for supercomputer-level scalability.

One of the most unusual aspects of the Sunway Micro is that it is being sold by the National Supercomputing Center in Wuxi. That might seem like an odd thing for a supercomputing center to do, given its public mission. But since the center supplies the system software and developer toolset for these ShenWei-based machines, it basically acts as a system integrator for the commercial offering. As with the TaihuLight, the Micro was developed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC).

Software support includes C/C++ and Fortran compilers for the ShenWei, as well as supporting runtime libraries. For parallel software development, Wuxi includes MPI, OpenACC and Athread implementations targeted to the ShenWei platform. An integrated development environment, with a debugger and performance monitor, is also included.

Besides selling the standard version of the Micro, the Wuxi center will also provide customized solutions. Pricing for the system was not made public.


Lenovo builds 11.1 petaflop MareNostrum 4 supercomputer – DatacenterDynamics

Lenovo has delivered what it says is the world’s largest next-generation Intel-based supercomputer at the Barcelona Supercomputing Center (BSC).

The 11.1 petaflop high performance computing (HPC) system, called MareNostrum 4, is at the Chapel Torre Girona at the Polytechnic University of Catalonia, Barcelona, Spain, one of the most beautiful data centers in the world.

MareNostrum 1. Source: Barcelona Supercomputing Center

The HPC system will be used for scientific research ranging from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition.

It features 3,400 nodes of Lenovo’s next-generation servers, with Intel Xeon scalable processors, interconnected by more than 60 kilometers of Intel Omni-Path Technology 100 Gbps network cabling.
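
Dividing the quoted figures gives the average per-node performance (a rough check; actual per-node peak depends on the processor configuration):

```python
system_pf = 11.1
nodes = 3400
print(system_pf * 1000 / nodes)   # ~3.3 teraflops per node, on average
```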

While there are plans to expand the system, it is currently the 13th most powerful supercomputer according to the TOP500 list.

“The fast delivery, installation and optimization of the MareNostrum 4 system at BSC showcases Lenovo’s end-to-end, high-performance computing strength,” Kirk Skaugen, the recently appointed president of Lenovo’s data center group, said.

“Building on our 25 years of history in x86 server computing and our number one position in x86 server customer satisfaction and reliability, our goal at Lenovo is to be the largest supercomputing company on earth, helping solve [humanity’s] biggest challenges through the rapid advancement of technology and innovation.”

Madhu Matta, VP & GM of HPC and AI at Lenovo, added: “From the lab to the factory, to the on-site implementation teams, the delivery of a system of this size and complexity demands a superior level of integration and skill.

“It requires a focus on a holistic customer experience that very few companies are capable of delivering.”

The company, which this week refreshed its data center lineup, also announced plans to upgrade its Global HPC Innovation Center in Stuttgart, Germany with 6,000 cores of the next-generation Intel Xeon scalable processors and Nvidia GPUs.


Simply Beautiful: MareNostrum 4 Supercomputer Sports 13.7 Petaflops – insideHPC

Over at Lenovo, Gavin O’Hara writes that the world’s most beautiful supercomputer center now sports a 13.7 petaflop system so novel in design that it has captured the attention of the global HPC community. It landed at #13 on the TOP500 this week, and that’s just the beginning.

In a converted 19th-century church on the outskirts of Barcelona sits a computer so overwhelmingly powerful, it could someday save us all.

Save us from what? We’re not sure yet. But one day soon a scientific or medical research breakthrough will happen and its origins will be traced back to a glass-encased room inside the Torre Girona Chapel. Sitting within is a hulking mass of supercomputing power: a whopping 3,400 servers connected by 48 kilometers of cable and wire.

Torre Girona, nestled inside the Barcelona Supercomputing Center on the campus of the Polytechnic University of Catalonia, was used as a Catholic Church until 1960. The church was deconsecrated in the 1970s but, the longer you spend here seeing how supercomputing speed can enable lightning-fast insight, the more you start to sense the presence of a higher power.

This is technology at its inquisitive best. And it all starts with the specs of the monster they call MareNostrum.

Specifications

To consider the sheer power and scale of MareNostrum’s High Performance Computing capabilities is to test your own knowledge of large-scale counting units. You see, for supercomputing nerds it’s all about FLOPs, or Floating Point Operations per Second. The original MareNostrum 1, installed in 2004, had a calculation capacity of 42.35 teraflops, meaning 42.35 trillion operations per second. Not bad, I guess, until you consider that the 2017 version (MareNostrum 4) blows that out of the water: it possesses 322 times the speed of the original.

“The new supercomputer has a performance capacity of 13.7 petaflops and will be able to carry out 13,677 trillion operations per second,” says Lenovo VP Wilfredo Sotolongo as we gaze upwards inside the chapel. Sotolongo not only works closely with the BSC, he actually lives near Torre Girona in Barcelona.

As I try to get my head around all these unfamiliar units of measure, Sotolongo lays it out for me: “In computing, FLOPs are a measure of computer performance. Node performance…” My mind wanders a bit before I tune back in. “A petaflop is a measure of a computer’s processing speed and can be expressed as a quadrillion, or thousand trillion, floating point operations per second. A thousand teraflops. 10 to the 15th power FLOPs.” Etc. etc.

He sees my head spinning so, mercifully, he simplifies it. “Basically, MareNostrum 4 is 10 times more powerful than MareNostrum 3.” OK, I can relate to that, but I one-up him anyway: how many times more powerful is it than my 2016 ThinkPad X1 Carbon laptop? He laughs. “About 11,000 times.” Gulp.
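
The multipliers check out against the specs quoted earlier (a quick verification; the laptop figure is simply what the "11,000 times" quip implies):

```python
mn1 = 42.35e12      # MareNostrum 1, 2004
mn4 = 13.7e15       # MareNostrum 4, 2017
print(mn4 / mn1)    # ~323, matching "322 times the speed of the original"
print(mn4 / 11000)  # ~1.2 teraflops: the laptop speed implied by his answer
```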

It’s Really About the Workloads

What kinds of workloads require the type of computing power found in the MareNostrum cluster? There are a lot, it turns out. Because HPC systems deliver results in a fraction of the time of a single desktop or workstation, they are of increasingly vital interest to researchers in science, engineering and business. They are all drawn by the possibility of solving sprawlingly complex problems in their respective fields.

Over the years, MareNostrum has been called on to serve more than 3,000 such projects. On any given day, as the Catalonian sun streams through the stained-glass windows of Torre Girona, MareNostrum manages mountains of data and spits out valuable nuggets of insight to a staff of more than 500 that could someday help solve some of humanity’s greatest challenges.

Gavin O’Hara leads Lenovo’s Global Social Content & Community team. He’s been with Lenovo since 2005 and, in 2010, became the second person in the company to do social media. He is a big believer in unselfish brand storytelling and lives by the mantra “people before products.” As Lenovo’s chief storyteller, he scours the Earth in search of the inspiring and the unexpected. In a previous life, he worked as a writer, journalist and musician. Gavin is a Virginia native, a Syracuse University graduate and a long-time North Carolina resident.



AMD Challenges Intel’s Datacenter Dominance with New EPYC Processors – TOP500 News

For the first time in several years, AMD has brought a server chip to market that provides some real competition to Intel and its near total domination of the datacenter market. The new AMD silicon, known as the EPYC 7000 series, comes with up to 32 cores, along with a number of features that offer some useful differentiation against its Xeon competition.

The new AMD processors are broadly aimed at the cloud and datacenter markets, including the high performance computing space. With regard to the latter, EPYC is going to have some challenges in HPC environments, but AMD definitely has a case to make for its use there. Before we dive into that subject, let’s look at the feature set of the new products.

The EPYC processors launched this week come with 8 to 32 cores, and like their Xeon rivals, can execute two threads per core. AMD has decided to offer only single-socket and dual-socket versions, leaving the much smaller quad-socket-and-above market to Intel.

Clock frequencies don’t vary all that much across the range of EPYC SKUs; they start at 2.0 GHz and top out at 2.4 GHz. The frequencies aren’t necessarily higher at the lower core counts, as one might expect, and the same holds true for the max boost clock frequencies.

EPYC also features a new interconnect known as the Infinity Fabric, which takes the place of AMD’s HyperTransport bus on the old Opterons. Except in this case, the fabric is used to connect the internals of the EPYC MCM (the individual dies that make up the chip) as well as the memory and the processors themselves (in a dual-socket setup). Socket-to-socket communication is up to 152 GB/second, while memory bandwidth tops out at 171 GB/sec.

Across the EPYC product set, AMD is claiming significantly higher integer performance, 21 to 70 percent higher, compared to comparably priced Xeon Broadwell processors, based on SPECint_rate_base2006. And for the top-end 32-core EPYC 7601 chip, AMD says its floating point performance is 75 percent higher than that of Intel’s Broadwell E5-2699A v4 processor, based on SPECfp_rate_base2006.

No doubt, some of the better performance is due to the generally higher core counts of the EPYC parts compared to the comparably priced Xeon Broadwell SKUs. But that’s sort of beside the point. The real issue is that, for the most part, EPYC processors will not be competing against Broadwell, but rather against Intel’s new Skylake Xeon processors, which are expected to launch in July.

The Skylake design should offer better overall performance than Broadwell. More importantly, Skylake will support the AVX-512 instruction set, which will boost vector math performance (both integer and floating point) significantly compared to its predecessor. So AMD’s performance-per-dollar comparisons will have to be revisited once Skylake launches, but it’s reasonable to assume that Intel’s top-end chips will outrun the EPYC 7601 in floating point performance, even if AMD manages to offer more value.

AMD does appear to have a clear advantage in memory support. Each EPYC processor is equipped with eight memory channels, which support up to 16 DIMMs of DDR4 DRAM at speeds up to 2,666 MHz. So each socket can access up to 2 TB. On a dual-socket system, that doubles to 4 TB. Two EPYC 7601 processors in a server deliver 146 percent more bandwidth on the STREAM benchmark than a comparable Broadwell Xeon box. And even though Skylake Xeons will supposedly support six memory channels to Broadwell’s four, it looks like EPYC’s memory advantage will prevail for the time being.
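
The DIMM arithmetic implied by those numbers (a sketch; the 128 GB module size is an inference from the stated totals, not a spec quoted in the article):

```python
channels_per_socket = 8
dimms_per_channel = 2      # 16 DIMMs per socket, as stated
dimm_gb = 128              # inferred: 2 TB / 16 DIMMs
print(channels_per_socket * dimms_per_channel * dimm_gb)   # 2048 GB = 2 TB
```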

EPYC’s support for a bigger memory footprint, and by extension, higher bandwidth, is designed to offer more performance for data-demanding applications, which are particularly sensitive to the worsening bytes/flops (or ops) ratio of modern processors. AMD’s calculation here is that for most datacenter applications these days, memory access, rather than compute, is the limiting factor. The bigger memory footprint also makes the single-socket EPYC solution more attractive, since many customers often populate the second socket solely for the purpose of adding more memory.
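
The bytes-per-flop (machine balance) ratio mentioned above is just memory bandwidth divided by peak compute. A sketch, with a hypothetical peak-flops figure as a placeholder rather than a quoted spec:

```python
def bytes_per_flop(mem_bw_gb_s, peak_gflops):
    """Machine balance: bytes of memory traffic available per flop."""
    return mem_bw_gb_s / peak_gflops

print(bytes_per_flop(171, 600))   # ~0.29 B/F for a hypothetical ~600 GF/s socket
```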

The EPYC processor also offers an ungodly amount of PCIe support: 128 lanes per socket, compared to the expected 48 lanes for the Skylake Xeon processor. 128 lanes is enough to attach four to six GPUs or up to 24 NVMe drives. This also buttresses the case for single-socket servers, since, once again, you can avoid using the other socket to get access to additional devices. In fact, in a dual-socket configuration, you get the same 128 PCIe links, since the Infinity Fabric uses 64 of the PCIe links to connect to the other processor.
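
The lane arithmetic behind those attach claims (an interpretive sketch; the article does not spell out the lane budgeting):

```python
lanes = 128
print(lanes // 16)   # 8 x16 GPU slots in principle; the 4-6 GPUs cited
                     # presumably leave lanes for storage and other I/O
print(lanes // 4)    # 32 x4 devices in principle, vs the 24 NVMe drives cited
```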

In summary, while even the fastest EPYC processors are unlikely to outperform the top Skylake parts in pure computational horsepower, from a performance-per-dollar or performance-per-watt-per-dollar standpoint, they may be extremely competitive. And for memory capacity and performance, as well as PCIe connectivity, they will outshine their Intel counterparts. Apparently, that was enough to attract Baidu and Microsoft, who are early customers of record.

For HPC use, EPYC may appear to be something of a tradeoff. It’s worth considering, though, that in 2017, the cheapest and most efficient flops are found on GPUs or other manycore processors, and not on multicore CPUs (with the caveat that not all flops are equally accessible to every application across these platforms). In addition, for many HPC applications, memory access is the most critical bottleneck.

With that in mind, AMD does have a high performance story to tell. It’s regrettable that the company did not use the recent ISC conference to tell it. Instead, the EPYC launch was announced in Austin, Texas, during the company’s Financial Analyst Day on June 20, and no one from the server side was dispatched to Frankfurt, Germany this year. (AMD did talk about their new Radeon Instinct GPUs for deep learning work at ISC, and we’ll be reviewing those in an upcoming article.)

It’s certainly understandable that AMD is focusing on the cloud and hyperscale space for the initial EPYC launch, given that it represents a bigger and faster growing market than that of HPC. But as Intel discovered a while ago, being a leader at the high end of the market has downstream benefits as well.

The next time the HPC faithful are gathered in large numbers will be in November at SC17, and by that time the Skylake Xeon processors will be available for head-to-head comparisons on real applications. It would serve AMD well to be ready to talk about its HPC ambitions for EPYC at the Denver event.


Lenovo unveils world’s largest Intel-based supercomputer – BetaNews

Lenovo has revealed what it says is part of the next generation of supercomputers.

At the International Supercomputing Conference in Frankfurt, the company confirmed it has completed the delivery and implementation of the world’s largest Intel-based supercomputer at the Barcelona Supercomputing Center (BSC).

Called MareNostrum 4, the 11.1 petaFLOP supercomputer will be housed in the world’s “most beautiful data center” at the Chapel Torre Girona at the Polytechnic University of Catalonia in Barcelona. There it will be used to power a number of scientific investigations, ranging from human genome research, bioinformatics and biomechanics to weather forecasting and atmospheric composition.

The system is powered by more than 3,400 nodes of Lenovo’s next-generation servers, featuring Intel Xeon scalable processors, interconnected with more than 60 kilometers of high-speed, Intel Omni-Path Technology 100 Gb/s network cabling.

The new system has already staked a claim to be one of the biggest in the world, currently listed at number 13 on the TOP500 list, released today, and Lenovo says it will also continue to grow over time.

“From the lab to the factory, to the on-site implementation teams, the delivery of a system of this size and complexity demands a superior level of integration and skill,” says Madhu Matta, VP & GM of High Performance Computing and Artificial Intelligence at Lenovo. “It requires a focus on a holistic customer experience that very few companies are capable of delivering.”

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.


DAVIDE Supercomputer Named to TOP500, Green500 Lists – HPCwire (blog)

FRANKFURT, Germany, June 21, 2017 At the International Supercomputing Conference 2017, IBM Business Partner and OpenPOWER Foundation member, E4 Computer Engineering, the Italian technology provider of leading-edge solutions for HPC, data analytics and AI, announced that D.A.V.I.D.E. (Development for an Added Value Infrastructure Designed in Europe), a multi-node cluster powered by IBM POWER8 processor technology with NVIDIA Tesla P100 GPU accelerators and NVIDIA NVLink interconnect technology, entered the prestigious TOP500 list.

Twice a year, Top500.org publishes the TOP500 and Green500 lists. The TOP500 ranks supercomputing environments by performance capabilities, as determined by the Linpack benchmark, and recognizes the vendors and technologies that power the most powerful data intensive environments in the world. The Green500 list ranks the top 500 supercomputers in the world by energy efficiency.

D.A.V.I.D.E., developed within the Partnership for Advanced Computing in Europe (PRACE), provides a compelling solution for workloads with highly parallelized code and demanding memory bandwidth requirements such as weather forecasting, QCD, machine learning, computational fluid dynamics and genomic sequencing.

The supercomputer represents the third generation of the Pre-Commercial Procurement project for the development of a Whole-System Design for Energy Efficient HPC, and its innovative design uses the most advanced technologies to create a leading-edge HPC cluster that provides powerful performance, low power consumption and ease of use.

D.A.V.I.D.E. was built with best-in-class components. The machine has a total of 45 nodes connected via InfiniBand, with a total peak performance of 990 teraflops and an estimated power consumption of less than 2 kW per node. Each node is a 2U form factor and hosts two IBM POWER8 processors with NVIDIA NVLink and four Tesla P100 data center GPUs, with the intra-node communication layout optimized for best performance. Nodes are connected with an efficient EDR 100 Gb/s network.
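
A rough consistency check on the quoted peak (the ~5.3 teraflops FP64 figure for an NVLink Tesla P100 is an assumption from NVIDIA's public specs, not from this announcement):

```python
nodes = 45
gpus_per_node = 4
p100_fp64_tf = 5.3    # assumed per-GPU FP64 peak
print(nodes * gpus_per_node * p100_fp64_tf)   # ~954 TF from the GPUs alone;
                                              # the POWER8 CPUs supply the rest
```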

The multi-node cluster was fully configured in April 2017 at E4’s facility in order to perform initial testing, running baseline performance, power and energy benchmarks using standard codes in an air-cooled configuration. D.A.V.I.D.E. is currently available for a select number of users for porting applications and profiling energy consumption.

A key feature of the multi-node cluster is an innovative technology for measuring, monitoring and capping the power consumption of the node and of the whole system, through the collection of data from the relevant components (processors, memory, GPUs, fans) to further improve energy efficiency. The technology has been developed in collaboration with the University of Bologna.

“We are delighted to have reached this prestigious result and to be included in the TOP500 list. The team worked very hard to design and develop this prototype and is very proud to see the system up and running; we look forward to seeing it fully available to the scientific community,” said Cosimo Gianfreda, CTO, E4 Computer Engineering. “With our work we have demonstrated that it is possible to integrate cost-effective technologies to achieve high performance and significantly improve energy efficiency. We thank all our partners for the close collaboration that contributed to this great achievement.”

“HPC and AI are converging, and the D.A.V.I.D.E. supercomputer will help the scientific community to run both kinds of workloads on an accelerated system,” said Stefan Kraemer, Director of HPC for EMEA at NVIDIA. “Energy-efficient accelerated computing is the only way to reach the ambitious goals Europe has set for its HPC future.”

About E4 Computer Engineering

Since 2002, E4 Computer Engineering has been innovating and actively encouraging the adoption of new computing and storage technologies. Because new ideas are so important, we invest heavily in research and hence in our future. Thanks to our comprehensive range of hardware, software and services, we are able to offer our customers complete solutions for their most demanding workloads on: HPC, Big-Data, AI, Deep Learning, Data Analytics, Cognitive Computing and for any challenging Storage and Computing requirements. E4. When Performance Matters.

Source: E4 Computer Engineering


US Slips in New Top500 Supercomputer Ranking – IEEE Spectrum

Photo: CSCS. The Piz Daint supercomputer, housed at the Swiss National Supercomputing Center, edged U.S. supercomputers out of the top three positions.

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having enjoyed that first-place ranking since June of 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China’s National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still enjoys the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy’s Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

Not since 1996 has a U.S. supercomputer not made it into one of the first three slots on the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped it from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all the creations of U.S. companies.

Even the second-ranking Tianhe-2 is based on Intel processors and co-processors. It’s only the TaihuLight that is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest-ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold, that is, to have a computer that can perform 10¹⁸ floating-point operations per second. China hopes to do so by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing’s for sure: it’ll be a horse race worth watching.


Championship 2017/18: Super Computer predicts table after five games of new season – talkSPORT.com

The Championship fixtures for 2017/18 have been announced and, here at talkSPORT, we cannot wait to get the season started.

Kick-off may still be around two months away, but it does not stop supporters from dreaming about how their side will start the campaign.


The first game you look for is usually the season opener, followed by the final match, as well as the derby clashes home and away, plus meetings with the newly-promoted sides.

Another thing is the first month or so of fixtures – how your side's start could determine the way the whole season pans out, whether it could see them pushing for the automatic spots, a battle for a play-off place or scraping for points and playing catch up near the bottom.

Well, no fear – talkSPORT has done the hard work for you.


We have fed the data into the super computer, assessing the opening five rounds of the second tier, with predicted rankings. Bear in mind plenty can change between now and the start of the season, as the transfer window opens and managers sort out squads.

According to our system, Sunderland will feel the full force of a late managerial appointment, play-off finalists Reading will have a slow start, while Harry Redknapp will have his Birmingham side well prepared.

Of course, the standings above have been collated just for fun; it is interesting to speculate, but as we all know, football has a funny way of turning expectations on their head.

Click the right arrow, above, to see how the Championship table might look after five games and comment with your predictions below…

talkSPORT and talkSPORT 2 have exclusive radio rights to the Sky Bet EFL Championship, League One and League Two for the next three seasons.

The talkSPORT network will be the only place to hear 110 regular season EFL matches as well as the play-off semi-finals and finals – read more here.


China still has the world’s fastest supercomputer, but the US wants to change that – Recode

China holds the top two spots for fastest computers in the world, and Switzerland holds the third, with the U.S. in the fourth, fifth and sixth spots.

The Top500 list of the most powerful supercomputers in the world was released yesterday at the 2017 International Supercomputing Conference in Frankfurt, Germany.

But the U.S. might not miss its top spot for long. The Department of Energy awarded six companies a total of $258 million last Thursday to further the research and development of the world’s first exascale supercomputer. There are no computers that powerful today.

The U.S. formerly held the third spot, but this time it was edged out by a system from the Swiss National Supercomputing Centre, which moved up from eighth place. This is only the second time in 24 years of compiling the Top500 list that the U.S. did not have a computer place in one of the top three positions.

These computers process at petascale speeds, meaning their capabilities are measured in terms of one quadrillion (1,000,000,000,000,000) calculations per second. To put that in perspective, consumer laptops now operate at gigascale, which is one billion calculations per second.

The U.S. companies that received government funding (Hewlett Packard, Intel, Nvidia, Advanced Micro Devices, IBM and Cray) will all work to solve problems in energy efficiency, reliability and overall performance of a national exascale computer system.

An exascale computer is capable of processing a quintillion (1,000,000,000,000,000,000) calculations per second. That’s about a billion times more powerful than a gigascale consumer laptop.
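
Taking the gigascale laptop figure above at face value, the scale gap works out as follows (a one-line check):

```python
giga, exa = 1e9, 1e18
print(exa / giga)   # 1e9: an exascale machine is ~a billion laptop-equivalents
```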

Exascale-level computing would allow scientists to make extremely precise digital simulations of biological systems, which could uncover answers to pressing questions like climate change and growing food that can withstand drought.

“As you develop models that are more sophisticated, that include more of the physics, chemistry and environmental issues that are important in predicting the climate, the computing resources you need increase,” said Thom Dunning, a chemistry professor at the University of Washington and the co-director of the Northwest Institute for Advanced Computing.

Chemists are leading a lot of the advances in computing power, since advanced biological modeling requires really powerful processing. With more detailed biological modeling, chemists can, for example, learn how plant cells react to drought, which can help to better engineer crops, a project Dunning is working on with his research group.

The more powerful the computer, the more realistic the models are, which in turn provide scientists with more reliable predictions about the future and more concrete recommendations about what companies and governments need to do.

Exascale computing would also have a tremendous impact on the country’s national security. The National Security Agency and other law enforcement organizations collect more data in their dragnet digital surveillance operations than can often be processed in a timely, meaningful way, according to Dunning. With higher processing power, that data can be analyzed quickly to assess and predict potential threats.

The companies awarded the grants will cover at least 40 percent of the cost of the research projects themselves.

“Creating an exascale computer is well beyond anything that a private company can do on its own,” said Dunning, who added that building an exascale computer is a multibillion-dollar effort.

U.S. investment in building an exascale machine will have benefits beyond just finishing the computer itself. The research and development gleaned along the way will flow down into lower-level systems that will give the U.S. a competitive advantage in terms of making powerful computing much more affordable and accessible, Dunning said.

Here’s a list of the Top 10 most powerful supercomputers in the world. The U.S. holds the most spots on the list, with five supercomputers that made the cut.


US Falls Behind China and Switzerland in Supercomputer Race – Fortune

Staff analyze the Tianhe-1 supercomputer at the National Supercomputing Center on Nov. 2, 2010 in Tianjin, China. Photo: VCG via Getty Images

The U.S. may need a more powerful supercomputer.

Two Chinese supercomputers and an upgraded supercomputer in Switzerland rank ahead of the U.S. in a biannual list of top supercomputers released Monday by the TOP500 organization, which tracks supercomputer speeds.

It is only the second time that the U.S. has been absent from the top three most powerful supercomputers since the organization started compiling the rankings 24 years ago. In the previous ranking, published in November, the top U.S. supercomputer, located at Oak Ridge National Laboratory in Oak Ridge, Tenn., was No. 3.


“The only other time this occurred was in November 1996, when three Japanese systems captured the top three spots,” the organization said in a statement.

But it wasn’t all bad news for the U.S.

The U.S. has five of the top 10 supercomputers on the list, more than any other country. Additionally, the U.S. has 169 supercomputers in the top 500, followed by China with 160.

As for the companies supplying the parts for the supercomputers, Intel (INTC) is the biggest, with 464 of the top supercomputers using its processors. IBM (IBM) and its Power processors are installed in 21 supercomputers, followed by AMD’s (AMD) chips, which are used in six supercomputers.


Nvidia’s (NVDA) GPU chips, which are specialized for heavy data crunching like deep learning, are being used in 91 supercomputers to push them beyond what their conventional processors alone can deliver. For example, the Swiss National Supercomputing Center outfitted its supercomputer with Nvidia’s chips, which caused the machine to double its performance and climb from No. 8 to No. 3 in the supercomputer rankings.


The US falls farther down supercomputer rankings than it’s been in over 20 years – BGR


The United States is competing with China on so many fronts it’s impossible to name them all, but one of the most visible rivalries between the two countries is based on computing power. In the newest TOP500 ranking of the world’s most powerful …


The US Is Investing $258 Million to Build a More Powerful Supercomputer – Futurism

In Brief: For the first time since 1996, the United States is no longer home to one of the three fastest supercomputers in the world. To combat this, the DOE has announced plans to invest $258 million to help develop the next-generation device.

New Tech Race

The 20th century space race ushered in some of the most significant scientific discoveries of the era. Now, the efforts of private companies like SpaceX, Virgin Galactic, and Blue Origin, as well as traditional governmental agencies like NASA, have sparked a new space race that’s bringing about next-level space technologies.

However, Space Race 2.0 isn’t the only technological competition in the world today: the smartest minds across the globe are competing to create the most powerful supercomputer on the planet.

Since 1996, the United States has consistently been home to one of the three fastest supercomputers in the world. Unfortunately for the U.S., that streak has ended, as the Department of Energy’s (DOE) Titan supercomputer has been bumped to the number four slot. The Swiss National Supercomputing Centre’s Piz Daint now holds the bronze following an upgrade involving the addition of Nvidia GPUs.

The U.S. is not taking this bump to fourth place lying down. Last week, the DOE announced that it was making $258 million available to help fund the next big supercomputer.

According to MIT Technology Review, the U.S. government expects to have a system that can perform one quintillion operations per second by 2021. That would be 50 times faster than Titan and 10 times faster than China’s TaihuLight, the current world leader.
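
Checking those multipliers against figures cited elsewhere in this roundup (Titan at 17.6 petaflops, TaihuLight at 93):

```python
exa_pf = 1000           # 1 exaflop = 1,000 petaflops
print(exa_pf / 17.6)    # ~57x Titan
print(exa_pf / 93)      # ~10.8x TaihuLight
```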

Of course, the rest of the world won’t spend the next four years content with what they’ve already created. China is looking to further cement its place at the top of the supercomputing heap by heavily investing in the next generation of supercomputers. The nation is even setting a more ambitious goal for itself than the U.S.: it believes its more powerful machine will be ready by 2020.

Ultimately, this race for the world’s most powerful supercomputer will benefit us all, as the devices will help humanity with everything from healthcare to predicting the weather. Truly, there are no losers when innovation is the goal.


