Climatologist keeps an eye on the Super Bowl sky

David Robinson grew up in Tenafly, N.J., harbors a rooting interest in football and is a trained climatologist.

Robinson saw those three unrelated threads of his life (geography, sports and weather) woven uniquely together when the National Football League decided to hold Super Bowl XLVIII at MetLife Stadium on Feb. 2, 2014, the first-ever outdoor, cold-weather site for the game.

Given the heightened interest about game-day conditions for this Super Bowl, Robinson, a Rutgers University professor and New Jersey's state climatologist, has launched a website to help satiate fans' curiosity about all things Feb. 2, climatologically speaking. The site, designed by Robinson's research assistant, Dan Zarrow, is at biggameweather.com.

"When I heard the Super Bowl would be held here, I knew we had to do something weather-related," Robinson said. "We started piecing the data together this past fall."

Then the New Jersey State Police contacted Robinson and asked him to prepare a report on what they might expect from the weather. Robinson and his team at Rutgers gathered data on the weather for the week leading up to game day as well as for Feb. 2 itself. Reliable data stretch back more than 80 years. Robinson's team generated about 50 pages of data, which he used to brief the state police.

Some of the data have been rendered into colorful bar graphs, pie charts and line graphs on Robinson's weather site.

Robinson is quick to note that while meteorology has made significant improvements in recent years, it is impossible to predict the weather for a particular day with any accuracy more than a week or so away from that date.

"Maybe a week ahead you can start to see a potential storm threat, and only a couple of days out at best can you zero in on what the actual conditions are likely to be," he said.

His site shows what has historically occurred on Feb. 2, using data for Newark Liberty International Airport, which is close enough to MetLife Stadium to be representative.

If Feb. 2, 2014, turns out to be a typical Feb. 2, one might expect a temperature of 34 degrees at game time, with winds of 10 miles per hour out of the northwest, and only a 26 percent chance of precipitation: certainly not Miami or Phoenix, but not unbearable.
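Historical climatologies like this boil down to simple summary statistics over every past occurrence of the date. The sketch below is a hypothetical illustration of that kind of calculation (the file name and column names are invented; this is not Robinson's or biggameweather.com's actual code), assuming a CSV of daily observations for Newark Liberty International Airport:

```python
# Hypothetical sketch: summarize every past Feb. 2 from a daily-observations CSV.
# Assumed columns: DATE, TEMP_F (game-time temperature), WIND_MPH, PRCP (inches).
import pandas as pd

df = pd.read_csv("newark_daily.csv", parse_dates=["DATE"])  # assumed file
feb2 = df[(df["DATE"].dt.month == 2) & (df["DATE"].dt.day == 2)]

print("Years of record:", len(feb2))
print("Average temperature (F):", round(feb2["TEMP_F"].mean(), 1))
print("Average wind (mph):", round(feb2["WIND_MPH"].mean(), 1))
# "Chance of precipitation" here is just the share of past Feb. 2s with
# measurable precipitation (>= 0.01 inch).
print("Precipitation frequency:", round((feb2["PRCP"] >= 0.01).mean(), 2))
```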

Read the rest here:

Climatologist keeps an eye on the Super Bowl sky

Affordable Digital Audio: Audioengine D3

NEW YORK (TheStreet) -- We've told you about two stand-alone, super-affordable DACs (digital-to-analog converters) from the company with an attention-grabbing name, and about the newly upgraded version of AudioQuest's DragonFly.

External DACs are designed to take the place of the cheap integrated circuits inside your computer that translate digital audio files into an analog form you can actually listen to.

There is another plug-in USB DAC you'll want to consider: the brand-new Audioengine D3. The company has made its name by offering incredible value with every product it sells. The self-powered A5+ speakers ($399 to $469) are terrific. The new, smaller A2+s (starting at $249) set the standard for desktop speaker systems at any price.

Audioengine's newest converter, the D3, is the third such device designed by the company and the first to plug directly into a computer's USB port. Although it is somewhat similar in size and shape to the DragonFly, it's very different in a number of ways.

First, it's made from aluminum, not plastic. It's small, but solid. There are few frills beyond the cute little carrying case and a headphone adapter (for big plugs) that come in the box. There are no multi-colored lighting schemes, just an LED to show that the D3 is on and another to show when you're listening to a higher-resolution (24/88 or 24/96) digital music file. Those files sound much better than the usual Amazon (AMZN) or Apple (AAPL) iTunes download.
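For context on why those 24/88 and 24/96 files carry more information, here is a quick back-of-envelope comparison of uncompressed PCM data rates (the roughly 256 kbps figure for a typical store download is an assumption about lossy AAC/MP3 encoding, not a number from the review):

```python
# Uncompressed stereo PCM data rate in kilobits per second.
def pcm_kbps(bit_depth, sample_rate_hz, channels=2):
    return bit_depth * sample_rate_hz * channels / 1000

print(pcm_kbps(16, 44_100))   # CD quality: 1411.2 kbps
print(pcm_kbps(24, 88_200))   # 24/88: 4233.6 kbps
print(pcm_kbps(24, 96_000))   # 24/96: 4608.0 kbps
# A typical lossy store download is on the order of 256 kbps (assumption).
```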

The D3 plugs into any free USB port on a PC or Mac. The computer automatically recognizes, installs and configures it. Plug in a set of good headphones (it's also a headphone amplifier) and you're all set.

What sets the D3 apart from the competition is the sound. After a sufficient break-in period (as with most sound gear), I found the Audioengine to sound terrific, especially when it comes to reproducing those low bass notes that are the anchoring foundation of recorded music. Not overblown. In short: a tiny device with big, accurate bass.

The D3 is available through the Audioengine Web site and retails for $189. Add a pair of Audioengine's speakers and a computer source and you'll have yourself one heck of a great sounding, 21st century music system.

-- Written by Gary Krakow in New York.

To submit a news tip, send an email to tips@thestreet.com.

Go here to read the rest:

Affordable Digital Audio: Audioengine D3

Gay hero super-boffin Turing ‘may have been murdered by MI5’

Legendary code-breaker and computing boffin Alan Turing - seen by many as the father of modern computing and credited with a huge contribution to the Allied victory in World War Two - may have been murdered by the British security services, it has been claimed.

The government should open a new inquiry into the death of gay war-time code-breaker, mathematical genius and computer pioneer Alan Turing, including an investigation into the possibility he was murdered by the security services, LGBTI*-rights campaigner Peter Tatchell stated last week in a press release.

The statement continues:

Although there is no evidence that Turing was murdered by state agents, the fact that this possibility has never been investigated is a major failing. The original inquest into his death was perfunctory and inadequate. Although it is said that he died from eating an apple laced with cyanide, the allegedly fatal apple was never tested for cyanide. A new inquiry is long overdue, even if only to dispel any doubts about the true cause of his death.

Turing was regarded as a high security risk because of his homosexuality and his expert knowledge of code-breaking, advanced mathematics and computer science. At the time of his death, Britain was gripped by a McCarthyite-style anti-homosexual witch-hunt. Gay people were being hounded out of the armed forces and the civil and foreign services.

In this frenzied homophobic atmosphere, all gay men were regarded as security risks - open to blackmail at a time when homosexuality was illegal and punishable by life imprisonment. Doubts were routinely cast on their loyalty and patriotism. Turing would have fallen under suspicion.

Mr Tatchell suggests that the "security services" would have feared that Turing might pass critical information to the Soviets, and would have sought to kill him for being homosexual and thus a security risk subject to blackmail. The reference to "security services" and counter-espionage suggests that he has specifically in mind the Security Service itself, also known as MI5 - or perhaps the Secret Intelligence Service (aka MI6), though that organisation is more focused on carrying out espionage abroad rather than preventing it at home.

The idea that British intelligence operatives can or do deliberately set out to assassinate British citizens with official sanction would seem to be poorly supported, other than in the case of certain military operations during the fighting in Northern Ireland. Even the latter would normally have been characterised for the record as combat operations rather than targeted killings. However, such accusations are often made: for example by biz kingpin Mohamed al-Fayed, who alleges that MI6 orchestrated the car crash in which his son Dodi and Princess Diana were killed.

Ironically perhaps, at the time when Mr Tatchell speculates that MI5 may have been murdering Alan Turing for being gay and possibly a Soviet agent, MI5 itself genuinely had been infiltrated at a high level by a ring of Soviet agents, some of whom were in fact gay.

See the article here:

Gay hero super-boffin Turing 'may have been murdered by MI5'

Super Bowl’s weatherman tracks chance for snow on Feb. 2

JENNIFER BROWN/SPECIAL TO THE RECORD

"When I heard the Super Bowl would be held here, I knew we had to do something weather-related," said David Robinson, state climatologist and a Tenafly resident.

More:

Super Bowl's weatherman tracks chance for snow on Feb. 2

NFL predictions for Week 17, computer picks Packers, Eagles, 49ers to win

The Packers will win but won't cover the spread, while the Cowboys and Cardinals will be eliminated if the Odds Shark NFL picks computer is accurate in Week 17.

In a week full of huge spreads, the closest game on paper appears to be the San Francisco 49ers at Arizona Cardinals game, which many bookmakers have listed as a pick'em.

While Odds Shark NFL computer projections don't have the game as a blowout, San Francisco is seen as a clear favorite, winning the game 28 to 23.5.

This would be in line with recent history between these two teams, as San Francisco is 8-1 straight up (SU) and 7-2 against the spread (ATS) in its last nine games against the Cardinals. The projected total of 51.5 would go well OVER the posted total of 42 points in this game.

"Week 17 is always a tough week to handicap because of the various motivations of the teams, some needing a win, some resting starters, some auditioning players for next season," said Jack Randall of OddsShark.com.

"For bubble teams like the Chargers, who we still have a bet on the Super Bowl from Week 1, their motivation may change during Sunday because if early teams win, then they will be eliminated before their game starts."

According to the computer picks, one of the biggest values on the board this week could be the Kansas City Chiefs visiting the San Diego Chargers. Kansas City is a 9.5-point underdog at the betting window, but the computers are picking the Chiefs to win the game outright by a score of 29.1 to 24.

This is another game projected to go comfortably over the total of 44.5 with a combined score of 53.1.
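For readers following the numbers, here is a minimal sketch (not Odds Shark's model) of how a projected final score translates into an against-the-spread pick and an over/under pick; the figures are the Chiefs-Chargers projections quoted above:

```python
# spread_a is the line on team A: positive if A is the underdog, negative if favored.
def grade_projection(score_a, score_b, spread_a, posted_total):
    covers_spread = (score_a + spread_a) > score_b   # does team A cover?
    combined = score_a + score_b
    goes_over = combined > posted_total              # does the game go over?
    return covers_spread, goes_over, round(combined, 1)

# Chiefs (+9.5 underdogs) projected to beat the Chargers 29.1-24; posted total 44.5.
print(grade_projection(29.1, 24.0, +9.5, 44.5))      # (True, True, 53.1)
```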

The line was finally set on the Packers-Bears matchup after Aaron Rodgers was declared fit to play. The Packers are picked to win by a point, but bettors need them to cover a 2.5-point spread.

The New Orleans Saints have been virtually perfect at home this season, going 7-0 SU and 6-0-1 ATS, and are one more home win away from clinching a playoff spot hosting the Tampa Bay Buccaneers this week.

Read more from the original source:

NFL predictions for Week 17, computer picks Packers, Eagles, 49ers to win

Delta will honor tickets bought at super cheap, mistaken prices

Originally posted here:

Delta will honor tickets bought at super cheap, mistaken prices

Final Super 25 high school football rankings

Result: QB Luke Bishop threw three TD passes and ran for 133 yards in a 38-10 win Saturday vs. Brenham in the 4A-II championship at Arlington. Next: Season complete.

Dropped out: No. 4 Katy, Texas.

The Super 25 rankings are compiled by USA TODAY Sports' Jim Halley, based on teams' success and strength of schedule. These are the final rankings of the season.

The next 25

26. Aquinas Institute, Rochester, N.Y. (13-0); 27. Dowling, West Des Moines, Iowa (12-0); 28. Salpointe Catholic, Tucson (13-0); 29. Cedar Hill, Texas (14-2); 30. Bishop Gorman, Las Vegas (13-2); 31. Scott County, Georgetown, Ky. (15-0); 32. John Curtis Christian, River Ridge, La. (10-2); 33. Kimberly, Wis. (15-0); 34. Paramus (N.J.) Catholic (10-2); 35. Chaminade, West Hills, Calif. (14-2); 36. Clarkston, Mich. (13-1); 37. Valor Christian, Highlands Ranch, Colo. (13-1); 38. Maryville, Tenn. (15-0); 39. Naperville Central, Naperville, Ill. (12-3); 40. Lee's Summit West, Lee's Summit, Mo. (13-1); 41. Mentor, Ohio (13-2); 42. DePaul Catholic, Wayne, N.J. (10-2); 43. South Dade, Homestead, Fla. (14-1); 44. Bentonville, Ark. (11-2); 45. Norcross, Ga. (13-2); 46. St. Joseph Prep, Philadelphia (12-3); 47. Oscar Smith, Chesapeake, Va. (14-1); 48. Archbishop Wood, Warminster, Pa. (13-2); 49. Central Catholic, Pittsburgh (15-1); 50. American Heritage, Plantation, Fla. (14-1).

Read the rest here:

Final Super 25 high school football rankings

Super rich benefit from ‘status quo bias’

By H. Roger Segelken

(Phys.org) Income inequality between the super-rich and the rest of us, and a sorry record of progressive policy initiatives from Congress, can all be traced to a built-in "status quo bias" in our political system, according to Cornell's Peter K. Enns and colleagues at three universities.

They analyzed the behavior of Congress and economic trends over the past 70 years for their article, "Conditional Status Quo Bias and Top Income Share: How the U.S. Political Institutions Have Benefitted the Rich," forthcoming in the Journal of Politics.

"Policy change, to ease income inequality and other socioeconomic ills, is made more complicated by the U.S. Senate's filibuster rules," says Enns, an assistant professor of government. "Furthermore, because more policy action is necessary to change the income distribution as inequality increases, the effects of status quo bias grow as inequality rises."

Reports from 2012, showing more than half the nation's total income going to the top 10 percent of earners and one-fifth to the top 1 percent, bear out the political scientists' analysis, which covered the years 1940-2006. Data on "top income share" were easy enough to find and plot on graphs across seven decades of ups (mostly) and downs.

Quantifying the politics of status quo bias required two measures: the so-called "filibuster pivot distance," which gauges the ideological difference between the "median" senator and the filibuster pivot (the senator who would cast the deciding vote to end a filibuster), and the "Congressional policy product," which measures the overall legislative productivity of Congress.

The wider the filibuster-pivot distance, the more difficult it is to enact policy change that reduces income inequality, the political scientists asserted. Except for some deviation between 1958 and 1976, the 70-year plots of filibuster pivot distance and top income share were similar. The more successful obstructionist filibusters were, the richer the rich became.

Determining Congressional policy product was also conceptually straightforward, especially in recent years, when there wasn't much coming from Capitol Hill. (Policy product output peaked in the mid-1960s to early 1970s, and again in the late 1980s.) When the graph of Congressional policy product was inverted and superimposed on top income share, the ups and downs were eerily alike: Policy-wise, nothing puts distance between the super rich and the rest like a well-maintained status quo.
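A minimal sketch of the kind of comparison described above, using made-up placeholder series rather than the authors' actual data: correlate top income share with the filibuster pivot distance, and with the policy-product series flipped upside down (inverting the graph amounts to negating the series):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1940, 2007)                       # the 1940-2006 study window
top_income_share = rng.random(len(years))           # placeholder data
filibuster_pivot_distance = rng.random(len(years))  # placeholder data
policy_product = rng.random(len(years))             # placeholder data

# How closely do the two 70-year plots track each other?
print(np.corrcoef(top_income_share, filibuster_pivot_distance)[0, 1])
# "Inverted and superimposed" corresponds to comparing against the negated series.
print(np.corrcoef(top_income_share, -policy_product)[0, 1])
```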

Another author of the "status quo bias" paper, the University of Tennessee's Nathan J. Kelly, noted the "nuclear option" recently invoked by the Senate's Democratic majority leadership to curtail filibusters on presidential appointments. Limiting the use of filibusters on appointments "won't have much effect on policy gridlock in Washington," Kelly predicted. "Only the very rich benefit from today's anti-majoritarian, gridlocked government."

Concluded Enns, "Our evidence suggests that the filibuster gets in the way of policy change that could reduce inequality of all kinds, including income inequality. Given the polarized political environment in Congress, significant changes in policy will be difficult without institutional reform."

Read the original post:

Super rich benefit from 'status quo bias'

NAS Home

Before NASA's Curiosity rover could begin its scientific mission on Mars, it first had to get there with a little help from the agency's supercomputers. Read More

Using complex computer models and the Pleiades supercomputer, scientists are taking a closer look at Earth's oceans and sea-ice to understand more about how they work and why they are changing. Read More

As NASA pours both its brainpower and supercomputing power into the new Space Launch System, NAS Division modeling and simulation experts remind us that it really is rocket science. Read More

The NASA Earth Exchange (NEX) is a unique collaborative workspace where Earth scientists can join forces, make new discoveries, and share knowledge to gain a better understanding of our planet. Read More

With the help of one of NASA's largest space telescopes and its most powerful supercomputer, scientists are analyzing observational data gathered from the Kepler mission spacecraft to search the skies for Earth's sister planets. Read More

12.10.13 The NASA Earth Exchange (NEX) website has a new look-and-feel that makes it easier for the Earth science community to explore NEX resources, form collaborations, and share information. Learn about the three NEX user tiers, which provide access to collaboration tools and utilities, datasets and a prototyping sandbox, and supercomputing resources operated by the NAS Division. Visit the NEX website

11.13.13 Some of NASA's best and brightest will showcase more than 30 of the agency's exciting computational achievements at the Supercomputing Conference 2013 (SC13), Nov. 17-22, 2013, in Denver. Highlighted accomplishments include science revelations made during the Mars rover Curiosity's first year on the Red Planet, the Kepler mission's new data-centric strategy for finding new planets, unique insights into the physical mechanisms of galaxy formation, and methods to improve the design of the Space Launch System and next-generation launch pad. Visit the NASA SC13 Website. Press Release

11.12.13 Satellite and computer modeling datasets produced by the NASA Earth Exchange (NEX), a collaborative workspace that relies on NAS Division supercomputing capabilities, will be made available to researchers and educators across the globe through a new agreement with Amazon Web Services (AWS). Read More

09.19.13 System engineers in the NASA Advanced Supercomputing (NAS) Division have upgraded the Pleiades supercomputer and repurposed its original hardware to help meet NASA's ever-increasing requirements for high-performance computing cycles. Read More

We welcome your input on features and topics that you would like to see included on this website.

Go here to see the original:

NAS Home

Supercomputer – Wikipedia, the free encyclopedia

A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation.

Supercomputers were introduced in the 1960s, made initially and, for decades, primarily by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.[2][3] As of November 2013, China's Tianhe-2 supercomputer is the fastest in the world at 33.86 petaFLOPS.

Systems with massive numbers of processors generally take one of two paths: In one approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops) distributed across a network (e.g., the internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server which integrates the task results from all the clients into the overall solution.[4][5] In another approach, a large number of dedicated processors are placed in close proximity to each other (e.g. in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
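As a rough, single-machine illustration of that client/server pattern (a sketch, not any particular distributed-computing project's code), a "server" process splits a problem into many small tasks, hands them to workers standing in for volunteer client computers, and integrates the partial results:

```python
from concurrent.futures import ProcessPoolExecutor

def small_task(chunk):
    # Each "client" computes a partial result for its chunk of the problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    problem = list(range(1_000_000))
    chunks = [problem[i:i + 10_000] for i in range(0, len(problem), 10_000)]
    with ProcessPoolExecutor() as pool:            # local workers stand in for clients
        partials = pool.map(small_task, chunks)
    print("Integrated result:", sum(partials))     # the "server" combines the results
```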

The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.[6][7]

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.[8]

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[9] The CDC 6600, released in 1964, is generally considered the first supercomputer.[10][11]

Cray left CDC in 1972 to form his own company.[12] Four years later, in 1976, he delivered the 80 MHz Cray-1, which became one of the most successful supercomputers in history.[13][14] The Cray-2, released in 1985, was an eight-processor liquid-cooled computer through which Fluorinert was pumped as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[15]

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaflops per processor.[16][17] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[18][19][20] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[21]
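The Message Passing Interface mentioned above is still the standard way such machines are programmed. A minimal sketch using the mpi4py bindings (assumed installed; illustrative only, not Paragon-specific code) passes data around a ring of processes, each standing in for a node:

```python
# Run with e.g.: mpiexec -n 4 python ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id
size = comm.Get_size()          # total number of processes

local_value = float(rank)       # each "node" holds its piece of the data
right = (rank + 1) % size
left = (rank - 1) % size

# Send to the right neighbour while receiving from the left (a ring exchange).
received = comm.sendrecv(local_value, dest=right, source=left)
print(f"rank {rank} received {received} from rank {left}")
```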

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance.[9] However, in time the demand for increased computational power ushered in the age of massively parallel systems.

While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphic units) connected by fast connections.[2][3]

See the original post:

Supercomputer - Wikipedia, the free encyclopedia

Blue Gene – Wikipedia, the free encyclopedia

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS) range, with low power consumption.

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. Blue Gene systems have often led the TOP500[1] and Green500[2] rankings of the most powerful and most power efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[5] The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T.J. Watson Research Center.

At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: The 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other; and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL[6] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 PetaFLOPS mark. The system was built at IBM's plant in Rochester, Minnesota.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[7] At Supercomputing 2006,[8] Blue Gene/L won the prize in all classes of the HPC Challenge awards.[9] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[10]

The Blue Gene/L supercomputer was unique in the following aspects:[11]

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline-double-precision Floating Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
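The quoted figures are easy to cross-check. With two 700 MHz cores per node and the dual-pipeline FPU delivering four floating-point operations per core per cycle (the rate implied by the quoted 5.6 GFLOPS), the node and system peaks work out as follows; the Linpack efficiency line uses the 16-rack, 70.72 TFLOPS result mentioned earlier:

```python
clock_hz = 700e6
flops_per_core_per_cycle = 4            # implied by the quoted node peak
cores_per_node = 2

node_peak = clock_hz * flops_per_core_per_cycle * cores_per_node
print(node_peak / 1e9, "GFLOPS per node")              # 5.6

nodes = 16 * 1024                                      # November 2004 system
system_peak = node_peak * nodes
print(system_peak / 1e12, "TFLOPS theoretical peak")   # ~91.8
print(round(70.72e12 / system_peak, 2), "Linpack efficiency")  # ~0.77
```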

Read this article:

Blue Gene - Wikipedia, the free encyclopedia

IBM Roadrunner – Wikipedia, the free encyclopedia

IBM Roadrunner

Roadrunner components

Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008 to become the world's first TOP500 Linpack sustained 1.0 petaflops system.[2][3]

In November 2008, it reached a top performance of 1.456 petaflops, retaining its top spot in the TOP500 list.[4] It was also the fourth-most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational rate of 444.94 megaflops per watt of power used. The hybrid Roadrunner design was then reused for several other energy efficient supercomputers.[5] Roadrunner was decommissioned by Los Alamos on March 31, 2013.[6] In its place, Los Alamos uses a supercomputer called Cielo, which was installed in 2010. Cielo is smaller and more energy efficient than Roadrunner, and cost $54 million.[6]
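As a rough cross-check of those figures (only approximate, since the Green500 efficiency and the Linpack record are measured under different conditions), the quoted efficiency implies a power draw in the low-megawatt range:

```python
linpack_flops = 1.026e15                 # the first sustained-petaflop run
green500_flops_per_watt = 444.94e6

estimated_power_watts = linpack_flops / green500_flops_per_watt
print(round(estimated_power_watts / 1e6, 2), "MW")   # roughly 2.31 MW
```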

IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration.[7][8] It was a hybrid design with 12,960 IBM PowerXCell 8i[9] and 6,480 AMD Opteron dual-core processors[10] in specially designed blade servers connected by Infiniband. The Roadrunner used Red Hat Enterprise Linux along with Fedora[11] as its operating systems and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.[12]

Roadrunner occupied approximately 296 server racks[13] which covered 560 square metres (6,000 sq ft)[14] and became operational in 2008. It was decommissioned March 31, 2013.[13] The DOE used the computer to simulate how nuclear materials age in order to predict whether the USA's aging arsenal of nuclear weapons is both safe and reliable. Other uses for Roadrunner included the science, financial, automotive and aerospace industries.

Roadrunner differed from other contemporary supercomputers because it was the first hybrid supercomputer.[13] Previous supercomputers used only one processor architecture, since that was easier to design and program for. To realize the full potential of Roadrunner, all software had to be written specially for this hybrid architecture. The hybrid design consisted of dual-core Opteron server processors manufactured by AMD using the standard AMD64 architecture. Attached to each Opteron core was a PowerXCell 8i processor manufactured by IBM using Power Architecture and Cell technology. As a supercomputer, Roadrunner was considered an Opteron cluster with Cell accelerators, as each node consisted of a Cell attached to an Opteron core, with the Opterons connected to each other.[15]
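The processor counts quoted above are consistent with that one-Cell-per-Opteron-core layout:

```python
opteron_chips = 6_480
opteron_cores = opteron_chips * 2        # dual-core Opterons
cell_processors = 12_960
print(opteron_cores == cell_processors)  # True: one PowerXCell 8i per Opteron core
```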

Roadrunner was in development from 2002 and went online in 2006. Due to its novel design and complexity, it was constructed in three phases and became fully operational in 2008. Its predecessor was a machine also developed at Los Alamos, named Dark Horse.[16] That machine was one of the earliest hybrid-architecture systems, originally based on ARM and later moved to the Cell processor. It was an entirely 3D design that integrated 3D memory, networking, processors and a number of other technologies.

The first phase of the Roadrunner was building a standard Opteron based cluster, while evaluating the feasibility to further construct and program the future hybrid version. This Phase 1 Roadrunner reached 71 teraflops and was in full operation at Los Alamos National Laboratory in 2006.

Phase 2 known as AAIS (Advanced Architecture Initial System) included building a small hybrid version of the finished system using an older version of the Cell processor. This phase was used to build prototype applications for the hybrid architecture. It went online in January 2007.

More:

IBM Roadrunner - Wikipedia, the free encyclopedia

macpro-AFPrelax-191213.jpg

December 19, 2013

The super-powerful desktop computer will go on sale today. - AFP/Relaxnews pic, December 19, 2013.

The wait is over. Apple's super-sleek, super-fast professional desktop will be launching in time for Christmas.

The fastest and most powerful personal computer in Apple's 37-year history will be going on sale today.

Apple has been teasing its design and features since June and promoting its endless scope for customisation.

And now, for $2999 (RM9,700), consumers will be able to snap up the "entry-level" model which boasts a 3.7GHz quad-core Intel Xeon E5 processor, two AMD FirePro D500 workstation GPUs (each one has 2GB of dedicated RAM) plus 256GB of on-board flash storage.

For $3999 (RM13,000) customers can upgrade all of that to a 3.5GHz six-core processor, an extra 1GB of dedicated RAM for each graphics card and 16GB of RAM, but the same 256GB of flash storage.

But for those that need the ultimate in speed and performance and for whom money is no object, a 12-core processor, AMD FirePro D700 GPUs with 6GB of RAM and 64GB of system RAM can be specified, as can 1TB of flash storage. In other words, the computational equivalent of ordering a BMW. Just visit Apple's dedicated Mac Pro site.

Apple says that the computer will not just be available to order online from today, it can also be bought from its retail stores meaning that for some people it could be a very happy Christmas. But don't forget to buy a monitor, keyboard and mouse too, none of which are included as standard. - AFP/Relaxnews, December 19, 2013.

Visit link:

macpro-AFPrelax-191213.jpg

Supermicro® Expands Range of Energy Efficient VDI Server Solutions for NVIDIA GRID

- New Enterprise-Class GRID K1/K2 SuperServers Offer Customers More Configurations for Optimized Performance, Scalability and TCO

SAN JOSE, Calif. -- Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, offers the industry's widest range of enterprise-class VDI server solutions optimized for NVIDIA GRID(TM) graphics-accelerated virtual desktops and applications. With high-performance virtual GPU technology enabling a new era in server-side computing, it is increasingly important to select platforms that provide optimal cooling alongside power efficiency to maximize compute density and overall reliability. Supermicro's years of design and engineering expertise have yielded high-density GPU server platforms that offer the widest variety of flexible configurations in 1U, 2U, 4U/Tower, FatTwin(TM) and SuperBlade(R) computing solutions.

The company's new NVIDIA GRID based server solutions take advantage of this to maximize user density and provide an uncompromised user experience in large-scale virtualized environments. These application-optimized systems deliver maximum productivity for Knowledge Workers and Power Users (GRID K1) or accelerated compute performance for Engineers and Designers (GRID K2).

For a limited time (through January 31, 2014) and while supplies last*, Supermicro is offering a trial system discount on select NVIDIA GRID based VDI server solutions at http://www.supermicro.com/GRID_VDI. Additional systems supporting NVIDIA GRID K1/K2 include the new 4U 8x GPU SuperServer(R) (SYS-4027GR-TR) and 2x GPU SuperBlade(R) (SBI-7127RG-E, http://www.supermicro.com/products/superblade/module/SBI-7127RG-E.cfm) supporting 20x GPUs + 20x CPUs per 7U.

"Supermicro provides Enterprise and Cloud Data Center customers with the best and widest range of energy efficient, performance optimized server solutions to help lower overall TCO and increase profit margins," said Charles Liang, President and CEO of Supermicro. "As computing resources and applications shift from office environments to the Data Center, IT experts that employ Supermicro systems like our high-density 1U 4x GPU SuperServer or cooling and resource optimized 4U FatTwin will win big. Our extensive selection of NVIDIA GRID certified platforms are exactly optimized for any scale application or virtualized workload, ensuring companies receive maximum performance per watt, per dollar, per square foot from their investment."

New NVIDIA GRID VDI Certified Systems:

1U SuperServers -- 2x Xeon E5-2680 V2, 16GB DDR3-1866, 2x Intel(R) 520 2.5" 240GB SATA 6Gb/s MLC SSD:
SYS-1027GR-TR2-NVK1 (1x K1)
SYS-1027GR-TR2-NVK1 (1x K2)
SYS-1027GR-TR2-2NVK1 (2x K1)
SYS-1027GR-TR2-2NVK2 (2x K2)

2U SuperServers -- 2x Xeon E5-2680 V2, 16GB DDR3-1866, 2x Intel(R) 520 2.5" 240GB SATA 6Gb/s MLC SSD:
SYS-2027GR-TR-2NVK1 (1x K1)
SYS-2027GR-TR-2NVK2 (2x K2)
SYS-2027GR-TR-3NVK2 (3x K2)

4U/Tower Servers -- 2x Xeon E5-2680 V2, 16GB DDR3-1866, 2x Intel(R) 520 2.5" 240GB SATA 6Gb/s MLC SSD:
SYS-7047GR-TPRF-2NVK1 (2x K1)
SYS-7047GR-TPRF-2NVK2 (2x K2)
SYS-7047GR-TPRF-3NVK2 (3x K2)

4U 4-Node FatTwin(TM) SuperServers -- (each node) 2x Xeon E5-2680 V2, 1x K1 or K2 GPU, 16GB DDR3-1866, 2x Intel(R) 2.5" 520 240GB SATA 6Gb/s MLC SSD:
SYS-F627G2-FT+-NVK1 (4x K1)
SYS-F627G2-FT+-NVK2 (4x K2)

*Visit http://www.supermicro.com/GRID_VDI for complete information on Supermicro's NVIDIA GRID based solutions and Terms and Conditions for the special limited time GRID system trial offer.

Follow Supermicro on Facebook (https://www.facebook.com/Supermicro) and Twitter (http://twitter.com/Supermicro_SMCI) to receive their latest news and announcements.

Go here to read the rest:

Supermicro® Expands Range of Energy Efficient VDI Server Solutions for NVIDIA GRID