15 Vintage Computer Ads That Show How Far We've Come – Small Business Trends

Computers have gone through some major transformations over the last handful of decades. Old computer advertisements can really showcase some of these changes, from the 1950s when huge computers were just for industrial and business users to the days when they became ubiquitous for the average consumer.

Looking at old computer ads makes you realize just how far technology has come in recent decades. Take a look at our collection for a trip down memory lane.

Back in the 1950s, computers didn't really even resemble what we think of as computers today. But this vintage ad from Ford Instrument Co. shows how massive and different these devices used to be.

Back in the 1960s, computer ads were less about showing the actual computers and more about exploring what they could do. They were also pretty much aimed at businesses, since most people didn't see a need for computers in their homes just yet. This ad from IBM explored the idea of giving small businesses access to major scientific advances, like those used in the Apollo missions.

This computer advertisement from the 1970s was aimed at business users. It touted the Nixdorf computer as a way to make a business more efficient. With a simple visual and a ton of copy, you can really see how large and completely different business computers used to look.

Today, small businesses can't get by without a computer. So it seems strange that back in the 1980s, companies had to actually be convinced that having a computer could benefit them. But that's exactly the purpose of this IBM ad. The commercial is aimed squarely at small businesses and shows how many different types of industries could benefit from adding just one device.

"I may never use a typewriter again!" That's just one of the interesting lines that dates this 1980s ad for Radio Shack's line of computers. It also touts how affordable the models are (starting at nearly $3,000) and how small they are (even though they're very boxy by today's standards).

This 1980s computer advertisement for the Atari 520ST includes some incredible graphics and demonstrates the power of an old school gaming computer. In addition to showcasing the ability to play games, this commercial also points out how such a device can help users learn to code.

Can the whole family benefit from having a personal computer at home? Now, this seems like an obvious yes. But back in the 1980s people had to be convinced about the versatility of such a device. That's the basic premise behind this "day in the life" ad for the Commodore 64. It shows the computer being used by a dad to check the stock market, a mom to pay bills, and kids to learn letters and numbers.

The 1980s were also the first decade when people started to really think about having a computer at home on a wide scale. To demonstrate how powerful this investment could be, this Texas Instruments ad set out to translate the power of a computer into what it could mean for different individuals.

This 1980s Timex ad clearly shows an image of the actual computer being sold. This one happens to be a fairly small and bare bones model that was really just used for typing. And the low price reflects that.

If you want to see a very 1980s commercial, it doesn't get better than this one for the Sinclair ZX Spectrum. Aside from the very timely visuals, the ad is also noteworthy for the features it plays up, like the computer's keyboard.

This Apple commercial that aired during the 1984 Super Bowl is perhaps one of the most iconic tech ads of its generation. The ad introduced the Macintosh computer but didn't actually feature the device. Instead, it caught people's attention by referencing George Orwell's novel 1984.

Computers steadily improved on their earlier models. This ad announces a new version of the Commodore 64 with 128KB of memory instead of 64KB, so the company called it the Commodore 128.

In addition to actual technological advances, computers were also starting to gain traction with new audiences by the 1990s. Not only had they become popular with individuals, not just businesses, they were also catching on with young people, not just adults. This collection of ads from Dell showcases how the company marketed desktop computers to students.

And the progression continued even further. This Apple ad shows how the company marketed its products as helpful for kids and their education.

And finally, computers were officially aiming to reach people of all ages with this Rugrats-themed Gateway commercial from the 1990s. The ad showcased how software programs for kids could help them learn and have fun.


Leicester City’s chances in title race with Liverpool and Man City predicted by supercomputer – Leicestershire Live

Leicester City are set to miss out on the Premier League title but will secure a Champions League place, according to bookmaker SportNation.bet's latest Super Computer.

The table, which strives to be the most accurate predictor of the final standings, features two different markets: title odds for the top 10 and relegation odds for the bottom half.

The Foxes are tipped to end the season in third place in the league, behind Liverpool, the outright favourites for the title, and Manchester City.

Leicester are a 40/1 shot for the title, meaning they find themselves below Pep Guardiola's side (6/1) despite still holding a point advantage over the defending champions.

It seems there's no stopping Liverpool though, who are 1/8 on to end their long wait for a league title this season; they currently hold a 10-point advantage over Brendan Rodgers' side ahead of the two teams' meeting on Boxing Day.

Elsewhere, Chelsea just beat Tottenham to fourth, having widened the gap on their North London rivals to four points with Sunday's victory over them.

Manchester United and Wolves are tipped for sixth and seventh, a repeat of their final positions from last season, while Arsenal and Everton - who are under new management - are set for an almighty climb from 11th and 15th to eighth and ninth respectively, with Sheffield United rounding off the top half.

At the other end, it doesn't look good for Norwich, who are 1/12 for a return to the Championship, while Watford - despite picking up their first home win of the season last weekend - are still odds-on (3/11) for relegation.

Aston Villa (5/7), who will be missing key man John McGinn for a lengthy period due to a fractured ankle, finish off the teams destined for the drop, with Southampton staying in the Premier League by the skin of their teeth once again.

A SportNation.bet spokesman said: "The odds suggest that Liverpool already have the league wrapped up and it's not even Christmas, while Leicester fans will be delighted with third place and a return to Champions League football and Frank Lampard will be pleased with a top four finish in his first season in charge at Stamford Bridge.

"Norwich and Watford are both six points from safety and the market predicts they will go down without a whimper, while Villa, who are also currently in the relegation zone, will join them back in the Championship."

1) Liverpool: 1/8 (title odds)

2) Man City: 6/1

3) Leicester: 40/1

4) Chelsea: 450/1

5) Tottenham: 500/1

6) Man Utd: 650/1

7) Wolves: 900/1

8) Arsenal: 1000/1

9) Everton: 2000/1

10) Sheff Utd: 2500/1

-

11) Burnley: 65/4 (relegation odds)

12) Palace: 11/1

13) Newcastle: 17/2

14) Brighton: 8/1

15) Bournemouth: 5/1

16) West Ham: 4/1

17) Southampton: 3/1

18) Aston Villa: 5/7

19) Watford: 3/11

20) Norwich: 1/12
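
For readers wondering how those fractional odds relate to probabilities: odds of a/b imply a probability of b/(a+b), before the bookmaker's margin is taken into account. Below is a minimal Python sketch of the conversion, using a few of the prices listed above (the function name is our own, purely for illustration):

# Convert fractional odds such as "40/1" into an implied probability.
def implied_probability(odds: str) -> float:
    a, b = (int(x) for x in odds.split("/"))
    return b / (a + b)

title_odds = {"Liverpool": "1/8", "Man City": "6/1", "Leicester": "40/1"}
for team, odds in title_odds.items():
    print(f"{team}: {odds} -> {implied_probability(odds):.1%}")
# Liverpool: 1/8 -> 88.9%, Man City: 6/1 -> 14.3%, Leicester: 40/1 -> 2.4%
# Summing implied probabilities across every runner gives more than 100%,
# because bookmaker prices build in a margin.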


Super Computer model rates Newcastle relegation probability and chances of beating Man Utd – The Mag

Interesting overview of Newcastle United for the season and Thursday's game at Old Trafford.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including the game at Old Trafford.

Their computer model gives Man Utd a 65% chance of a home win, with a 22% chance of a draw and a 13% chance of a Newcastle win (percentage probabilities rounded up/down to the nearest whole number).
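
FiveThirtyEight does not publish the SPI code itself, but the general shape of such a forecast can be illustrated with a simple stand-in: turn each team's rating into an expected goal count, then sum the probabilities of every scoreline. The Python sketch below uses independent Poisson distributions and invented expected-goal figures; it is illustrative only and not the actual SPI model.

import math

# Toy match forecast: expected goals in, win/draw/loss probabilities out.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def match_probabilities(home_xg, away_xg, max_goals=10):
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_xg) * poisson_pmf(a, away_xg)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return home_win, draw, away_win

# Hypothetical ratings: a strong home side against a weaker visitor.
hw, d, aw = match_probabilities(home_xg=1.9, away_xg=0.9)
print(f"home {hw:.0%}, draw {d:.0%}, away {aw:.0%}")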

When it comes to winning the title, they have the probability at 82% Liverpool, 17% Man City and the rest nowhere.

Also interesting to see how the computer model now rates the percentage probability chances of relegation:

82% Norwich

54% Watford

48% Villa

28% Southampton

24% West Ham

18% Bournemouth

15% Brighton

11% Newcastle United

7% Palace

5% Burnley

5% Everton

3% Arsenal

1% Sheff Utd

So they now rate Newcastle as having only around a one in nine chance of going down.


Super Computer model rates Newcastle United relegation probability and chances of beating Everton – The Mag

Interesting overview of Newcastle United for the season and today's game at St James' Park.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including this game at SJP against Everton.

Their computer model gives Everton a 41% chance of an away win, with a 28% chance of a draw and a 31% chance of a Newcastle win (percentage probabilities rounded up/down to the nearest whole number).

When it comes to winning the title, they have the probability at 95% Liverpool, 5% Man City and the rest nowhere.

Also interesting to see how the computer model now rates the percentage probability chances of relegation:

85% Norwich

59% Watford

45% Villa

32% West Ham

20% Bournemouth

15% Brighton

14% Southampton

12% Newcastle United

7% Burnley

4% Palace

4% Everton

2% Arsenal

1% Sheff Utd

So they now rate Newcastle as having only a one in eight chance of going down, though still more likely to be relegated than Burnley, Arsenal and Everton, despite that trio sitting below NUFC in the Premier League table at the halfway point.


Will there ever be a supercomputer that can think like HAL? – Macquarie University

Whether or not HAL will one day refuse to 'open the pod bay doors' IRL will depend on the research goals the field of artificial intelligence (AI) sets for itself.

Supercomputer: HAL 9000 is a fictional artificial intelligence character and the main antagonist in the Space Odyssey series.

Currently, the field is not prioritising the goal of developing a flexible, general-purpose intelligent system like HAL. Instead, most efforts are focused on building specialised AI systems that perform well, often much better than humans, in highly restricted domains.

These are the AI systems that power Google's search, Facebook's news feed and Netflix's recommendation engine; answer phones at call centres; translate between natural languages; and even provide medical diagnoses. So the portrait of AI that Stanley Kubrick developed in his film 2001: A Space Odyssey, while appropriate for the time (after all, Kubrick's film came out in 1968), appears pretty outdated in light of current developments.

That is not to say a superhuman general intelligence like HAL could not be built in principle, although what exactly it would take remains an open scientific question. But from a practical perspective, it seems highly unlikely that anything like HAL will be built in the near future either by academic researchers or industry.


Does this mean that artificial intelligence and other related fields like machine learning and computational neuroscience have nothing interesting to offer? Far from it. It's just that the goals have changed.

Artificial intelligence these days is more closely connected to the rapidly growing fields of machine learning, neural networks, and computational neuroscience. Major tech companies like Google and Facebook, among many others, have been investing heavily in these areas in recent years, and large in-house AI research groups are quickly becoming the norm. A perfect example of this is Google Brain.

So AI isn't going anywhere; it's just being transformed and incorporated into quite literally everything, from internet search to self-driving cars to 'intelligent' appliances. The future of AI is probably more accurately depicted by a toaster that knows when you want to eat breakfast in the morning than anything resembling a superintelligence like HAL.

Virtually everything in the popular media today about AI concerns deep learning. These algorithms work by using statistics to find patterns in data, and they have revolutionised the field of AI in recent years. Despite their immense power and ability to match, and in many cases exceed, human performance on image categorisation and other tasks, there are some things at which humans still excel.

For instance, deep convolutional neural networks must be trained on massive amounts of data, far more than humans require to exhibit comparable performance. Moreover, network training must be supervised in the sense that when the network is learning, each output the network produces for a given input is compared against a stored version of the correct output. The difference between actual and ideal provides an error signal to improve network performance.
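
To make that error signal concrete, here is a minimal NumPy sketch of supervised training for a tiny linear model: the output produced for each input is compared against the stored correct output, and the difference drives the weight update. It is a toy illustration, not the training code behind any production deep network.

import numpy as np

# Supervised learning in miniature: compare actual output to the stored
# correct output, and use the difference (the error signal) to improve.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))           # 100 labelled examples, 4 features each
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                          # the "correct outputs" supplied by a teacher

w = np.zeros(4)                         # untrained weights
lr = 0.1                                # learning rate

for epoch in range(50):
    pred = X @ w                        # actual output for every input
    error = pred - y                    # actual minus ideal
    grad = X.T @ error / len(X)         # gradient of the mean squared error
    w -= lr * grad                      # nudge weights to shrink the error

print("learned weights:", np.round(w, 2))   # approaches [1.0, -2.0, 0.5, 3.0]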

Incredible brainpower: AI software has been designed with cognitive abilities similar to those of the human brain, explain Crossley and Kaplan.

And yet humans can learn to do a remarkable variety of things, like visually categorising objects and driving cars, based on relatively small data sets and without explicit supervision. By comparison, a deep neural network might require a training set of millions of images or tens of millions of driving trials, respectively.

The critical question is, how do we do this? Our brains are powerful neural networks shaped by millions of years of evolution to do unsupervised (or better, self-supervised) learning, sometimes on the basis of limited data. This is where AI will be informed by ongoing work in the cognitive science and neuroscience of learning.

Cognitive science is the study of how the brain gives rise to the many facets of the mind, including learning, memory, attention, decision making, skilled action, emotion, etc. Cognitive science is therefore inherently interdisciplinary. It draws on biology, neuroscience, philosophy, physics and psychology, among other fields.

In particular, cognitive science has a long and intimate relationship with computer science and artificial intelligence. The influence between these two fields is bidirectional. AI influences cognitive science by providing new analysis methods and computational frameworks with which neural and psychological phenomena can be crisply described.


Cognitive science is at the heart of AI in the sense that the very concept of "intelligence" is fundamentally entangled with comparisons to human behaviour, but there are much more tangible instances of cognitive science influencing AI. For instance, the earliest artificial neural nets were created in an attempt to mimic the processing methods of the human brain.

More recent and more advanced artificial neural nets (e.g., deep neural nets) are sometimes deeply grounded in contemporary neuroscience. For instance, the architecture of artificial deep convolutional neural nets (the current state of the art in image classification) is heavily inspired by the architecture of the human visual system.

The spirit of appealing to how the brain does things to improve AI systems remains prevalent in current AI research (e.g., complementary learning systems, deep reinforcement learning, training protocols inspired by "memory replay" in the hippocampus), and it is common for modern AI research papers to include a section on biological plausibility, that is, how closely the workings of the computational system match what is known about how the brain performs similar tasks.

This all raises an interesting question about the frontiers of cognitive science and AI. The reciprocity between cognitive science and artificial intelligence can be seen even at the final frontier of each discipline. In particular, will cognitive science ever fully understand how the brain implements human cognition, and the corresponding general human intelligence?

And back to our original question about HAL: Will artificial intelligence ever match or surpass human intelligence on every dimension?

At the moment, all we can do is speculate, but a few things seem unambiguously true. The continued pursuit of how the brain implements the mind will yield ever richer computational principles that can inspire novel artificial intelligence approaches. Similarly, ongoing progress in AI will continue to inspire new frameworks for thinking about the wealth of data in cognitive science.

Dr Matthew Crossley is a researcher in the Department of Cognitive Science at Macquarie University working on category and motor learning. Dr David Kaplan is a researcher in the Department of Cognitive Science at Macquarie University working on motor learning and the foundations of cognitive science.

Understanding cognition, which includes processes such as attention, perception, memory, reading and language, is one of the greatest scientific challenges of our time. The new Bachelor of Brain and Cognitive Sciences degree, the only one of its kind in Australia, provides a strong foundation in the rapidly growing fields of cognitive science, neuroscience and computation.


AWS wants to reinvent the supercomputer, starting with the network – ZDNet


Amazon Web Services wants to reinvent high-performance computing (HPC), and according to VP of AWS global infrastructure Peter DeSantis, it all starts with the network.

Speaking at his Monday Night Live keynote, DeSantis said AWS has been working for the last decade to make supercomputing in the cloud a possibility.

"Over the past year we've seen this goal become reality," he said.

According to DeSantis, there's no precise definition of an HPC workload, but he said the one constant is that it is way too big to fit on a single server.

"What really differentiates HPC workloads is the need for high performance networking so those servers can work together to solve problems," he said, talking on the eve of AWS re:Invent about what the focus of the cloud giant's annual Las Vegas get together will be.

"Do I care about HPC? I hope so, because HPC impacts literally every aspect of our lives the big, hard problems in science and engineering."


DeSantis explained that typically in supercomputing, each server works out a portion of the problem, and then all the servers share the results with each other.

"This information exchange allows the servers to continue doing their work," he said. "The need for tight coordination puts significant pressure on the network."

To scale these HPC workloads effectively, DeSantis said a high-performance, low-latency network is required.
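
The compute-then-exchange pattern DeSantis describes is commonly expressed with MPI collectives such as allreduce, and the exchange step is exactly where a low-latency network pays off. The sketch below uses the mpi4py library and assumes an MPI installation is available; it is an illustration of the pattern, not AWS code.

from mpi4py import MPI
import numpy as np

# Each rank (server) works on its own slice of the problem, then every
# rank shares its result with the others via an allreduce.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Local work: sum this rank's chunk of a large vector.
chunk = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)
local_sum = chunk.sum()

# Information exchange: every rank ends up holding the global total.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks computed a global sum of {global_sum:.3e}")

# Run with, for example: mpirun -np 4 python allreduce_demo.py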

"If you look really closely at a modern supercomputer, it really is a cluster of servers with a purpose-built, dedicated network network provides specialised capabilities to help run HPC applications efficiently," he said.

In touting the cloud as the best place to run HPC workloads, DeSantis said the "other" problem with physical supercomputers is they're custom built, which he said means they're expensive and take years to procure and stand up.

"One of the benefits of cloud computing is elasticity," he continued.

Another problem AWS wants to fix with supercomputing in the cloud is the democratisation element, with DeSantis saying one issue is getting access to a supercomputer.

"Usually only the high-value applications have access to the supercomputer," he said.

"With more access to low-cost supercomputing we could have safer cars we could have more accurate forecasting, we could have better treatment for diseases, and we can unleash innovation by giving everybody [access].

"If we want to reinvent high performance computing, we have to reinvent supercomputers."

AWS also wants to reinvent machine learning infrastructure.

"Machine learning is quickly becoming an integral part of every application," DeSantis said.

However, the optimal infrastructure for the two components of machine learning -- training and inference -- are very different.

"A good machine learning dataset is big, and they're getting bigger and training involves doing multiple passes through your training data," De Santis said.

"We're excited in investments we've made in HPC and it's helping us with machine learning."

Earlier on Monday, Formula 1 announced it had partnered with AWS to carry out simulations that it says have resulted in the car design for the 2021 racing season, touting the completion of a Computational Fluid Dynamics (CFD) project that simulates the aerodynamics of cars while racing.

The CFD project used over 1,150 compute cores to run detailed simulations comprising over 550 million data points that model the impact of one car's aerodynamic wake on another.

Asha Barbaschow travelled to re:Invent as a guest of AWS.


Idaho’s Third Supercomputer Coming to Collaborative Computing Center – HPCwire

IDAHO FALLS, Idaho, Dec. 5, 2019 - A powerful new supercomputer arrived this week at Idaho National Laboratory's Collaborative Computing Center. The machine has the power to run complex modeling and simulation applications, which are essential to developing next-generation nuclear technologies.

Named after a central Idaho mountain range, Sawtooth arrives in December and will be available to users early next year. The $19.2 million system ranks 37th on the 2019 TOP500 list of the fastest supercomputers in the world. That is the highest ranking reached by an INL supercomputer. Of 102 new systems added to the list in the past six months, only three were faster than Sawtooth.

It will be able to crunch much more complex mathematical calculations at approximately six times the speed of Falcon and Lemhi, INL's current systems.

The boost in computing power will enable researchers at INL and elsewhere to simulate new fuels and reactor designs, greatly reducing the time, resources and funding needed to transition advanced nuclear technologies from the concept phase into the marketplace.

Supercomputing reduces the need to build physical experiments to test every hypothesis, as was the process used to develop the majority of technologies used in currently operating reactors. By using simulations to predict how new fuels and designs will perform in a reactor environment, engineers can select only the most promising technologies for the real-world experiments, saving time and money.

INL's ability to model new nuclear technologies has become increasingly important as nations strive to meet growing energy needs while minimizing emissions. Today, there are about 450 nuclear power reactors operating in 30 countries plus Taiwan. These reactors produce approximately 10% of the world's electricity and 60% of America's carbon-free electricity. According to the World Nuclear Association, 15 countries are currently building about 50 power reactors.

John Wagner, the associate laboratory director for INL's Nuclear Science and Technology directorate, said Sawtooth plays an important role in developing and deploying advanced nuclear technologies and is a key capability for the National Reactor Innovation Center (NRIC).

In August, the U.S. Department of Energy designated INL to lead NRIC, which was established to provide developers the resources to test, demonstrate and assess performance of new nuclear technologies, critical steps that must be completed before they are available commercially.

"With advanced modeling and simulation and the computing power now available, we expect to be able to dramatically shorten the time it takes to test, manufacture and commercialize new nuclear technologies," Wagner said. "Other industries and organizations, such as aerospace, have relied on modeling and simulation to bring new technologies to market much faster without compromising safety and performance."

Sawtooth is funded by the DOE's Office of Nuclear Energy through the Nuclear Science User Facilities program. It will provide computer access to researchers at INL, other national laboratories, industry and universities. Idaho's three research universities will be able to access Sawtooth and INL's other supercomputers remotely via the Idaho Regional Optical Network (IRON), an ultra-high-speed fiber optic network.

"This system represents a significant increase in computing resources supporting nuclear energy research and development and will be the primary system for DOE's nuclear energy modeling and simulation activities," said Eric Whiting, INL's division director for Advanced Scientific Computing. "It will help guide the future of nuclear energy."

Sawtooth, with its nearly 100,000 processors, is being installed in the new 67,000-square-foot Collaborative Computing Center, which opened in October. The new facility was designed to be the heart of modeling and simulation work for INL as well as provide floor space, power and cooling for systems such as Sawtooth. Falcon and Lemhi, the lab's current supercomputing systems, also are slated to move to this new facility.

About INL

INL is one of the U.S. Department of Energy's (DOE's) national laboratories. The laboratory performs work in each of DOE's strategic goal areas: energy, national security, science and environment. INL is the nation's center for nuclear energy research and development. Day-to-day management and operation of the laboratory is the responsibility of Battelle Energy Alliance. See more INL news at www.inl.gov. Follow @INL on Twitter or visit our Facebook page at www.facebook.com/IdahoNationalLaboratory.

Source: Idaho National Laboratory


The rise and fall of the PlayStation supercomputers – The Verge

Dozens of PlayStation 3s sit in a refrigerated shipping container on the University of Massachusetts Dartmouth's campus, sucking up energy and investigating astrophysics. It's a popular stop for tours trying to sell the school to prospective first-year students and their parents, and it's one of the few living legacies of a weird science chapter in PlayStation's history.

Those squat boxes, hulking on entertainment systems or dust-covered in the back of a closet, were once coveted by researchers who used the consoles to build supercomputers. With the racks of machines, the scientists were suddenly capable of contemplating the physics of black holes, processing drone footage, or winning cryptography contests. It only lasted a few years before tech moved on, becoming smaller and more efficient. But for that short moment, some of the most powerful computers in the world could be hacked together with code, wire, and gaming consoles.

Researchers had been messing with the idea of using graphics processors to boost their computing power for years. The idea is that the same power that made it possible to render Shadow of the Colossus' grim storytelling was also capable of doing massive calculations if researchers could configure the machines the right way. If they could link them together, suddenly those consoles or computers started to be far more than the sum of their parts. This was cluster computing, and it wasn't unique to PlayStations; plenty of researchers were trying to harness computers to work as a team, trying to get them to solve increasingly complicated problems.

The game consoles entered the supercomputing scene in 2002 when Sony released a kit called Linux for the PlayStation 2. "It made it accessible," Craig Steffen said. "They built the bridges, so that you could write the code, and it would work." Steffen is now a senior research scientist at the National Center for Supercomputing Applications (NCSA). In 2002, he had just joined the group and started working on a project with the goal of buying a bunch of PS2s and using the Linux kits to hook them (and their Emotion Engine central processing units) together into something resembling a supercomputer.

They hooked up between 60 and 70 PlayStation 2s, wrote some code, and built out a library. "It worked okay, it didn't work superbly well," Steffen said. There were technical issues with the memory: two specific bugs that his team had no control over.

"Every time you ran this thing, it would cause the kernel on whatever machine you ran it on to kind of go into this weird unstable state and it would have to be rebooted, which was a bummer," Steffen said.

They shut the project down relatively quickly and moved on to other questions at the NCSA. Steffen still keeps one of the old PS2s on his desk as a memento of the program.

But that's not where the PlayStation's supercomputing adventures met their end. The PS3 entered the scene in late 2006 with powerful hardware and an easier way to load Linux onto the devices. Researchers would still need to link the systems together, but suddenly, it was possible for them to imagine linking together all of those devices into something that was a game-changer instead of just a proof-of-concept prototype.

That's certainly what black hole researcher Gaurav Khanna was imagining over at UMass Dartmouth. "Doing pure period simulation work on black holes doesn't really typically attract a lot of funding, it's just because it doesn't have too much relevance to society," Khanna said.

Money was tight, and it was getting tighter. So Khanna and his colleagues were brainstorming, trying to think of solutions. One of the people in his department was an avid gamer and mentioned the PS3's Cell processor, which was made by IBM. A similar kind of chip was being used to build advanced supercomputers. "So we got kind of interested in it, you know, is this something interesting that we could misuse to do science?" Khanna says.

Inspired by the specs of Sony's new machine, the astrophysicist started buying up PS3s and building his own supercomputer. It took Khanna several months to get the code into shape and months more to clean up his program into working order. He started with eight, but by the time he was done, he had his own supercomputer, pieced together out of 176 consoles and ready to run his experiments, with no jockeying for space or paying other researchers to run his simulations of black holes. Suddenly, he could run complex computer models or win cryptography competitions at a fraction of the cost of a more typical supercomputer.

Around the same time, other researchers were having similar ideas. A group in North Carolina also built a PS3 supercomputer in 2007, and a few years later, at the Air Force Research Laboratory in New York, computer scientist Mark Barnell started working on a similar project called the Condor Cluster.

The timing wasn't great. Barnell's team proposed the project in 2009, just as Sony was shifting toward the pared-back PS3 Slim, which didn't have the capability to run Linux, unlike the original PS3. After a hack, Sony even issued a firmware update that pulled OpenOS, the system that allowed people to run Linux, from existing PS3 systems. That made finding useful consoles even harder. The Air Force had to convince Sony to sell it the un-updated PS3s that the company was pulling from shelves, which, at the time, were sitting in a warehouse outside Chicago. It took many meetings, but eventually, the Air Force got what it was looking for, and in 2010, the project had its big debut.

Running on more than 1,700 PS3s that were connected by five miles of wire, the Condor Cluster was huge, dwarfing Khanna's project, and it was used to process images from surveillance drones. During its heyday, it was the 35th fastest supercomputer in the world.

But none of this lasted long. Even while these projects were being built, supercomputers were advancing, becoming more powerful. At the same time, gaming consoles were simplifying, making them less useful to science. The PlayStation 4 outsold both the original PlayStation and the Wii, nearing the best-selling status currently held by the PS2. But for researchers, it was nearly useless. Like the slimmer version of the PlayStation 3 released before it, the PS4 can't easily be turned into a cog for a supercomputing machine. "There's nothing novel about the PlayStation 4, it's just a regular old PC," Khanna says. "We weren't really motivated to do anything with the PlayStation 4."

The era of the PlayStation supercomputer was over.

The one at UMass Dartmouth is still working, humming with life in that refrigerated shipping container on campus. The UMass Dartmouth machine is smaller than it was at its peak of about 400 PlayStation 3s. Parts of it have been cut out and repurposed. Some are still working together in smaller supercomputers at other schools; others have broken down or been lost to time. Khanna has since moved on to trying to link smaller, more efficient devices together into his next-generation supercomputer. He says the Nvidia Shield devices he's working with now are about 50 times more efficient than the already-efficient PS3.

It's the Air Force's supercluster of super consoles that had the most star-studded afterlife. When the program ended about four years ago, some consoles were donated to other programs, including Khanna's. But many of the old consoles were sold off as old inventory, and a few hundred were snapped up by people working with the TV show Person of Interest. In a ripped-from-the-headlines move, the consoles made their silver screen debut in the show's season 5 premiere, playing, wait for it, a supercomputer made of PlayStation 3s.

"It's all Hollywood," Barnell said of the script, "but the hardware is actually our equipment."

Correction, 7:05 PM ET: Supercomputer projects needed the original PS3, not the PS3 Slim, because Sony had removed Linux support from the console in response to hacks, which later led to a class-action settlement. This article originally stated that it was because the PS3 Slim was less powerful. We regret the error.


Premier League table: talkSPORT Super Computer predicts where every club will finish in 2019/20 campaign – talkSPORT.com

With managers falling on their swords left, right, and centre, the Premier League is proving to be just as exciting as ever.

The troubling seasons the likes of Tottenham, Manchester United, and Arsenal have had, along with the success of Leicester and Sheffield United, mean this could end up being an incredible, era-defining year in the English top flight.


And the rest of the campaign certainly seems set to be enthralling with December seeing Manchester and Merseyside derbies along with more humdingers you can hear live on the talkSPORT Network.

But how will it all end?

We booted up the talkSPORT Super Computer to find out just what is going to happen.

You can see the results and the predicted Premier League table in the image gallery accompanying the original article (pictures: Getty Images, AFP).

Saturday is GameDay on talkSPORT as we bring you THREE live Premier League commentaries across our network


A Success on Arm for HPC: We Found a Fujitsu A64fx Wafer – AnandTech

When speaking about Arm in the enterprise space, the main angle for discussion is on the CPU side. Having a high-performance SoC at the heart of the server has been a key goal for many years, and we have had players such as Amazon, Ampere, Marvell, Qualcomm, Huawei, and others vying for the server market. The other angle of attack is co-processors and accelerators. Here we have one main participant: Fujitsu. We covered the A64FX when the design was disclosed at Hot Chips last year, with its super high cache bandwidth, and it will be available on a simple PCIe card. The main end-point for a lot of these cards will be the Fugaku / Post-K supercomputer in Japan, where we expect it to hit one of the top spots on the TOP500 supercomputer list next year.

After the design disclosure last year at Hot Chips, at Supercomputing 2018 we saw an individual chip on display. This year at Supercomputing 2019, we found a wafer.

I just wanted to post some photos. Enjoy.

The A64FX is the main recipient of the Arm Scalable Vector Extensions, new to Arm v8.2, which in this instance gives 48 computing cores with 512-bit wide SIMD, backed by 32 GiB of HBM2. Inside the chip is a custom network, and externally the chip is connected via a Tofu interconnect (6D/Torus); the chip provides 2.7 TFLOPS of DGEMM performance. The chip itself is built on TSMC 7nm and has 8.786 billion transistors, but only 594 pins. Peak memory bandwidth is 1 TB/s.
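
Those headline figures, 2.7 TFLOPS of DGEMM and 1 TB/s of peak memory bandwidth, are enough for a quick back-of-the-envelope roofline check of which kernels the chip can keep fed. The Python sketch below uses only those two quoted numbers; the per-kernel arithmetic intensities are rough textbook values assumed for illustration, not Fujitsu measurements.

# Back-of-the-envelope roofline using the A64FX figures quoted above.
peak_flops = 2.7e12      # FP64 DGEMM rate, FLOP/s
peak_bw = 1.0e12         # peak memory bandwidth, bytes/s

machine_balance = peak_flops / peak_bw   # FLOP per byte needed to stay compute-bound
print(f"machine balance: {machine_balance:.1f} FLOP/byte")

# Rough arithmetic intensities (FLOP/byte) for a few common kernels.
kernels = {
    "STREAM triad (a = b + s*c)": 0.08,
    "sparse matrix-vector": 0.25,
    "large blocked DGEMM": 30.0,
}
for name, intensity in kernels.items():
    attainable = min(peak_flops, intensity * peak_bw)
    bound = "compute-bound" if attainable == peak_flops else "memory-bound"
    print(f"{name}: ~{attainable / 1e12:.2f} TFLOP/s ({bound})")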

The chip is built for high performance, high throughput, and high performance per watt, supporting FP64 through to INT8. The L1 data cache is designed for sustained throughput, and power management is tightly controlled on chip. Either way you slice it, this chip is mightily impressive. We even saw HPE deploy two of these chips in a single half-width node.


Why India Needed A 100PF Supercomputer To Help With Weather Forecasting – Analytics India Magazine

India has been dealing with changing climatic conditions and needs reliable forecasts for extreme weather events such as droughts, floods, cyclones and lightning, as well as for air quality. On November 28, India announced that over the next two years it plans to augment its existing supercomputing capacity to 100 petaflops for accurate weather forecasting. The announcement was made at the four-day workshop on 'Prediction skill of extreme Precipitation events and tropical cyclones: Present Status and future prospect (IP4)' and Annual Climate Change.

Why use Supercomputers?

With 70% of India's livelihoods still depending on agriculture, increasing the accuracy of weather prediction becomes essential. To understand how supercomputers help in weather prediction, we have to understand a little about how weather forecasting works.

Weather forecasting uses what we call weather forecast models. These models are the closest thing meteorologists have to a time machine. A weather forecasting model is a computer programme which simulates what atmospheric conditions could look like in the foreseeable future. These models solve groups of mathematical equations that govern the climatic conditions, and these equations approximate the atmospheric changes before they take place. There are two types: statistical and dynamical models. The statistical models haven't been providing reliable results, so a dynamical model has been developed for Indian conditions.

However, to run such dynamic models and provide such forecasting, enhanced supercomputing power is necessary.

Every hour, weather satellites, weather balloons, ocean buoys, and surface weather stations around the world record billions of data points. This large volume of data is stored, processed and analysed, and that's where supercomputers come in. Meteorologists could solve the governing equations by themselves, but the equations are so complex that it would take months to solve them by hand. In contrast, supercomputers can solve them in a matter of hours. This process of using model equations to forecast the weather conditions numerically is called Numerical Weather Prediction.
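
To give a flavour of what solving those governing equations numerically involves, here is a toy one-dimensional advection step of the kind real NWP models repeat billions of times over three-dimensional global grids. The grid spacing, wind speed and time step are made up for illustration; operational models use far more sophisticated numerics and physics.

import numpy as np

# Toy NWP step: advect a temperature field around a 1-D ring of grid
# points with an upwind finite-difference scheme.
n_points = 360                    # roughly one point per degree of longitude
dx = 100_000.0                    # grid spacing in metres
wind = 10.0                       # constant westerly wind, m/s
dt = 600.0                        # 10-minute step (wind*dt/dx = 0.06, stable)

x = np.arange(n_points)
temperature = 15.0 + 10.0 * np.exp(-((x - 180) / 20.0) ** 2)   # a warm blob

def step(field):
    # Upwind difference: each point is influenced by its upwind neighbour.
    return field - wind * dt / dx * (field - np.roll(field, 1))

for _ in range(6 * 24):           # integrate forward 24 hours
    temperature = step(temperature)

print("warmest grid point after 24 hours:", int(temperature.argmax()))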

Some of the world's most famous weather forecast and climate monitoring models include:

When the supercomputer gives its output, the forecaster takes this information into consideration, along with their knowledge of weather processes, personal experience and familiarity with nature's unpredictability, to issue the forecast.

Now, what is this 100 PF Supercomputer that India is planning to use?

First, 100PF stands for 100 petaflops. Just as other devices have units for measuring speed, a supercomputer's speed is measured in FLOPS (floating-point operations per second).

A fun way to understand what a petaflop is: if a 1-petaflop machine spent one second on a calculation, a person performing one operation per second would take about 31,688,765 years to do the same work.
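
The arithmetic behind that comparison is easy to reproduce. The short calculation below takes one petaflop as 10^15 operations per second and uses an astronomical year of about 365.24 days, which recovers the figure quoted above.

# One second of work for a 1-petaflop machine is 10**15 operations.
ops_in_one_second = 10 ** 15

# A person doing one operation per second, non-stop:
seconds_per_year = 365.2422 * 24 * 60 * 60
years_needed = ops_in_one_second / seconds_per_year

print(f"{years_needed:,.0f} years")   # about 31,688,765 years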

These supercomputers are hard to manufacture and usually have an operating span of about five years, partly because of the demanding thermal conditions they operate in. They also need a large facility because of their sheer size. The US supercomputer Summit, which holds the top position in the supercomputer rankings with a capacity of about 148.6PF, spreads over 5,600 sq ft, about the size of two tennis courts.

As for India, in 2018 the French company Atos won the Rs 4,500 crore tender to manufacture these supercomputers, beating competitors including Lenovo, HP and NetWeb Technologies. Atos has a three-year contract with India to manufacture 70 HPC systems under the National Supercomputing Mission.

Over the last ten years, India has successfully upgraded its supercomputing capacity. Below is a list of some of the acquisitions of High-Performance Computing (HPC) systems over the years:

(To put a teraflop in perspective: to match what a 1 TF system does in just one second, you would have to perform one calculation every second for 31,688.77 years.)

Both Pratyush and Mihir were inaugurated in 2018. With Pratyush and Mihir, India moved into the top 30 (from 368th position) in the Top500 list of HPC facilities in the world. The facility also places India in 4th position, after Japan, the UK and the USA, in terms of HPC resources dedicated to the weather and climate community.

Some notable predictions and uses of supercomputing power in India:

Dr M Rajeevan, the Union Secretary for Earth Sciences, said this week that in 2019 the government predicted five cyclones accurately. India is currently using supercomputers with a combined capacity of 6.8 PF. One can imagine how powerful the services of a 100PF supercomputer could be, because even with this year's accurate weather predictions, some improvement is needed. To provide precise predictions at high resolution, more supercomputing power is required. It will benefit not only India but also the neighbouring countries.


The Snapdragon 865: The new chip where everything really is new and improved – Android Central

Another year means another new version of Qualcomm's high-end Snapdragon mobile chip. It's always a good thing to see and even though it never measures up to Apple's A-series processors in raw computing numbers (and it doesn't need to because raw numbers usually don't mean anything), you know Qualcomm is going to bring its A-game. There will always be a thing or two that turns out to be a significant upgrade from last year's model.

This year, what's a significant upgrade is best described as everything. Qualcomm spent the year fighting in court and staving off buyout attempts and building a chip where everything is newer, better, stronger, and faster.


The Kryo CPU cores are more powerful, yet use the same arrangement that means the Snapdragon can have good battery life. The GPU is insane and is actually built with a mind for high-end gaming first and foremost. The Camera ISP can shoot 8K video or a 200-megapixel photo. Yes, 200 megapixels. And we haven't even mentioned the new AI capabilities. This thing is for real.

CPU cores and their arrangements aren't exciting to most of us. Qualcomm has used the same basics in its Snapdragon series for a while. You'll find a combination of low-power cores, moderate-power cores and one big honking battery-bleeding group that's sickeningly powerful, kicking in when it's needed and sleeping when it's not. This works great and nothing here has changed. What did change in the CPU was a 25% across-the-board improvement in processing power on a 7-nanometer die, so it's not going to kill your battery before lunch unless you are trying to push everything to the limit.

In 2009 the HTC Hero was released with the Qualcomm MSM7200A chip. It was the most powerful phone you could buy. Ten years later, we have supercomputer chips going into our next phones.

The new Adreno GPU is a gaming-first hunk of silicon that's not only optimized for desktop-class titles (which you're going to need if you really want to put an ARM CPU in a Windows 10 laptop) but is built so that Qualcomm can work with game developers to optimize the driver and let you download it from the Play Store. That's amazing, and my favorite part of the whole announcement.


The Camera ISP (Image Signal Processor) in current-generation Snapdragon processors is one of the finest available. Companies like Google or Huawei may depend on AI to make great photos, but with the right camera hardware, the Qualcomm Spectra ISP can do a great job, too. And starting in 2020, it can do it in 8K videography and 200MP photos, incorporate Dolby Vision or HDR10, and handle a handful of other things that the storage controller will never be able to keep up with and that a measly 512GB of storage on the highest-end phones won't be able to hold. But that's not the point: Qualcomm can do it, so now it's time for other companies to step up so it can happen.

The X55 modem is going to be in every phone using the Snapdragon 865. That means world-class LTE performance, both Sub-6 and mmWave 5G, and it will be able to use both Stand-Alone and Non-Stand Alone configurations in single or dual SIM modes. The RF stuff doesn't stop there, though.

The Snapdragon 865 also has Wi-Fi 6 with Qualcomm's patented fast connect setup that saves even more battery power when using the right Wi-Fi gear, and a new version of the aptX codec designed for voice calls that allows super-wideband transfer over Bluetooth, so your Bluetooth headphones don't make it sound like you're in a sewer when you make a call.

Imagine if your phone could "hear" the difference when talking to it at home or talking to it in the car. That's contextual awareness.

Finally, as if this isn't quite enough, Qualcomm has ramped up its Hexagon Tensor Accelerator package that's built for AI. It can process an amazing 15 trillion operations per second, which means that if you are a developer who wants to integrate AI into your software, the engine that can do it has enough power. Qualcomm specifically says that real-time translations and a Sensing Hub package that can be contextually aware of its surroundings are part of the 5th generation of its AI engine, and I can't wait to see what it can do in the real world.

Most people aren't going to know they have a fancy upgraded chip inside their phone or care about it as long as they can do the things they bought a smartphone for. The new Snapdragon 865 brings that and so much more that the people who can and do care will also be plenty happy with Qualcomm's offering for 2020. I know I can't wait.


Super Computer model rates Newcastle relegation probability and chances of beating Sheffield United – The Mag

Interesting overview of Newcastle United for the season and tonight's game at Sheffield United.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including the game at Bramall Lane.

Their computer model gives Sheffield United a 49% chance of a home win, with a 27% chance of a draw and a 24% chance of a Newcastle win.

They also have predictions as to how the final Premier League table will look; for winning the title it is now Man City 27% and Liverpool 70%, with the rest basically nowhere and Leicester next highest at 2%.

Interesting to see how the computer model rates the percentage probability chances of relegation as:

70% Norwich

65% Watford

26% Newcastle United

25% West Ham

24% Southampton

21% Villa

18% Brighton

16% Bournemouth

10% Burnley

8% Sheff Utd

8% Everton

4% Palace

3% Arsenal

2% Wolves


Penguin Computing to deliver Magma Supercomputer, one of the First Intel – AiThority

Penguin Computing, Inc., a leader in high-performance computing (HPC), artificial intelligence (AI), and enterprise data center solutions and services, announced that it, along with partners Intel and CoolIT, will deliver the Magma supercomputer to Lawrence Livermore National Laboratory. The Magma system was procured through the Commodity Technology Systems (CTS-1) contract with the National Nuclear Security Administration (NNSA) and is one of the first deployments of Intel Xeon Platinum 9200 series processors, with support from CoolIT Systems' complete direct liquid cooling and an Omni-Path interconnect.


Magma is based on RelionXE2142eAP compute servers. Magma's 752 compute nodes are each configured with dual Xeon Platinum 9242 processors, with a theoretical peak of over 7 TFLOPS per node and 293TB of total system memory, giving an Rpeak of 5.313 PFLOPS. CoolIT Systems provides the complete direct liquid cooling solution for Magma through a blind-mate coldplate loop design which captures more than 85% of the server heat through CPU, DIMM and VR coldplates, allowing the servers to operate at maximum efficiency. The CoolIT subfloor piping, in-rack manifolds and row-based CHx750 CDUs deliver the required heat exchanging capability and coolant flow to support all racks.
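
As a quick sanity check, the quoted system Rpeak is consistent with the per-node figure: dividing it back across the 752 nodes recovers the "over 7 TFLOPS" per-node peak, and the same division spreads the 293TB of memory across the nodes. The sketch below uses only the numbers in the paragraph above.

# Consistency check of the Magma figures quoted above.
nodes = 752
rpeak_pflops = 5.313                  # system theoretical peak, PFLOPS
total_memory_tb = 293

per_node_tflops = rpeak_pflops * 1000 / nodes
per_node_memory_gb = total_memory_tb * 1000 / nodes

print(f"per-node peak: {per_node_tflops:.2f} TFLOPS")     # ~7.07, i.e. "over 7 TFLOPS"
print(f"per-node memory: {per_node_memory_gb:.0f} GB")    # ~390 GB per node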

Funded through NNSA's Advanced Simulation & Computing (ASC) program, Magma will support NNSA's Life Extension Program and efforts critical to ensuring the safety, security and reliability of the nation's nuclear weapons in the absence of underground testing.

"The convergence of HPC and AI is here today. We are excited to deliver Magma, an HPC system that is enhanced by artificial intelligence technology," said William Wu, Vice President of Hardware Products at Penguin Computing. "We are seeing artificial intelligence permeate every industry and, specifically in HPC, we can now deliver a converged platform that allows AI to accelerate HPC modeling for our data scientist customers."

"We continue designing new, leading edge solutions with our partners for the DOE NNSA's CTS-1 contract. Magma is another example of a great shared effort resulting in an HPC cluster designed and built to meet new demanding workloads. We anticipate this system to qualify for the November 2019 Top500 HPC list," said Ken Gudenrath, DOE Director at Penguin Computing.


"Penguin Computing is committed to expanding the world's vision of what is possible! The Magma cluster brings a new level of synergy amongst our clients, partners and Penguin Computing. One of our primary goals with Magma is to bring new mission technologies and capabilities to Livermore National Labs and its user communities," said Sid Mair, President of Penguin Computing.

"Magma is a major leap forward in HPC and AI convergence that could only be achieved with trusted engineering collaboration between Lawrence Livermore National Lab, Penguin Computing, and Intel," said Phil Harris, VP and GM of Intel's Datacenter Solutions Group. "With up to 96 cores per node, massive memory bandwidth, and integrated AI acceleration with Intel DL Boost technology, the Intel Xeon Platinum 9200 processor will provide a powerful foundation for Lawrence Livermore National Lab to enhance its ability to achieve its mission goals."

"The Commodity Technology System efforts at NNSA represent a very cost-effective way to manage our workload at each of our three laboratories," said Mark Anderson, Director for NNSA's Office of Advanced Simulation and Computing and Institutional Research and Development Programs. "In this model, commodity-based systems take on the bulk of day-to-day computing, leaving the larger advanced technology capability systems available for only the most demanding problems across the Tri-Lab community. This is just an example of the sophisticated approach NNSA is taking to manage demanding workloads in the most efficient manner for the country."

"Magma represents a timely addition to our CTS machines in order to address the significant surge in demand coming from NNSA's major Life Extension Program," said Michel McCoy, LLNL's Advanced Simulation & Computing program director. "It is essential to have available a supply chain that can respond essentially instantly, delivering state-of-the-art technology in just a few months to meet pressing national security needs. We look forward to moving this system into production as fast as possible."


Supercomputer simulation reveals how galaxies eat gas and evolve – Sky News

A new simulation from a NASA supercomputer has revealed how galaxies evolve by eating the gas spread through space around them.

When stars reach the end of their life cycle they can explode as a supernova, blowing gas formed of elements made inside the star back into space.

This gas and dust collects into enormous clouds which can eventually collapse, leading to anywhere between dozens to tens of thousands of stars forming almost simultaneously.

Without showing the light from the stars themselves, the NASA simulation depicts gases moving in and out of an evolving galaxy over 13 billion years.

It shows gases in a range of colours, from purple to yellow, to indicate the density of the gas, where purple is the lower-density gas and yellow is higher density.

There are blue and red colours which indicate the temperature of the gas too.

What the supercomputer simulation reveals is how colder, denser gas flows in along cosmic filaments to the spots where stars are forming.

When these stars explode as supernovae they blast galactic superwinds out of the galaxy, and these are the less dense hotter gases in the simulation.

"As there is more star formation and thus more supernovae at early times, these winds become calmer as the galaxy evolves," according to NASA.

"Unlike bright galaxies that emit plenty of light for us to observe, it's far more difficult to see the dark gases in the unlit corners of the cosmos," said NASA.

"One way to do this is by finding bright sources of light - such as other galaxies - and measuring how these gases absorb that light, to get a glimpse of what's in these hidden areas.

"Scientists use such 'cosmic lighthouses' to illuminate this cosmic fog rolling in from the dark 'oceans' between galaxies," NASA explained.

But interpreting the data from these observations is very difficult.

Powerful and complicated supercomputer simulations are carried out using the Pleiades supercomputer at the NASA Advanced Supercomputing facility at the Ames Research Centre in Silicon Valley.

The simulations are then matched up with observations from the Hubble space telescope to extrapolate the properties of the gas hidden between galaxies.

NASA said: "The results tell us that this space is far from empty. It has complex structures made of churning, turbulent gases and small clouds, as well as extreme temperatures.

"Light is one of our few tools to directly observe the cosmos, but combined with scientific ingenuity and supercomputing, we can uncover so much more."


Supercomputer Market Outline and Pipeline Review from 2019-2025|Cray – The Connect Report

The Global Supercomputer Market report is a detailed research study that answers key questions about emerging trends and growth momentum in this industry. It identifies the most visible barriers to growth, as well as trends within the various application sectors of the global market.

The study focuses on the driving factors, restraints, and hurdles affecting the expansion of the market. It offers industry insights into upcoming areas of the business and the impact of technological innovations on market growth.

Request for Free Sample Copy at: http://www.researchreportcenter.com/request-sample/1264162

Key players covered: Cray, Dell, HPE, Lenovo, Fujitsu

Regions covered: North America, China, Rest of Asia-Pacific, UK, Europe, Central & South America, Middle East & Africa

Get Discount on this Report: http://www.researchreportcenter.com/check-discount/1264162

To Clear Any Query about Report, Please Refer Link: http://www.researchreportcenter.com/send-an-enquiry/1264162

Customization of the Report: This report can be customized as per your needs for additional data or countries. Please connect with our sales team (sales@researchreportcenter.com).


Watch This Ultra-Hypnotic Supercomputer Simulation of Galaxies Feasting – Free

Galaxies are ravenous eaters. In addition to occasionally cannibalizing each other, galaxies are constantly feeding on gases strewn across the vast spaces that separate them.

These gases spill into the intergalactic medium when stars explode into supernovae, and are subsequently recycled when they are sucked back into galaxies to fuel the formation of new stars.

On Wednesday, NASA released a mesmerizing new visualization of this dynamic process playing out over the course of billions of years. The simulation was generated by the Pleiades supercomputer at NASA's Ames Research Center in Silicon Valley, based on observations of galaxies and the rare glimpses scientists sometimes get of gas surrounding them.

The team that collected the data and ran the supercomputer models is called Figuring Out Gas and Galaxies in Enzo (FOGGIE). The FOGGIE acronym refers to the term "cosmic fog", which describes intergalactic gases that are illuminated by nearby galaxies. These spectral gases look something like "fog rolling in from the dark 'oceans' between galaxies," according to a NASA statement.

Molly Peeples, an associate research scientist at Johns Hopkins University who leads the FOGGIE project, presented the simulation at the supercomputing conference SC19, which is being held this week in Denver, Colorado.

The visualization is color-coded, with yellow representing regions with high densities of gas, such as the core of the simulated galaxy, while purples show where gas is sparser. Reds and blues illustrate the temperature gradient, from hot to cold. Sudden bursts of red show how hot, energetic supernova explosions create superwinds, which blow gas into intergalactic space, where it cools into blue cosmic fog.
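For readers curious how such a colour mapping is built in practice, here is a minimal sketch (not FOGGIE's actual rendering pipeline) that maps a placeholder density field onto a purple-to-yellow ramp and a placeholder temperature field onto a blue-to-red ramp using standard Python plotting tools; the fields and values are illustrative assumptions only.

```python
# A minimal sketch (not FOGGIE's actual rendering pipeline) of the colour scheme
# described above: gas density mapped to a purple-to-yellow ramp and gas
# temperature mapped to a blue-to-red ramp. The density and temperature fields
# below are random placeholders standing in for real simulation output.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
density = rng.lognormal(mean=0.0, sigma=1.5, size=(256, 256))  # placeholder gas density
temperature = rng.uniform(1e4, 1e7, size=(256, 256))           # placeholder temperature (K)

fig, (ax_d, ax_t) = plt.subplots(1, 2, figsize=(10, 4))

# Log-scale the density so both faint filaments and bright cores stay visible.
im_d = ax_d.imshow(np.log10(density), cmap="plasma")        # dark purple = sparse, yellow = dense
ax_d.set_title("Gas density (purple = low, yellow = high)")
fig.colorbar(im_d, ax=ax_d)

im_t = ax_t.imshow(np.log10(temperature), cmap="coolwarm")   # blue = cold, red = hot
ax_t.set_title("Gas temperature (blue = cold, red = hot)")
fig.colorbar(im_t, ax=ax_t)

plt.tight_layout()
plt.show()
```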

Gases expelled in supernovae tend to be drawn back into the galaxy along so-called large-scale structures, which scientists think connect galaxies in a cosmic web of filaments and knots made of gas and dark matter.

In the early life of a galaxy, the process of sneezing out gases in supernova winds, then slurping them back in to make more stars, is much wilder and more turbulent. As galaxies mature, they tend to calm down, though this quiescence is easily interrupted by collisions between galaxies. This type of crash is fated to happen to the Milky Way when it collides with the nearby Andromeda galaxy in about five billion years.

Because the gas in between galaxies does not emit much light, it's tough to reconstruct the bigger picture of gas exchange between galaxies and the intergalactic medium. As shown by FOGGIE and NASA in this new video, supercomputers can help to fill in the gaps, while also producing stunning visualizations of the epic cycle of star death and rebirth that drives so much of galactic evolution.


Gospel according to HPE: And lo, on the 32,768th hour did thy SSD give up the ghost – The Register

Updated Using an HPE solid-state drive? You might want to take a look at your firmware after the computer outfit announced that some of its SSDs could auto-bork after less than four years of use.

The problem affects "certain" HPE SAS Solid State Drive models once the 32,768th hour of operation has been reached and, frankly, is a bit of a disaster for admins not on top of their firmware patching game.

Failing to update to version HPD8 will, according to a blunt missive from HPE, "result in drive failure and data loss".

Do we detect the use of a signed 16-bit integer, or something similar, in a counter by one of HPE's SSD suppliers, perchance? The Register asked HPE, but we have not received a response as yet.
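To illustrate the suspicion (and it is only a suspicion; HPE has not confirmed the root cause), here is a minimal Python sketch of how a power-on-hours counter stored as a signed 16-bit integer would behave: it holds values up to 32,767 and wraps negative at exactly hour 32,768.

```python
# A minimal sketch of the *suspected* failure mode (a power-on-hours counter
# kept in a signed 16-bit integer), not HPE's confirmed root cause. A signed
# 16-bit value tops out at 32,767 and wraps negative at exactly 32,768.

def to_int16(value: int) -> int:
    """Interpret an integer as a signed 16-bit value (two's-complement wraparound)."""
    value &= 0xFFFF                      # keep only the low 16 bits
    return value - 0x10000 if value >= 0x8000 else value

power_on_hours = 32_767                  # the largest value the counter can hold
print(to_int16(power_on_hours))          # prints 32767: hour 32,767 is still fine
print(to_int16(power_on_hours + 1))      # prints -32768: hour 32,768 wraps negative

assert 32_768 == 2 ** 15                 # exactly one past the int16 maximum
```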

Once borked, users must restore from backups. "Neither the SSD nor the data can be recovered," says HPE. Oh, and those of you looking nervously at your RAIDs: "SSDs which were put into service at the same time will likely fail nearly simultaneously."

That's nice.

The potentially affected boxen include HPE ProLiant, Synergy, Apollo, JBOD D3xxx, D6xxx, D8xxx, MSA and StoreVirtual 3200.

Readers may recall that it was an Apollo-based supercomputer that spent some quality time on orbit from 2017. That computer had its own share of SSD problems, with nine of its SSDs failing while in space, but we suspect that might be more down to the environment than a magical number of uptime hours being hit.

As for HPE, while it administers a stern word to the unnamed SSD manufacturer, users of affected SKUs should take a close look at the company's advisory, check their hours and patch if needed.

"By disregarding this notification and not performing the recommended resolution," thundered HPE, "the customer accepts the risk of incurring future related errors."

So there.

Thanks to Reg reader Paul for the tip.

HPE has sent us a statement:

A supplier notified HPE on 11/15 of a manufacturer firmware defect in certain solid state drives used in select HPE server and storage products. HPE immediately began working around the clock to develop a firmware update that will fix the defect. We are currently notifying customers of the need to install this update as soon as possible. Helping our customers to remediate this issue is our highest priority.


Google’s claims of quantum supremacy: Groundbreaking, overhyped, or both? – Penn: Office of University Communications

What makes quantum computing so challenging?

Real quantum systems are subject to a lot of noise, and the hard thing about quantum engineering is making devices that preserve the probability amplitudes. The low temperatures, a few thousandths of a degree above absolute zero, are all about removing noise, but Google's device is still really noisy. What they measure is almost entirely a random signal with a small deviation, where the small deviation is coming from the quantum mechanics.

Based on Google's estimate in their Nature paper, a classical supercomputer would need 10,000 years to complete what the quantum computer did, but then IBM says it would only need a couple of days using a different method. Could you explain this discrepancy?

IBM said they have an algorithm that could be much faster than the 10,000 years Google stated, because they realized it is just possible to store the full state of 54 qubits (2^54 complex amplitudes) on the hard drives of the Oak Ridge supercomputer, the largest in the world, operating for two days.
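As a rough back-of-the-envelope check of why storing the full state vector is "just possible" on a supercomputer's storage (our own arithmetic, assuming 16 bytes per complex amplitude, not IBM's exact figures):

```python
# Back-of-the-envelope arithmetic (ours, not IBM's exact figures) for why the
# full state vector is "just possible" to park on a supercomputer's storage.
# An n-qubit state has 2**n complex amplitudes; assume 16 bytes per amplitude
# (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to hold the full state vector of n qubits."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (53, 54):
    petabytes = state_vector_bytes(n) / 1e15
    print(f"{n} qubits -> {petabytes:.0f} PB")
# Prints roughly 144 PB for 53 qubits and 288 PB for 54: enormous, but in the
# same ballpark as the disk capacity attached to the largest supercomputers.
```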

Does IBM's conjecture take away from the overall significance of what Google did?

I don't think it changes the fact that this demonstration is showing a clear separation in how hard it is to perform this calculation in a classical computer versus a quantum device. It's absolutely true that people can come up with different ways of calculating things, and the performance of our classical supercomputers and algorithms will continue to improve.

IBM is absolutely right to point out this discrepancy and also to make the larger point that the quantum supremacy demonstration is not really useful, so we should continue to wait for devices that can run quantum algorithms with known applications. It's also important for IBM to run the simulation to see if it really does take two days, because sometimes running things on supercomputers is not as obvious as in a theorist's head. Google posted the output from their quantum calculations, so we can check to see if they really are measuring the quantum effects they believe they are.

Ultimately, I think this demonstration will go down in history as a landmark achievement. Although there are other quantum devices (or materials, for that matter) that are hard to simulate classically, this is the first device matching that description that is an engineered, fully programmable quantum computer. That is an important distinction, since there is a natural blueprint for how one scales the system into larger devices that can run more complex calculations. With a quantum computer, adding just one qubit doubles the computational capacity, so things can move quickly now.

What comes next?

We're still a long way from having the types of quantum machines in many people's heads, like ones that can simulate chemical reactions or break encryption models. The best estimates for what you would need in a quantum computer to break encryption codes are around 10 million qubits with the same properties as these 54.

Google's quantum computer is in some ways analogous to ENIAC, the first general-purpose digital computer, which was built at Penn in the 1940s. ENIAC was built for a special purpose, using the best technology available at the time, but it ultimately found far wider applications and spawned the information age. It was a huge engineering feat to take something from a basic concept, in ENIAC's case vacuum tubes that can perform logic gates, and put enough of them together to calculate something that was previously inaccessible.

That is very much what Google's approach has been. They've known for several years that a device could be assembled into something of this scale, and it has really just been a matter of building it. It is important to note that there are many other ways to build quantum devices, and we do not yet know what form the useful quantum computers of the future will take.

It may be that these superconducting qubits continue to push the boundaries, but it also may be that there is some other technology, maybe yet to be discovered, that will push it forward. That is why it is so important to continue with basic research in this area. In the case of classical computers, ENIAC was completed in 1945, and the transistor was invented two years later.

Another difference between classical and quantum computing is that we do not have great ideas for what to do with machines like Google's. The last sentence of Google's paper essentially sums up the field: "We are only one creative algorithm away from valuable near-term applications." They are acknowledging two things: that it's not useful right now, and also that there's a lot of uncertainty. Tomorrow, somebody could publish an algorithm that uses a device like this for something useful, and that would be a game changer.


NVIDIA Jetson Xavier NX Debuts As The Smallest Super Computer For AI At The Edge – Forbes

On November 6th, NVIDIA introduced the latest member of the Jetson family - the Jetson Xavier NX. At a size smaller than a credit card, this module packs a punch.

Earlier this year, NVIDIA launched the Jetson Nano, the smallest yet most powerful GPU-based edge computing device. The Jetson Xavier NX, a much more advanced edge computing device, is pin-compatible with the Jetson Nano, making it possible to port AIoT applications deployed on the Nano. It also supports all major AI frameworks, including TensorFlow, PyTorch, MXNet, Caffe and others.

Jetson Xavier NX

According to NVIDIA, the Jetson Xavier NX delivers up to 14 tera operations per second (TOPS) at 10W, or 21 TOPS at 15W, running multiple neural networks in parallel and processing data from multiple high-resolution sensors simultaneously in a Nano-sized form factor. Like other Jetson products, the Xavier NX runs CUDA-X AI software, which makes it easy to optimize deep learning networks for inference at the edge. Similar to the Jetson Nano and Jetson TX2, the Xavier NX is powered by JetPack software, which comes with all the components needed to train and run inference on neural networks.

When it comes to the CPU, the Xavier NX is powered by a 6-core Carmel Arm 64-bit CPU with 6MB of L2 and 4MB of L3 cache. The device can support up to six CSI cameras over 12 MIPI CSI-2 lanes. It comes with 8GB of 128-bit LPDDR4x RAM, capable of data transfer at 51.2GB/second. The device runs an Ubuntu-based Linux operating system.
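As a quick sanity check on those numbers (our own arithmetic, assuming the 128-bit LPDDR4x interface runs at 3200 MT/s, which is consistent with the quoted bandwidth):

```python
# A quick sanity check (our own arithmetic) of the quoted 51.2 GB/s figure,
# assuming the 128-bit LPDDR4x interface runs at 3200 MT/s (LPDDR4x-3200).
BUS_WIDTH_BITS = 128
TRANSFERS_PER_SECOND = 3.2e9             # 3200 million transfers per second (assumed)

bytes_per_transfer = BUS_WIDTH_BITS / 8  # a 128-bit bus moves 16 bytes per transfer
bandwidth_gb_s = bytes_per_transfer * TRANSFERS_PER_SECOND / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")      # prints 51.2 GB/s, matching the spec

# The TOPS figures quoted above imply roughly the same efficiency at both power levels:
print(f"{14 / 10:.1f} TOPS/W at 10 W, {21 / 15:.1f} TOPS/W at 15 W")  # 1.4 TOPS/W each
```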

Priced at USD 399, the Jetson Xavier NX is expected to ship in March 2020.

In the recently announced MLPerf inference benchmarks, Jetson Xavier helped NVIDIA top the results. According to NVIDIA, Xavier ranked as the highest performer under both edge-focused scenarios (single- and multi-stream) among commercially available edge and mobile SoCs.

MLPerf Benchmark

Though there are AI accelerators available from Google, Intel, and Qualcomm, what differentiates NVIDIA is the compatibility of its software stack. With an increased number of GPU cores, the Jetson family of devices can be used not only for inference but also for performing training on the device. CUDA compatibility ensures that neural networks trained in mainstream frameworks such as TensorFlow, Caffe, PyTorch, and MXNet can be run on these devices with no conversion or optimization. But for increased performance, NVIDIA ships an SDK and tools to convert trained models with TensorRT, a software layer optimized for inference.
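A minimal sketch of that framework-to-TensorRT workflow might look like the following; the model choice, file names, and trtexec flags are illustrative assumptions, and the exact tooling depends on the JetPack/TensorRT version installed on the device.

```python
# A minimal sketch of the framework-to-TensorRT workflow described above.
# The model choice, file names, and trtexec flags are illustrative assumptions;
# the exact tooling depends on the JetPack/TensorRT version on the device.
import torch
import torchvision

# 1. Train or load a model in a mainstream framework (PyTorch here).
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# 2. Export it to ONNX, an interchange format that TensorRT can consume.
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)

# 3. On the Jetson, build an optimized engine from the ONNX file, for example
#    with the trtexec utility that ships with TensorRT (assumed invocation):
#    trtexec --onnx=resnet18.onnx --saveEngine=resnet18.engine --fp16
print("Exported resnet18.onnx; build a TensorRT engine on the device with trtexec.")
```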

The Jetson Nano and Xavier NX are among the most affordable yet powerful edge computing devices on the market. With CUDA-X AI software and support for popular deep learning frameworks, developers are adopting the Jetson family for running AI at the edge.

When it becomes available early next year, Jetson Xavier NX will enable developers and businesses to run sophisticated AI-infused applications at the edge.

NVIDIA is moving fast with its AI strategy. Having captured the majority of the AI training segment with K80, P100, and T4 GPUs, NVIDIA is now eyeing the inferencing segment with the Jetson family of products.
