Google claims to have invented a quantum computer, but IBM begs to differ – The Conversation CA

On Oct. 23, 2019, Google published a paper in the journal Nature entitled "Quantum supremacy using a programmable superconducting processor." The tech giant announced its achievement of a much-vaunted goal: quantum supremacy.

This perhaps ill-chosen term (coined by physicist John Preskill) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.

Google's benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration).

Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit, or qubit, can store information in a coherent superposition state, which can be considered to contain fractional amounts of both 0 and 1.
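As a toy illustration of this idea (plain Python, no connection to Google's hardware), a qubit's state can be modeled as two complex amplitudes whose squared magnitudes give the probabilities of reading out 0 or 1:

```python
import math

# A single qubit as two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(1 / math.sqrt(2), 0)

p0 = abs(alpha) ** 2  # probability of reading 0
p1 = abs(beta) ** 2   # probability of reading 1

assert abs(p0 + p1 - 1.0) < 1e-12  # probabilities must sum to 1
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # equal superposition: 0.50 each
```

An equal superposition like this one is what the "fractional amounts of both 0 and 1" wording describes; other amplitude choices bias the outcome toward 0 or 1.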

Sycamore uses technology developed by the superconductivity research group of physicist John Martinis at the University of California, Santa Barbara. The entire Sycamore system must be kept at cryogenic temperatures using special helium dilution refrigeration technology. Because of the immense challenge of keeping such a large system near absolute zero, it is a technological tour de force.

The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than a classical computer chip like the kind in our laptops could achieve. Just how vastly became a point of contention, and the story was not without intrigue.

An inadvertent leak of the Google group's paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when Nature prohibits discussion by the authors regarding as-yet-unpublished papers. The lapse was momentary, but long enough that The Financial Times, The Verge and other outlets picked up the story.

A well-known quantum computing blog by computer scientist Scott Aaronson contained some oblique references to the leak. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.

The story took a further controversial twist when the Google group's claims were immediately countered by IBM's quantum computing group. IBM shared a preprint posted on the arXiv (an online repository for academic papers that have yet to go through peer review) and a blog post dated Oct. 21, 2019 (note the date!).

While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qubit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days.

However, the IBM classical computation would have to be carried out on the world's fastest supercomputer, the IBM-developed Summit OLCF-4 at Oak Ridge National Laboratory in Tennessee, with clever use of secondary storage to achieve this benchmark.
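The need for secondary storage follows from simple arithmetic: a full state-vector simulation of n qubits stores 2^n complex amplitudes, and at 53 qubits that exceeds any machine's RAM. A back-of-the-envelope check:

```python
# A full state vector for n qubits holds 2**n complex amplitudes.
# At 16 bytes per double-precision complex amplitude, 53 qubits need:
n = 53
bytes_needed = (2 ** n) * 16
petabytes = bytes_needed / 1e15
print(f"{petabytes:.0f} PB")  # 144 PB, far more RAM than any supercomputer has
```

This is why IBM's approach leaned on Summit's enormous disk capacity rather than trying to hold the full state in memory.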

While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum.

For the average citizen, the mere fact that a 53-qubit device could beat the world's fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.

The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exist, including ion traps, superconducting device arrays similar to those in Google's Sycamore system and isolated electrons trapped in NV-centres in diamond.

These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.

What has lagged quite a bit behind are custom-designed algorithms (computer programs) able to run on quantum computers and take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist (Shor's algorithm for factorization, for example, which has applications in cryptography, and Grover's algorithm, which might prove useful in database search applications), the total set of quantum algorithms remains rather small.
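The character of Grover's speed-up can be illustrated with a quick query-count comparison (a sketch of the scaling only, not a quantum implementation):

```python
import math

# Grover's algorithm finds one marked item among N with roughly
# (pi/4) * sqrt(N) oracle queries, versus about N/2 expected queries
# for classical brute-force search.
for n_bits in (20, 40):
    N = 2 ** n_bits
    classical = N // 2
    grover = math.ceil((math.pi / 4) * math.sqrt(N))
    print(f"{n_bits}-bit search space: classical ~{classical:,}, Grover ~{grover:,}")
```

The quadratic gap widens rapidly: doubling the number of search bits squares the classical cost but only doubles Grover's.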

Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions ranging from confidential communications to financial transactions require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption.
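To see why factoring difficulty matters here, consider a toy sketch (the naive trial-division method below is purely illustrative; real RSA moduli are thousands of bits long and resist all known classical methods):

```python
def trial_factor(n):
    """Naive trial division: instant for toy numbers, hopeless at RSA scale."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# A four-digit semiprime falls immediately; a 2048-bit RSA modulus
# (~617 decimal digits) is far beyond any known classical attack.
print(trial_factor(3233))  # (53, 61)
```

Encryption schemes like RSA rest on exactly this asymmetry: multiplying two primes is trivial, while recovering them from the product is classically intractable at scale.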

Quantum computing could be very disruptive in this space, as Shors algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers.

The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.

More appealing for the non-espionage and non-hacker communities (in other words, the rest of us) are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers.

Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, such as quantum chemistry problems that could be relevant for drug design and various challenges in condensed matter physics including a number related to high-temperature superconductivity.

So where are we in the wonderful and wild world of quantum computation?

In recent years, we have had many convincing demonstrations that qubits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices.



ASC20 Finals to be Held in Shenzhen, Tasks Include Quantum Computing Simulation and AI Language Exam – HPCwire

BEIJING, Jan. 21, 2020 The 2020 ASC Student Supercomputer Challenge (ASC20) announced the tasks for the new season: using supercomputers to simulate quantum circuits and training AI models to take an English test. These tasks pose unprecedented challenges for the 300+ ASC teams from around the world. From April 25 to 29, 2020, the top 20 finalists will compete at SUSTech in Shenzhen, China.

ASC20 features quantum computing tasks for the first time. Teams will use QuEST (Quantum Exact Simulation Toolkit) running on supercomputers to simulate 30 qubits in two cases: quantum random circuits (random.c) and quantum fast Fourier transform circuits (GHZ_QFT.c). Quantum computing is a disruptive technology, considered to be the next generation of high performance computing. However, the R&D of quantum computers lags behind due to the unique properties of quantum systems, which makes it difficult for scientists to use real quantum computers to solve some of the most pressing problems, such as particle physics modeling, cryptography, genetic engineering, and quantum machine learning. From this perspective, the quantum computing task presented in the ASC20 challenge will hopefully inspire new algorithms and architectures in this field.
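A rough calculation shows why 30 qubits is a natural scale for such an exercise, assuming the state vector is stored as double-precision complex amplitudes:

```python
# A full state vector for 30 qubits: 2**30 complex amplitudes at
# 16 bytes each (double-precision real and imaginary parts).
n = 30
gib = (2 ** n) * 16 / 2 ** 30
print(f"{gib:.0f} GiB")  # 16 GiB: too big for most laptops, a natural
                         # size to distribute across supercomputer nodes
```

Each extra qubit doubles this figure, which is why full state-vector simulation hits a wall in the mid-40s of qubits even on the largest machines.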

The other task revealed is the Language Exam Challenge. Teams will take on the challenge of training AI models on an English cloze test dataset, vying to achieve the highest test scores. The dataset covers multiple levels of English language tests in China, including the college entrance examination, College English Test Band 4 and Band 6, and others. Teaching machines to understand human language is one of the most elusive and long-standing challenges in the field of AI. The ASC20 AI task embodies this challenge, using human-oriented problems to evaluate the performance of neural networks.

Wang Endong, ASC Challenge initiator, member of the Chinese Academy of Engineering and Chief Scientist at Inspur Group, said that through these tasks, students from all over the world get to access and learn the most cutting-edge computing technologies. ASC strives to foster supercomputing and AI talent with global vision, inspiring technical innovation.

Dr. Lu Chun, Vice President of SUSTech, host of the ASC20 Finals, commented that supercomputers are important infrastructure for scientific innovation and economic development. SUSTech is making focused efforts to develop supercomputing and host ASC20, hoping to drive the training of supercomputing talent, international exchange and cooperation, and interdisciplinary development at SUSTech.

Furthermore, during January 15-16, 2020, the ASC20 organizing committee held a competition training camp in Beijing to help student teams prepare for the ongoing competition. HPC and AI experts from the State Key Laboratory of High-end Server and Storage Technology, Inspur, Intel, NVIDIA, Mellanox, Peng Cheng Laboratory and the Institute of Acoustics of the Chinese Academy of Sciences gathered to provide on-site coaching and guidance. Previous ASC winning teams also shared their successful experiences.

About ASC

The ASC Student Supercomputer Challenge is the world's largest student supercomputer competition, sponsored and organized by the Asia Supercomputer Community in China and supported by Asian, European, and American experts and institutions. The main objectives of ASC are to encourage exchange and training of young supercomputing talent from different countries, improve supercomputing applications and R&D capacity, boost the development of supercomputing, and promote technical and industrial innovation. The annual ASC Supercomputer Challenge was first held in 2012 and has since attracted over 8,500 undergraduates from all over the world. Learn more about ASC at https://www.asc-events.org/.

Source: ASC


AMD Epyc 7742 CPUs Tapped for European Weather-Predicting Supercomputer – Tom’s Hardware

After recently pulling off a big win with the Archer 2 supercomputer installed in Edinburgh, AMD this week announced another big supercomputer contract. This time it's to build the weather-predicting BullSequana XH2000 with Atos. The supercomputer will be installed for the European Centre for Medium-Range Weather Forecasts (ECMWF), a research center and 24/7 operational weather service, in Bologna, Italy.

Atos has built multiple XH2000 supercomputers already, but the one in question will be built on the AMD Epyc 7742 processor, a 64-core, 128-thread chip with a 225 W TDP.

Exactly how many chips will be installed wasn't detailed, but we do know that each individual XH2000 42U cabinet will be able to house up to 32 compute blades. These are liquid-cooled 1U blades each capable of housing three compute nodes that can support a wide variety of platforms, including AMD's Epyc chips as well as Intel's Xeon and Nvidia's Volta V100. On the modular platform, this means that the total systems can be configured exactly to the customer's liking.
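From these figures one can estimate a cabinet's rough capacity; note that the two-sockets-per-node count below is our illustrative assumption, not an Atos specification:

```python
# Rough per-cabinet capacity from the figures above. The dual-socket
# node count is an assumption for illustration only.
blades_per_cabinet = 32
nodes_per_blade = 3
sockets_per_node = 2   # assumed
cores_per_cpu = 64     # AMD Epyc 7742

nodes = blades_per_cabinet * nodes_per_blade
cores = nodes * sockets_per_node * cores_per_cpu
print(f"{nodes} nodes, {cores:,} cores per cabinet")  # 96 nodes, 12,288 cores
```

Even with conservative assumptions, a single liquid-cooled cabinet packs on the order of ten thousand cores, which is what makes the modular XH2000 design attractive for dense HPC installations.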

"This new solution will optimize ECMWF's current workflow to enable it to deliver vastly improved numerical weather predictions," Sophie Proust, Atos Group CTO, said in a statement. "Most importantly though, this is a long-term collaboration, one in which we will work closely with ECMWF to explore new technologies in order to be prepared for next-generation applications."

Assembly of the XH2000 installation in Bologna will commence this year, with the aim of starting its service life in 2021. It will be accessible to researchers from more than 30 countries, and the goal is to run weather predictions at a higher resolution of 10 km.


Supercomputer forecasts Liverpool’s chances of winning Premier League title – Daily Star

Liverpool have forged a 13-point lead in the Premier League this season, having gone on an astounding run of form under Jurgen Klopp.

The Reds haven't lost in over a year of league football and have two games in hand over second-placed Manchester City.

Despite not having won the top flight in almost 30 years, Liverpool have a 98 per cent chance of doing so over the next few months.

The experts at FiveThirtyEight have crunched the numbers to rate the chances of Klopp's side coming out on top after missing out by a point last season, and the numbers are nothing short of conclusive.

The Merseyside outfit have been tipped to claim 100 points over the course of the season, matching City's record total from the 2017-18 campaign, having failed to win only once in their first 21 fixtures.

In terms of the rest of the table, Leicester will hang on to third as Chelsea grab the final Champions League qualifying spot.

Manchester United will finish a place above Tottenham in fifth while Mikel Arteta's Arsenal sit in the bottom half of the table in 11th.

Fivethirtyeight got their predictions from simulating the remaining fixtures 20,000 times, with Norwich, Bournemouth and Aston Villa facing the drop.
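The simulation approach can be sketched in a few lines of Python; the per-match probabilities below are invented for illustration, whereas FiveThirtyEight derives theirs per fixture from its SPI ratings:

```python
import random

random.seed(0)  # deterministic for the example

# Monte Carlo sketch: play out the remaining fixtures many times and
# average the points haul. Win/draw probabilities here are made up.
def simulate_points(n_sims, remaining, p_win, p_draw):
    totals = []
    for _ in range(n_sims):
        pts = 0
        for _ in range(remaining):
            r = random.random()
            pts += 3 if r < p_win else (1 if r < p_win + p_draw else 0)
        totals.append(pts)
    return sum(totals) / len(totals)

# Liverpool had 17 fixtures left after 21 played.
avg = simulate_points(20_000, 17, 0.80, 0.12)
# Analytic expectation: 17 * (3*0.80 + 1*0.12) = 42.84
print(f"expected points from 17 remaining games: {avg:.1f}")
```

A real model would also simulate every rival's fixtures in each of the 20,000 runs and count how often each club finishes top, which is where title and relegation percentages come from.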

Liverpool boss Klopp hasn't won a league title since his Bundesliga triumph with Borussia Dortmund back in 2012, and remains cautious over his chances this time around.


He said of United ahead of their meeting at Anfield: "As a fellow manager it is easy to see the progress they make, although others on the outside are often not so quick to recognise.

"They have recruited some exceptional players and the evolution of their squad makes them one of the strongest in Europe, I would say.

"Look at their talent from the 'keeper through to the forwards. They have world-class talent in that team and a squad full of match-winners. Big investment has resulted in a very strong group.

"Of course, looking to build something takes time. We know this ourselves. But they build with outstanding results and performances as a foundation.

"Their win this week in the FA Cup against Wolves was yet more evidence of how dangerous they are.

"United still compete in four competitions. This says a lot about their quality.

"They remain with much to fight for in the league, a semi-final second leg to come in the Carabao Cup, the fourth round of the FA Cup and the knockout stages in Europe.

"You cannot compete as they do in these competitions if they are not an outstanding football team."


Super Computer model rates Newcastle United chances of beating Everton and relegation probability – The Mag

An interesting overview of Newcastle United for the season and today's match against Everton at Goodison Park.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including this game against Carlo Ancelotti's team.

Their computer model gives Everton a 62% chance of a win, with a 23% chance of a draw and a 15% chance of a Newcastle win (percentage probabilities rounded to the nearest whole number).

When it comes to winning the title, they put the probability at 99% for Liverpool, with the rest (including Man City now) nowhere, a quite remarkable situation with four months of the season still remaining.

It is also interesting to see how the computer model now rates each club's percentage probability of relegation:

91% Norwich

67% Bournemouth

50% Villa

31% West Ham

19% Watford

12% Brighton

11% Burnley

10% Newcastle United

4% Southampton

3% Palace

1% Arsenal

So they now rate Newcastle as having a one in ten chance of going down, with Steve Bruce's team seven points clear of the relegation zone.

The bookies have Newcastle at 8/1 to be relegated after the win over Chelsea, pretty much matching the Soccer Power Index model chances of 9/1 (10%).
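The conversion behind that comparison is straightforward: fractional odds of a/b imply a probability of b/(a+b), ignoring the bookmaker's margin. A quick sketch:

```python
# Fractional odds of a/b imply probability b / (a + b),
# before accounting for the bookmaker's built-in margin.
def implied_probability(numerator, denominator=1):
    return denominator / (numerator + denominator)

print(f"{implied_probability(8):.1%}")  # 8/1 -> 11.1%
print(f"{implied_probability(9):.1%}")  # 9/1 -> 10.0%
```

So the bookies' 8/1 (about 11%) sits just above the model's 10%, which is the "pretty much matching" noted above.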


Scientists Combine AI With Biology to Create Xenobots, the World’s First ‘Living Robots’ – EcoWatch

Formosa's plastics plant is seen dominating the landscape in Point Comfort, Texas. Julie Dermansky / DeSmogBlog

Diane Wilson is seen with volunteers before their meeting across the street from Formosa's Point Comfort manufacturing plant. Julie Dermansky / DeSmogBlog

Within 10 minutes she collected an estimated 300 of the little plastic pellets. Wilson says she will save them as evidence, along with any additional material the group collects, to present to the official and yet-to-be-selected monitor.

Wilson received the waiver forms from Formosa a day after the deadline. The group planned to set out by foot on Jan. 18, which would allow them to cover more ground on their next monitoring trip. They hope to check all of the facility's 14 outfalls where nurdles could still be escaping. Any nurdles discharged on or after Jan. 15 in the area immediately surrounding the plant would be in violation of the court settlement.

Ronnie Hamrick picks up a mixture of new and legacy nurdles near Formosa's Point Comfort plant. Julie Dermansky / DeSmogBlog

Ronnie Hamrick holds a few of the countless nurdles that litter the banks of Cox Creek near Formosa's Point Comfort facility. Julie Dermansky / DeSmogBlog

Lawsuit Against Formosa's Planned Louisiana Plant

On that same afternoon, Wilson learned that conservation and community groups in Louisiana had sued the Trump administration, challenging federal environmental permits for Formosa's planned $9.4 billion plastics complex in St. James Parish.

The news made Wilson smile. "I hope they win. The best way to stop the company from polluting is not to let them build another plant," she told me.

The lawsuit was filed in federal court against the Army Corps of Engineers, accusing the Corps of failing to disclose environmental damage and public health risks and failing to adequately consider environmental damage from the proposed plastics plant. Wilson had met some of the Louisiana-based activists last year when a group of them had traveled to Point Comfort and protested with her outside Formosa's plastics plant that had begun operations in 1983. Among them was Sharon Lavigne, founder of the community group Rise St. James, who lives just over a mile and a half from the proposed plastics complex in Louisiana.

Back then, Wilson offered them encouragement in their fight. A few months after winning her own case last June, she gave them boxes of nurdles she had used in her case against Formosa. The Center for Biological Diversity, one of the environmental groups in the Louisiana lawsuit, transported the nurdles to St. James. The hope was that these plastic pellets would help environmental advocates there convince Louisiana regulators to deny Formosa's request for air permits required for building its proposed St. James plastics complex that would also produce nurdles. On Jan. 6, Formosa received those permits, but it still has a few more steps before receiving full approval for the plant.

Anne Rolfes, founder of the Louisiana Bucket Brigade, holding up a bag of nurdles discharged from Formosa's Point Comfort, Texas plant, at a protest against the company's proposed St. James plant in Baton Rouge, Louisiana, on Dec. 10, 2019. Julie Dermansky / DeSmogBlog

Construction underway to expand Formosa's Point Comfort plant. Julie Dermansky / DeSmogBlog

Silhouette of Formosa's Point Comfort Plant looming over the rural landscape. Julie Dermansky / DeSmogBlog

From the Gulf Coast to Europe

Just a day after Wilson found apparently new nurdles in Point Comfort, the Plastic Soup Foundation, an advocacy group based in Amsterdam, took legal steps to stop plastic pellet pollution in Europe. On behalf of the group, environmental lawyers submitted an enforcement request to a Dutch environmental protection agency, which is responsible for regulating the cleanup of nurdles polluting waterways in the Netherlands.

The foundation is the first organization in Europe to take legal steps to stop plastic pellet pollution. It cites in its enforcement request to regulators Wilson's victory in obtaining a "zero discharge" promise from Formosa and is seeking a similar result against Ducor Petrochemicals, the Rotterdam plastic producer. Its goal is to prod regulators into forcing Ducor to remove tens of millions of plastic pellets from the banks immediately surrounding its petrochemical plant.

Detail of a warning sign near the Point Comfort Formosa plant. The waterways near the plant are polluted by numerous industrial facilities in the area. Julie Dermansky / DeSmogBlog

Nurdles on Cox Creek's bank on Jan. 15. Wilson hopes her and her colleagues' work of the past four years will help prevent the building of more plastics plants, including the proposed Formosa plant in St. James Parish. Julie Dermansky / DeSmogBlog

A sign noting the entrance to the Formosa Wetlands Walkway at Port Lavaca Beach. The San Antonio Estuary Waterkeeper describes the messaging as an example of greenwashing. Julie Dermansky / DeSmogBlog



New, detailed pictures of planets, moons, and comets are neither photos nor animations; they're made using data from 50 years of NASA missions -…

Caption: A snapshot of the American Museum of Natural History's Worlds Beyond Earth immersive theater experience. Source: D. Finnin / AMNH

For many years, there were only two ways for astronomers to see distant worlds in our solar system: Either they used a powerful telescope, or they sent spacecraft into the inky blackness to get up close and personal.

But a third option is emerging to offer unprecedented detail and accuracy: data visualization.

At the American Museum of Natural History, a new planetarium show reveals images of Saturn's moon Titan, the comet 67P, and the lunar surface, all generated using data collected during 50 years of space missions.

"We're not making anything up here," Carter Emmart, director of astrovisualization for the show, said at a press conference. "The height, color, and shapes we see come from actual measurements. You get to see these beautiful objects as they actually are, to the best of our abilities."

Carter and his team relied on data gathered by robotic probes, telescopes, and supercomputer simulations from NASA, the European Space Agency (ESA), and the Japan Aerospace Exploration Agency since the 1970s.

"We're taking numbers and turning that into a picture," he told Business Insider. "We've created a 3D world that lives in the computer and can be shown on screen."

Take a look at some of the most impressive visuals from the show, Worlds Beyond Earth, which opened Tuesday.

Other planetariums, like Chicago's Adler Planetarium, also use data visualization, relying on research about planetary orbits, surface maps, and the locations of spacecraft to create accurate imagery. But the Hayden Planetarium in New York displays the most comprehensive color palette.

In 1971, Falcon carried astronauts David Scott and Jim Irwin, along with the first Lunar Roving Vehicle, down to the moon. That so-called moon buggy helped Scott and Irwin explore a much wider swath of the lunar surface.

"Spacecraft are extensions of ourselves: our eyes, ears, and nose," the show's curator, Denton Ebel, told Business Insider.

The red planet was once akin to Earth, with plentiful water and active volcanoes. But the planet's core cooled just 500 million years after Mars' formation. That cooling, according to museum scientists, caused the decay of Mars' magnetic field, which protected the planet from solar winds. Without it, Mars lost its atmosphere.

According to Ebel, Magellan revealed that Venus was also once like Earth but now has a surface hot enough to melt lead.

"Venus is a hellscape, really," Ebel said, because it, too, lacks a magnetic field. Without that protection, solar winds stripped away any water.

Cassini discovered that Saturn's moon Enceladus sprays plumes of water into space. That told scientists that the moon has an ocean of liquid water under its icy surface.

Titan boasts a thick atmosphere and weather. But it's too cold to hold liquid water; its lakes and rain are made of liquid methane.

Saturn's rings bubble with moonlets: house-sized baby moons that form as space dust coalesces. Ebel said the process by which these moonlets are born parallels how all eight planets in our solar system formed 4.5 billion years ago.

Planets formed within this disk (which resembled Saturn's rings), as did moons, comets, and asteroids. The eight planets (along with the dwarf planet Pluto) then grew as they incorporated more material.

In 1979, NASA's Voyager 1 mission took snapshots of Io, revealing the moon to be the most volcanically active object in the solar system. Some of Io's volcanoes spew lava dozens of miles into the sky; its surface is peppered with lakes of lava.

Ebel's group used data from NASA's Galileo mission to glean insight into Jupiter's magnetic field.

The largest planet in our solar system, Jupiter has hot, liquid, metallic hydrogen churning around its rocky core. That generates a powerful magnetic field.

One of Jupiter's moons, Ganymede, also has such a field.

The comet, called 67P/Churyumov-Gerasimenko, is only a few kilometers across, but it contains frozen water, dust, and amino acids, the basic chemical building blocks of life.

When comets collided with planets and moons earlier in the solar system's history, some delivered life-giving chemicals like phosphorus.

It took Rosetta 10 years to reach 67P, enter its orbit, and send a lander down to the surface.

He added that production work on the Worlds Beyond Earth show lasted a full year so that the team could make "damn sure that every bit of space that we're putting up there on the screen is accurate to the best of our knowledge."


What is supercomputer? – Definition from WhatIs.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of about $60 million today. It could handle three million floating point operations per second (flops).
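To put that figure in perspective, we can compare the CDC 6600 against the Sunway TaihuLight's Linpack mark of 93.01 petaflops:

```python
# Scale of progress: CDC 6600 (1964) vs. Sunway TaihuLight (2016).
cdc_6600_flops = 3e6          # ~3 megaflops
taihulight_flops = 93.01e15   # 93.01 petaflops (Linpack Rmax)
ratio = taihulight_flops / cdc_6600_flops
print(f"~{ratio:.1e}x faster")  # ~3.1e+10x: roughly thirty billion times
```

Fifty-two years of progress thus amounts to roughly a ten-billion-fold increase in raw floating-point rate.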

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at the time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.



The accompanying table ranked the top systems by peak speed (Rmax) and location:

Sunway TaihuLight: Wuxi, China
Guangzhou, China
Oak Ridge, U.S.
Livermore, U.S.
Fujitsu K computer: Kobe, Japan
Tianjin, China
Oak Ridge, U.S.
Los Alamos, U.S.


In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.



What is supercomputer? – Definition

Definition: A supercomputer is the fastest class of computer, able to process a significant amount of data very quickly. The computing performance of a supercomputer is very high compared to that of a general-purpose computer, and is measured in FLOPS (floating-point operations per second) rather than MIPS. A supercomputer can contain tens of thousands of processors performing billions or trillions of calculations per second; the fastest can deliver nearly a hundred quadrillion FLOPS.

They have evolved from grid computing to cluster systems of massively parallel computing. Cluster computing means the machine uses multiple processors in one system, instead of arrays of separate computers in a network.

Supercomputers are massive in size, occupying anywhere from a few feet to hundreds of feet of space. They are also very expensive, with prices ranging from about $200,000 to over $100 million.

Supercomputers were introduced in the 1960s; early examples include the Atlas at the University of Manchester and the machines of Seymour Cray. Cray's CDC 6600 is often regarded as the first supercomputer, and his earlier CDC 1604 was among the first computers to replace vacuum tubes with transistors.

One long-time holder of the world number 1 ranking was the Sunway TaihuLight, installed in the city of Wuxi in China. Developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC), it achieved a High-Performance Linpack (HPL) mark of 93.01 petaflops.

They can support more than a hundred users at a time.

These machines can handle calculations on a scale far beyond human capability; that is, calculations too extensive for any human to solve.

Many individuals can access supercomputers at the same time.

These are the most expensive computers that can ever be made.

They have more than one CPU (Central Processing Unit), each of which can interpret instructions and execute arithmetic and logical operations.

Supercomputers support extremely high CPU computation speeds.

They can operate on whole lists (vectors) of numbers at once, rather than on single pairs of numbers.

They were used initially in applications related to national security, nuclear weapon design, and cryptography. But nowadays they are also employed by the aerospace, automotive and petroleum industries.

Supercomputers are not used for everyday tasks, because their power is far beyond what such tasks require.

Supercomputers handle applications that require real-time processing. Their uses include the following:

They're used for scientific simulations and research such as weather forecasting, meteorology, nuclear energy research, physics, and chemistry, as well as for extremely complex animated graphics. They are also used to study new diseases and to predict illness progression and treatment.

The military uses supercomputers to test new aircraft, tanks, and weapons, and to model their effects on soldiers and the course of battle. These machines are also used to encrypt data.

Scientists use them to test the impact of nuclear weapon detonation.

Hollywood uses supercomputers for the creation of animations.

In entertainment, supercomputers are used for online gaming.

Supercomputers help in stabilizing the game performance when a lot of users are playing the game.

Besides raw speed, one big difference between a supercomputer and a mainframe is that a mainframe serves many people at once or runs several programs concurrently, whereas a supercomputer funnels its power into executing a few programs at high speeds. Mainframes are mostly used for large data storage and manipulation tasks, not for computationally intensive tasks.



The race to exascale is on while Canada watches from the sidelines – CBC.ca

This column is an opinion by Kris Rowe, a computational scientist working to get science and engineering applications ready for the next generation of exascale supercomputers. Born and educated in Canada, he has worked at major Canadian and American universities, as well as two U.S. national laboratories. For more information about CBC's Opinion section, please see the FAQ.

Some of the brightest minds from around the globe have been quietly working on technology that promises to turn the world on its head, but so far Canada has been watching from the sidelines.

While it is unlikely that people will be huddled around their televisions to watch the power to these incredible machines being switched on, the scientific discoveries that follow the debut of exascale computers will change our daily lives in unimaginable ways.

So what exactly is an exascale computer?

It's a supercomputer capable of performing more than a billion billion calculations per second, or 1 exaflops.

"Exa"is the metric system prefix for such grandiose numbers, and "flops"is an abbreviation of "floating-point operations per second."

For comparison, my laptop computer is capable of about 124 gigaflops, or 124 billion calculations per second, which sounds fast.

According to the TOP500 list, today's fastest supercomputer is Oak Ridge National Laboratory's Summit, which tops out at a measured 148.6 petaflops, about one million times faster than my laptop.

However, Summit is a mere welterweight relative to an exascale supercomputer, which would be nearly seven times faster.

To put that speed in perspective, if you took all the calculations a modern laptop can perform in a single second, and instead did the arithmetic non-stop with pencil and paper at a rate of one calculation per second, it would take roughly 3,932 years to finish.

In a single second, a supercomputer capable of 1 exaflops could do a series of calculations that would take about 31.7 billion years by hand.
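Those back-of-the-envelope figures are easy to check. A quick sketch, using a 365-day year:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

def years_by_hand(calculations):
    """Years needed to redo that many calculations at one per second, non-stop."""
    return calculations / SECONDS_PER_YEAR

laptop = years_by_hand(124e9)    # one second of a 124-gigaflops laptop
exascale = years_by_hand(1e18)   # one second of a 1-exaflops machine
print(round(laptop))             # roughly 3,932 years
print(round(exascale / 1e9, 1))  # roughly 31.7 billion years
```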

While colloquially a supercomputer is referred to as a single entity, it is actually composed of thousands of servers or compute nodes connected by a dedicated high-speed network.

You might assume that an exascale supercomputer could be built simply by using seven times more compute nodes than today's fastest supercomputer; however, the cost, power consumption, and other constraints make this approach nearly impossible.

Fortunately, computer scientists have an ace up their sleeves, known as a GPU accelerator.

Graphics processing units (GPUs) are the professional-grade cousins of the graphics card in your personal computer and are capable of performing arithmetic at a rate of several teraflops (i.e. really, really fast). A feasible route to exascale can be realized by making supercomputers not only larger but also denser.

Sporting six extremely powerful GPUs per compute node, Argonne National Laboratory's Aurora will follow this approach. Scheduled to come online in 2021, Aurora will be the first exascale supercomputer in North America, although the title of first in the world may go to China's Tianhe-3, which is slated to power up sometime in 2020.

Several other machines in the U.S., China, Europe and Japan are scheduled to be brought to life soon after Aurora, using similar architectures.

What exactly does one do with all that computing power? Change the world, of course.

Exascale supercomputers will allow researchers to tackle problems which were impossible to simulate using the previous generation of machines, due to the massive amounts of data and calculations involved.

Small modular nuclear reactor (SMR) design, wind farm optimization and cancer drug discovery are just a few of the applications that are priorities of the U.S. Department of Energy (DOE) Exascale Computing Project. The outcomes of this project will have a broad impact and promise to fundamentally change society, both in the U.S. and abroad.

So why isn't Canada building one?

One reason is that exascale supercomputers come with a pretty steep sticker price. The contracts for the American machines are worth more than $500 million US each. On the other side of the Atlantic, the EU signed off on €1 billion for its own exascale supercomputer.

While the U.S. and Europe have much larger populations, the annual per capita spending on large-scale computing projects demonstrates how much Canada is lagging in terms of investment. The U.S. DOE alone will spend close to $1 billion US on its national supercomputing facilities next year, a number which does not take into account spending by other federal organizations, such as the U.S. National Science Foundation.

In comparison, Compute Canada, the national advanced research computing consortium providing supercomputing infrastructure to Canadian researchers, has a budget that is closer to $114 million Cdn.

In its 2018 budget submission, Compute Canada clearly lays out what it will take to bring our country closer to the forefront of supercomputing on the world stage. Included is the need to increase annual national spending on advanced research computing infrastructure to an estimated $151 million, a 32 per cent increase from where it is now. Given the cost of the American exascale supercomputers, this is likely a conservative estimate.

However, the need for an exascale supercomputer in Canada does not seem to be on the radar of the decision-makers in the federal and provincial governments.

Hanlon's razor would suggest that this is not due to some sinister plot by politicians to punish the nation's computer geeks; rather, our politicians likely don't fully understand the benefits of investing in the technology.

For example, the recent announcement by the premiers of Ontario, Saskatchewan and New Brunswick to collaborate on aggressively developing Canada's small modular reactor (SMR) technology failed to mention the need for advanced computing resources. In contrast, corresponding U.S. DOE projects explicitly state that they will require exascale computing resources to meet their objectives.

Why should the Canadian government and you care?

For the less altruistic, a benefit of supercomputing research is "trickle-down electronics." The quiet but persistent legacy of the space race is technology like the microwave oven found in most kitchens. Similarly, the technological advances necessary to achieve exascale computing will also lead to lower-cost and more energy-efficient laptops, improved high-definition computer graphics, and prevalent AI in our connected devices.

But more importantly for Canada, how we invest our federal dollars says a lot about what we value as a nation.

It's a statement about how we value the sciences. Do we want to attract world-class researchers to our universities? Do we want Canada to be a leader in climate research, renewable energy and medical advances?

It's also a statement about how much we value Canadian businesses and innovation.

The user-facility model of the U.S. DOE provides businesses with access to singular resources, which gives American companies a competitive advantage in the world marketplace. Compute Canada has a similar mandate, and given the large number of startup companies and emerging industries in Canada, we leave our economy on an unequal footing without significant investments in advanced computing infrastructure.

Ultimately, supercomputers are apolitical: they can just as easily be used for oil exploration as wind farming. Their benefits can be applied across the economy and throughout society to develop new products and solve problems.

At a time when Canada seems so divided, building an exascale computer is the kind of project we need to bring the country together.

[Note:The opinions expressed are those of the author and do not necessarily represent the official policy or position of Argonne National Laboratory, the U.S. Department of Energyor the U.S. government.]


SC19 Invited Talk: HPC Solutions for Geoscience Application on the Sunway Supercomputer – insideHPC

Lin Gan from Tsinghua University

In this video fromSC19, Lin Gan from Tsinghua University presents: HPC Solutions for Geoscience Application on the Sunway Supercomputer.

In recent years, many complex and challenging numerical problems, in areas such as climate modeling and earthquake simulation, have been efficiently solved on Sunway TaihuLight, scaling successfully to over 10 million cores with impressive performance. To deal carefully with issues such as computing efficiency, data locality, and data movement, novel optimization techniques at different levels are proposed, including some that fit well with the unique architectural features of the system and significantly improve performance. While the architectures of current- and next-generation supercomputers have already diverged, it is important to weigh the pros and cons of what we already have, to mitigate the bottlenecks, and to maximize the advantages. This talk summarizes the most essential HPC solutions that contributed to the performance gains in our efforts to port geoscience applications, along with discussion, rethinking, and potential improvements we could undertake.

Lin Gan is a postdoctoral research fellow in the Department of Computer Science and Technology at Tsinghua University and the assistant director of the National Supercomputing Center in Wuxi. His research interests include high-performance computing solutions to geoscience applications based on hybrid platforms such as CPUs, FPGAs, and GPUs. Gan received a PhD in computer science from Tsinghua University. He has received the 2016 ACM Gordon Bell Prize, Tsinghua-Inspur Computational Geosciences Youth Talent Award, and the FPL Significant Paper award. He is a member of IEEE.



Claiming dividends? ‘Big Brother’ tax computer is watching you – The Times

HMRC's Connect system allows it to match different bits of information about people's affairs.

The taxman is cracking down on self-assessment taxpayers who claim dividends on shares or income from rental properties.

HM Revenue & Customs is using data from its Connect supercomputer programme, which allows it to match different bits of information about people's affairs.

Saffery Champness, one of Britain's biggest chartered accountancy firms, reports that more of its clients are receiving warning letters from HMRC after filling in their tax returns last year.

The letters often question whether information people have put down is correct, asking them to double check.

The accountancy firm says clients have been accused of claiming entrepreneurs' relief when not entitled to it; receiving dividends from shares or foreign income that has not been declared for capital gains tax purposes; failing to declare full



15 Vintage Computer Ads That Show How Far We've Come – Small Business Trends

Computers have gone through some major transformations over the last handful of decades. Old computer advertisements can really showcase some of these changes, from the 1950s when huge computers were just for industrial and business users to the days when they became ubiquitous for the average consumer.

Looking at old computer ads makes you realize how far technology has come in recent decades. Take a look at our collection for a trip down memory lane.

Back in the 1950s, computers didn't really even resemble what we think of as computers today. But this vintage ad from Ford Instrument Co. shows how massive and different these devices used to be.

Back in the 1960s, computer ads were less about showing the actual computers and more about exploring what they could do. They were also pretty much aimed at businesses, since most people didn't see a need for computers in their homes just yet. This ad from IBM explored the idea of giving small businesses access to major scientific advances, like those used in the Apollo missions.

This computer advertisement from the 1970s was aimed at business users. It touted the Nixdorf computer as a way to make a business more efficient. With a simple visual and a ton of copy, you can really see how large and completely different business computers used to look.

Today, small businesses can't get by without a computer. So it seems strange that back in the 1980s, companies had to actually be convinced that having a computer could benefit them. But that's exactly the purpose of this IBM ad. The commercial is aimed squarely at small businesses and shows how many different types of industries could benefit from adding just one device.

"I may never use a typewriter again!" That's just one of the interesting lines that dates this 1980s ad for Radio Shack's line of computers. It also touts how affordable the models are (starting at nearly $3,000) and how small they are (even though they're very boxy by today's standards).

This 1980s computer advertisement for the Atari 520st includes some incredible graphics and demonstrates the power of an old school gaming computer. In addition to showcasing the ability to play games, this commercial also points out how such a device can help users learn code.

Can the whole family benefit from having a personal computer at home? Now, this seems like an obvious yes. But back in the 1980s, people had to be convinced about the versatility of such a device. That's the basic premise behind this day-in-the-life ad for the Commodore 64. It shows the computer being used by a dad to check the stock market, a mom to pay bills, and kids to learn letters and numbers.

This ad also shows how the 1980s were the first decade when people started to really think about having a computer at home on a wide scale. To demonstrate how powerful this investment could be, this Texas Instruments ad decided to translate the power of a computer into the needs of different individuals.

This 1980s Timex ad clearly shows an image of the actual computer being sold. This one happens to be a fairly small and bare bones model that was really just used for typing. And the low price reflects that.

If you want to see a very 1980s commercial, it doesn't get better than this one for the Sinclair ZX Spectrum. Aside from the very timely visuals, the ad is also noteworthy for the features it plays up, like the computer's keyboard.

This Apple commercial that aired during the 1984 Super Bowl is perhaps one of the most iconic tech ads of its generation. The ad introduced the Macintosh computer but didn't actually feature the device. Instead, it caught people's attention by referencing the book 1984.

By the 1990s, computers were starting to improve on their earlier models. This ad announces a new version of the Commodore 64, with 128 KB of memory instead of 64 KB; the company called it the Commodore 128.

In addition to actual technological advances, computers were also starting to gain traction with new audiences by the 1990s. Not only had they become popular with individuals instead of just businesses, but now they were also becoming popular with young people instead of just adults. This collection of ads from Dell showcases how the company was marketing desktop computers for students.

And the progression continued even further. This Apple ad shows how the company marketed their product as helpful for kids and their education.

And finally, computers were officially aiming to reach people of all ages with this Rugrats themed Gateway commercial from the 1990s. The ad showcased how software programs for kids could help them learn and have fun.



Leicester City’s chances in title race with Liverpool and Man City predicted by supercomputer – Leicestershire Live

Leicester City are set to miss out on the Premier League title, but will secure a Champions League place, according to bookie SportNation.bet's latest Super Computer.

The table features two different markets - the title odds for the top 10 and relegation odds for the bottom half, striving to be the most accurate predictor of the final standings.

The Foxes are tipped to end the season in third place in the league, behind outright favourites for the title Liverpool and Manchester City.

Leicester are a 40/1 shot for the title, meaning they find themselves below Pep Guardiola's side (6/1), despite still holding a point advantage over the defending champions.

It seems there's no stopping Liverpool though, who are 1/8 on to end their long wait for a league title this season; they currently hold a 10-point advantage over Brendan Rodgers' side ahead of their meeting on Boxing Day.

Elsewhere, Chelsea just beat Tottenham to fourth, having widened the gap over their North London rivals to four points with a victory on Sunday.

Manchester United and Wolves are tipped for sixth and seventh - a repeat of their final positions from last season, while Arsenal and Everton - who are under new management - are set for an almighty climb from 11th and 15th to eighth and ninth respectively, with Sheffield United rounding off the top half.

At the other end, it doesn't look good for Norwich, who are 1/12 for a return to the Championship, while Watford, despite picking up their first home win of the season last weekend, are still odds-on (3/11) for relegation.

Aston Villa (5/7), who will be missing key man John McGinn for a lengthy period due to a fractured ankle, finish off the teams destined for the drop, with Southampton staying in the Premier League by the skin of their teeth once again.

A SportNation.bet spokesman said: "The odds suggest that Liverpool already have the league wrapped up and it's not even Christmas, while Leicester fans will be delighted with third place and a return to Champions League football, and Frank Lampard will be pleased with a top-four finish in his first season in charge at Stamford Bridge.

"Norwich and Watford are both six points from safety and the market predicts they will go down without a whimper, while Villa, who are also currently in the relegation zone, will join them back in the Championship."

1) Liverpool: 1/8(title odds)

2) Man City: 6/1

3) Leicester: 40/1

4) Chelsea: 450/1

5) Tottenham: 500/1

6) Man Utd: 650/1

7) Wolves: 900/1

8) Arsenal: 1000/1

9) Everton: 2000/1

10) Sheff Utd: 2500/1


11) Burnley: 65/4(relegation odds)

12) Palace: 11/1

13) Newcastle: 17/2

14) Brighton: 8/1

15) Bournemouth: 5/1

16) West Ham: 4/1

17) Southampton: 3/1

18) Aston Villa: 5/7

19) Watford: 3/11

20) Norwich: 1/12
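For readers wanting to turn the fractional odds above into percentages: odds of a/b imply a probability of b/(a+b), before the bookmaker's margin is accounted for. A small sketch (the function name is ours):

```python
from fractions import Fraction

def implied_probability(odds):
    """Convert fractional odds like '1/8' to the implied probability."""
    a, b = (int(x) for x in odds.split("/"))
    return Fraction(b, a + b)

# Liverpool at 1/8 implies roughly an 89% title chance...
print(float(implied_probability("1/8")))
# ...while Leicester at 40/1 implies under 2.5%.
print(float(implied_probability("40/1")))
```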


Super Computer model rates Newcastle relegation probability and chances of beating Man Utd – The Mag

Interesting overview of Newcastle United for the season and Thursday's game at Old Trafford.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, which is a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including the game at Old Trafford.

Their computer model gives Man Utd a 65% chance of a home win, with 22% for a draw and a 13% possibility of a Newcastle win (percentage probabilities rounded up/down to the nearest whole number).

When it comes to winning the title, they have the probability at 82% Liverpool, 17% Man City and the rest nowhere.

Also interesting to see how the computer model now rates the percentage probability chances of relegation:

82% Norwich

54% Watford

48% Villa

28% Southampton

24% West Ham

18% Bournemouth

15% Brighton

11% Newcastle United

7% Palace

5% Burnley

5% Everton

3% Arsenal

1% Sheff Utd

So they now rate Newcastle at only about a one in nine chance of going down.


Super Computer model rates Newcastle United relegation probability and chances of beating Everton – The Mag

Interesting overview of Newcastle United for the season and today's game at St James' Park.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, which is a rating mechanism for football teams that takes account of over half a million matches and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including this game at SJP against Everton.

Their computer model gives Everton a 41% chance of an away win, with 28% for a draw and a 31% possibility of a Newcastle win (percentage probabilities rounded up/down to the nearest whole number).

When it comes to winning the title, they have the probability at 95% Liverpool, 5% Man City and the rest nowhere.

Also interesting to see how the computer model now rates the percentage probability chances of relegation:

85% Norwich

59% Watford

45% Villa

32% West Ham

20% Bournemouth

15% Brighton

14% Southampton

12% Newcastle United

7% Burnley

4% Palace

4% Everton

2% Arsenal

1% Sheff Utd

So they now rate Newcastle at only about a one in eight chance of going down, though still more likely to be relegated than Burnley, Arsenal and Everton, despite that trio being below NUFC in the Premier League table at the halfway point.


Will there ever be a supercomputer that can think like HAL? – Macquarie University

Whether or not HAL will one day refuse to "open the pod bay doors" IRL will depend on the research goals the field of artificial intelligence (AI) sets for itself.

Supercomputer: HAL 9000 is a fictional artificial intelligence character and the main antagonist in the Space Odyssey series.

Currently, the field is not prioritising the goal of developing a flexible, general-purpose intelligent system like HAL. Instead, most efforts are focused on building specialised AI systems that perform well, often much better than humans, in highly restricted domains.

These are the AI systems that power Google's search, Facebook's news feed and Netflix's recommendation engine; answer phones at call centres; translate natural languages from one to another; and even provide medical diagnoses. So the portrait of AI that Stanley Kubrick developed in his film 2001: A Space Odyssey, while appropriate for the time (after all, Kubrick's film came out in 1968), appears pretty outdated in light of current developments.

That is not to say a superhuman general intelligence like HAL could not be built in principle, although what exactly it would take remains an open scientific question. But from a practical perspective, it seems highly unlikely that anything like HAL will be built in the near future either by academic researchers or industry.


Does this mean that artificial intelligence and other related fields like machine learning and computational neuroscience have nothing interesting to offer? Far from it. It's just that the goals have changed.

Artificial intelligence these days is more closely connected to the rapidly growing fields of machine learning, neural networks, and computational neuroscience. Major tech companies like Google and Facebook, among many others, have been investing heavily in these areas in recent years, and large in-house AI research groups are quickly becoming the norm. A perfect example of this is Google Brain.

So AI isn't going anywhere; it's just being transformed and incorporated into quite literally everything, from internet search to self-driving cars to "intelligent" appliances. The future of AI is probably more accurately depicted by a toaster that knows when you want to eat breakfast in the morning than anything resembling a super intelligence like HAL.

Virtually everything in the popular media today about AI concerns deep learning. These algorithms work by using statistics to find patterns in data, and they have revolutionised the field of AI in recent years. Despite their immense power and ability to match, and in many cases exceed, human performance on image categorisation and other tasks, there are some things at which humans still excel.

For instance, deep convolutional neural networks must be trained on massive amounts of data, far more than humans require to exhibit comparable performance. Moreover, network training must be supervised in the sense that when the network is learning, each output the network produces for a given input is compared against a stored version of the correct output. The difference between actual and ideal provides an error signal to improve network performance.
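The supervised error signal described above can be shown in a toy example: a one-weight "network" compares each output against the stored correct output and uses the difference to adjust itself. This is an illustrative sketch only, not the training loop of any particular deep-learning framework:

```python
def train_weight(inputs, targets, lr=0.1, epochs=200):
    """Fit a single weight w so that w * x approximates each target."""
    w = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = w * x            # network output for this input
            error = y - t        # difference from the stored correct output
            w -= lr * error * x  # gradient step on the squared error
    return w

# Learn y = 2x from a handful of labelled (supervised) examples.
w = train_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))  # converges to 2.0
```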

Incredible brainpower: AI software has been designed with cognitive abilities similar to those of the human brain, explain Crossley and Kaplan.

And yet humans can learn to do a remarkable variety of things like visually categorise objects and drive cars based on relatively small data sets without explicit supervision. By comparison, a deep neural network might require a training set of millions of images or tens of millions of driving trials, respectively.

The critical question is, how do we do this? Our brains are powerful neural networks shaped by millions of years of evolution to do unsupervised, or better, self-supervised, learning sometimes on the basis of limited data. This is where AI will be informed by ongoing work in the cognitive science and neuroscience of learning.

Cognitive science is the study of how the brain gives rise to the many facets of the mind, including learning, memory, attention, decision making, skilled action, emotion, etc. Cognitive science is therefore inherently interdisciplinary. It draws from biology, neuroscience, philosophy, physics, psychology, among others.

In particular, cognitive science has a long and intimate relationship with computer science and artificial intelligence. The influence between these two fields is bidirectional. AI influences cognitive science by providing new analysis methods and computational frameworks with which neural and psychological phenomena can be crisply described.


Cognitive science is at the heart of AI in the sense that the very concept of "intelligence" is fundamentally entangled with comparisons to human behaviour, but there are much more tangible instances of cognitive science influencing AI. For instance, the earliest artificial neural nets were created in an attempt to mimic the processing methods of the human brain.

More recent and further advanced artificial neural nets (e.g., deep neural nets) are sometimes deeply grounded in contemporary neuroscience. For instance, the architecture of artificial deep convolutional neural nets (the current state of the art in image classification) is heavily inspired by the architecture of the human visual system.

The spirit of appealing to how the brain does things to improve AI systems remains prevalent in current AI research (e.g., complementary learning systems, deep reinforcement learning, training protocols inspired by "memory replay" in the hippocampus), and it is common for modern AI research papers to include a section on biological plausibility, that is, how closely the workings of the computational system match what is known about how the brain performs similar tasks.

This all raises an interesting question about the frontiers of cognitive science and AI. The reciprocity between cognitive science and artificial intelligence can be seen even at the final frontier of each discipline. In particular, will cognitive science ever fully understand how the brain implements human cognition, and the corresponding general human intelligence?

And back to our original question about HAL: Will artificial intelligence ever match or surpass human intelligence on every dimension?

At the moment, all we can do is speculate, but a few things seem unambiguously true. The continued pursuit of how the brain implements the mind will yield ever richer computational principles that can inspire novel artificial intelligence approaches. Similarly, ongoing progress in AI will continue to inspire new frameworks for thinking about the wealth of data in cognitive science.

Dr Matthew Crossley is a researcher in the Department of Cognitive Science at Macquarie University working on category and motor learning. Dr David Kaplan is a researcher in the Department of Cognitive Science at Macquarie University working on motor learning and the foundations of cognitive science.

Understanding cognition, which includes processes such as attention, perception, memory, reading and language, is one of the greatest scientific challenges of our time. The new Bachelor of Brain and Cognitive Sciences degree, the only one of its kind in Australia, provides a strong foundation in the rapidly growing fields of cognitive science, neuroscience and computation.


Will there ever be a supercomputer that can think like HAL? - Macquarie University

AWS wants to reinvent the supercomputer, starting with the network – ZDNet

Image: Asha Barbaschow

Amazon Web Services wants to reinvent high performance computing (HPC), and according to VP of AWS global infrastructure Peter DeSantis, it all starts with the network.

Speaking at his Monday Night Live keynote, DeSantis said AWS has been working for the last decade to make supercomputing in the cloud a possibility.

"Over the past year we've seen this goal become reality," he said.

According to DeSantis, there's no precise definition of an HPC workload, but he said the one constant is that it is way too big to fit on a single server.

"What really differentiates HPC workloads is the need for high performance networking so those servers can work together to solve problems," he said, talking on the eve of AWS re:Invent about what the focus of the cloud giant's annual Las Vegas get together will be.

"Do I care about HPC? I hope so, because HPC impacts literally every aspect of our lives the big, hard problems in science and engineering."

See also: How higher-ed researchers leverage supercomputers in the fight for funding (TechRepublic)

DeSantis explained that typically in supercomputing, each server works out a portion of the problem, and then all the servers share the results with each other.

"This information exchange allows the servers to continue doing their work," he said. "The need for tight coordination puts significant pressure on the network."

To scale these HPC workloads effectively, DeSantis said a high-performance, low-latency network is required.

"If you look really closely at a modern supercomputer, it really is a cluster of servers with a purpose-built, dedicated network network provides specialised capabilities to help run HPC applications efficiently," he said.

In touting the cloud as the best place to run HPC workloads, DeSantis said the "other" problem with physical supercomputers is they're custom built, which he said means they're expensive and take years to procure and stand up.

"One of the benefits of cloud computing is elasticity," he continued.

Another problem AWS wants to fix with supercomputing in the cloud is the democratisation element, with DeSantis saying one issue is getting access to a supercomputer.

"Usually only the high-value applications have access to the supercomputer," he said.

"With more access to low-cost supercomputing we could have safer cars we could have more accurate forecasting, we could have better treatment for diseases, and we can unleash innovation by giving everybody [access].

"If we want to reinvent high performance computing, we have to reinvent supercomputers."

AWS also wants to reinvent machine learning infrastructure.

"Machine learning is quickly becoming an integral part of every application," DeSantis said.

However, the optimal infrastructure for the two components of machine learning -- training and inference -- is very different.

"A good machine learning dataset is big, and they're getting bigger and training involves doing multiple passes through your training data," De Santis said.

"We're excited in investments we've made in HPC and it's helping us with machine learning."

Earlier on Monday, Formula 1 announced it had partnered with AWS to carry out simulations that it says have resulted in the car design for the 2021 racing season, touting the completion of a Computational Fluid Dynamics (CFD) project that simulates the aerodynamics of cars while racing.

The CFD project used over 1,150 compute cores to run detailed simulations comprising over 550 million data points that model the impact of one car's aerodynamic wake on another.

Asha Barbaschow travelled to re:Invent as a guest of AWS.


Idaho’s Third Supercomputer Coming to Collaborative Computing Center – HPCwire

IDAHO FALLS, Idaho, Dec. 5, 2019 – A powerful new supercomputer arrived this week at Idaho National Laboratory's Collaborative Computing Center. The machine has the power to run complex modeling and simulation applications, which are essential to developing next-generation nuclear technologies.

Named after a central Idaho mountain range, Sawtooth arrives in December and will be available to users early next year. The $19.2 million system ranks No. 37 on the 2019 Top500 list of the world's fastest supercomputers. That is the highest ranking reached by an INL supercomputer. Of 102 new systems added to the list in the past six months, only three were faster than Sawtooth.

It will be able to crunch much more complex mathematical calculations at approximately six times the speed of Falcon and Lemhi, INL's current systems.

The boost in computing power will enable researchers at INL and elsewhere to simulate new fuels and reactor designs, greatly reducing the time, resources and funding needed to transition advanced nuclear technologies from the concept phase into the marketplace.

Supercomputing reduces the need to build physical experiments to test every hypothesis, as was the process used to develop the majority of technologies used in currently operating reactors. By using simulations to predict how new fuels and designs will perform in a reactor environment, engineers can select only the most promising technologies for the real-world experiments, saving time and money.

INL's ability to model new nuclear technologies has become increasingly important as nations strive to meet growing energy needs while minimizing emissions. Today, there are about 450 nuclear power reactors operating in 30 countries plus Taiwan. These reactors produce approximately 10% of the world's electricity and 60% of America's carbon-free electricity. According to the World Nuclear Association, 15 countries are currently building about 50 power reactors.

John Wagner, the associate laboratory director for INL's Nuclear Science and Technology directorate, said Sawtooth plays an important role in developing and deploying advanced nuclear technologies and is a key capability for the National Reactor Innovation Center (NRIC).

In August, the U.S. Department of Energy designated INL to lead NRIC, which was established to provide developers the resources to test, demonstrate and assess performance of new nuclear technologies, critical steps that must be completed before they are available commercially.

"With advanced modeling and simulation and the computing power now available, we expect to be able to dramatically shorten the time it takes to test, manufacture and commercialize new nuclear technologies," Wagner said. "Other industries and organizations, such as aerospace, have relied on modeling and simulation to bring new technologies to market much faster without compromising safety and performance."

Sawtooth is funded by the DOE's Office of Nuclear Energy through the Nuclear Science User Facilities program. It will provide computer access to researchers at INL, other national laboratories, industry and universities. Idaho's three research universities will be able to access Sawtooth and INL's other supercomputers remotely via the Idaho Regional Optical Network (IRON), an ultra-high-speed fiber optic network.

"This system represents a significant increase in computing resources supporting nuclear energy research and development and will be the primary system for DOE's nuclear energy modeling and simulation activities," said Eric Whiting, INL's division director for Advanced Scientific Computing. "It will help guide the future of nuclear energy."

Sawtooth, with its nearly 100,000 processors, is being installed in the new 67,000-square-foot Collaborative Computing Center, which opened in October. The new facility was designed to be the heart of modeling and simulation work for INL as well as provide floor space, power and cooling for systems such as Sawtooth. Falcon and Lemhi, the lab's current supercomputing systems, also are slated to move to this new facility.

About INL

INL is one of the U.S. Department of Energy's (DOE's) national laboratories. The laboratory performs work in each of DOE's strategic goal areas: energy, national security, science and environment. INL is the nation's center for nuclear energy research and development. Day-to-day management and operation of the laboratory is the responsibility of Battelle Energy Alliance. See more INL news at www.inl.gov. Follow @INL on Twitter or visit our Facebook page at www.facebook.com/IdahoNationalLaboratory.

Source: Idaho National Laboratory


The rise and fall of the PlayStation supercomputers – The Verge

Dozens of PlayStation 3s sit in a refrigerated shipping container on the University of Massachusetts Dartmouth's campus, sucking up energy and investigating astrophysics. It's a popular stop for tours trying to sell the school to prospective first-year students and their parents, and it's one of the few living legacies of a weird science chapter in PlayStation's history.

Those squat boxes, hulking on entertainment systems or dust-covered in the back of a closet, were once coveted by researchers who used the consoles to build supercomputers. With the racks of machines, the scientists were suddenly capable of contemplating the physics of black holes, processing drone footage, or winning cryptography contests. It only lasted a few years before tech moved on, becoming smaller and more efficient. But for that short moment, some of the most powerful computers in the world could be hacked together with code, wire, and gaming consoles.

Researchers had been messing with the idea of using graphics processors to boost their computing power for years. The idea is that the same power that made it possible to render Shadow of the Colossus' grim storytelling was also capable of doing massive calculations if researchers could configure the machines the right way. If they could link them together, suddenly, those consoles or computers started to be far more than the sum of their parts. This was cluster computing, and it wasn't unique to PlayStations; plenty of researchers were trying to harness computers to work as a team, trying to get them to solve increasingly complicated problems.

The game consoles entered the supercomputing scene in 2002 when Sony released a kit called Linux for the PlayStation 2. "It made it accessible," Craig Steffen said. "They built the bridges, so that you could write the code, and it would work." Steffen is now a senior research scientist at the National Center for Supercomputing Applications (NCSA). In 2002, he had just joined the group and started working on a project with the goal of buying a bunch of PS2s and using the Linux kits to hook them (and their Emotion Engine central processing units) together into something resembling a supercomputer.

They hooked up between 60 and 70 PlayStation 2s, wrote some code, and built out a library. "It worked okay, it didn't work superbly well," Steffen said. There were technical issues with the memory: two specific bugs that his team had no control over.

"Every time you ran this thing, it would cause the kernel on whatever machine you ran it on to kind of go into this weird unstable state and it would have to be rebooted, which was a bummer," Steffen said.

They shut the project down relatively quickly and moved on to other questions at the NCSA. Steffen still keeps one of the old PS2s on his desk as a memento of the program.

But that's not where the PlayStation's supercomputing adventures met their end. The PS3 entered the scene in late 2006 with powerful hardware and an easier way to load Linux onto the devices. Researchers would still need to link the systems together, but suddenly, it was possible for them to imagine linking together all of those devices into something that was a game-changer instead of just a proof-of-concept prototype.

That's certainly what black hole researcher Gaurav Khanna was imagining over at UMass Dartmouth. "Doing pure period simulation work on black holes doesn't really typically attract a lot of funding, it's just because it doesn't have too much relevance to society," Khanna said.

Money was tight, and it was getting tighter. So Khanna and his colleagues were brainstorming, trying to think of solutions. One of the people in his department was an avid gamer and mentioned the PS3's Cell processor, which was made by IBM. A similar kind of chip was being used to build advanced supercomputers. "So we got kind of interested in it, you know, is this something interesting that we could misuse to do science?" Khanna says.

Inspired by the specs of Sony's new machine, the astrophysicist started buying up PS3s and building his own supercomputer. It took Khanna several months to get the code into shape and months more to get his program into working order. He started with eight, but by the time he was done, he had his own supercomputer, pieced together out of 176 consoles and ready to run his experiments: no jockeying for space or paying other researchers to run his simulations of black holes. Suddenly, he could run complex computer models or win cryptography competitions at a fraction of the cost of a more typical supercomputer.

Around the same time, other researchers were having similar ideas. A group in North Carolina also built a PS3 supercomputer in 2007, and a few years later, at the Air Force Research Laboratory in New York, computer scientist Mark Barnell started working on a similar project called the Condor Cluster.

The timing wasn't great. Barnell's team proposed the project in 2009, just as Sony was shifting toward the pared-back PS3 Slim, which didn't have the capability to run Linux, unlike the original PS3. After a hack, Sony even issued a firmware update that pulled OtherOS, the feature that allowed people to run Linux, from existing PS3 systems. That made finding useful consoles even harder. The Air Force had to convince Sony to sell it the un-updated PS3s that the company was pulling from shelves, which, at the time, were sitting in a warehouse outside Chicago. It took many meetings, but eventually, the Air Force got what it was looking for, and in 2010, the project had its big debut.

Running on more than 1,700 PS3s connected by five miles of wire, the Condor Cluster was huge, dwarfing Khanna's project, and it was used to process images from surveillance drones. During its heyday, it was the 35th fastest supercomputer in the world.

But none of this lasted long. Even while these projects were being built, supercomputers were advancing, becoming more powerful. At the same time, gaming consoles were simplifying, making them less useful to science. The PlayStation 4 outsold both the original PlayStation and the Wii, nearing the best-selling status currently held by the PS2. But for researchers, it was nearly useless. Like the slimmer version of the PlayStation 3 released before it, the PS4 can't easily be turned into a cog for a supercomputing machine. "There's nothing novel about the PlayStation 4, it's just a regular old PC," Khanna says. "We weren't really motivated to do anything with the PlayStation 4."

The era of the PlayStation supercomputer was over.

The one at UMass Dartmouth is still working, humming with life in that refrigerated shipping container on campus. The UMass Dartmouth machine is smaller than it was at its peak of about 400 PlayStation 3s. Parts of it have been cut out and repurposed. Some are still working together in smaller supercomputers at other schools; others have broken down or been lost to time. Khanna has since moved on to trying to link smaller, more efficient devices together into his next-generation supercomputer. He says the Nvidia Shield devices he's working with now are about 50 times more efficient than the already-efficient PS3.

It's the Air Force's supercluster of super consoles that had the most star-studded afterlife. When the program ended about four years ago, some consoles were donated to other programs, including Khanna's. But many of the old consoles were sold off as old inventory, and a few hundred were snapped up by people working with the TV show Person of Interest. In a ripped-from-the-headlines move, the consoles made their silver screen debut in the show's season 5 premiere, playing (wait for it) a supercomputer made of PlayStation 3s.

"It's all Hollywood," Barnell said of the script, "but the hardware is actually our equipment."

Correction, 7:05 PM ET: Supercomputer projects needed the original PS3, not the PS3 Slim, because Sony had removed Linux support from the console in response to hacks, a move that later led to a class-action settlement. This article originally stated that it was because the PS3 Slim was less powerful. We regret the error.
