

Inside the Volunteer Supercomputer Team That’s Hunting for COVID Clues – Defense One

The White House's team recently added the world's fastest computer to its informal network of more than 40.

The world's fastest supercomputer teamed up with the White House's expanding supercomputing effort to fight the novel coronavirus.

Japan's Fugaku, which surpassed leading U.S. machines on the Top 500 list of global supercomputers in late June, joined the COVID-19 High Performance Computing Consortium.

Jointly launched by the Energy Department, the Office of Science and Technology Policy and IBM in late March, the consortium currently facilitates more than 65 active research projects and encompasses a vast supercomputer-powered search for new findings on the novel coronavirus's spread, how to effectively treat and mitigate it, and more. Dozens of national and international members are volunteering free compute time through the effort, providing at least 485 petaflops of capacity and steadily growing, to more rapidly generate new solutions against COVID-19.

"What started as a simple concept has grown to span three continents with over 40 supercomputer providers," Dario Gil, director of IBM Research and consortium co-chair, told Nextgov last week. "In the face of a global pandemic like COVID-19, hopefully a once-in-a-lifetime event, the speed at which researchers can drive discovery is a critical factor in the search for a cure, and it is essential that we combine forces."


Gil and other members and researchers briefed Nextgov on how the work is unfolding, how they're measuring success and the research the consortium is increasingly underpinning.

The Consortium's Evolution

Energy's Office of Science Director Chris Fall told Nextgov last week that since the consortium's founding, its resources have been used to sort through billions of molecules to identify promising compounds that can be manufactured quickly and tested for potency against the novel coronavirus, produce large data sets to study variations in patient responses, perform airflow simulations on a new device that will allow doctors to use one ventilator to support multiple patients, and more. The complex systems are powering calculations, simulations and results in a matter of days that several scientists have noted would take months on traditional computers.

"From a small conversation three months ago, we overcame a myriad of institutional and organizational boundaries to establish the consortium," Fall said, adding that the effort is building an international team of COVID-19 researchers who are "sharing their best ideas, methods and results to understand the virus and its effects on humans, which will [allow] the world to ultimately conquer or confine the virus."

In a recent interview, Energy's Undersecretary for Science Paul Dabbar explained that any researcher interested in tapping into advanced computing capabilities can submit a relevant research proposal to the consortium through an online portal; it will then be reviewed for selection. An executive committee supports the group's organization and helps steer policies, while a science committee evaluates submitted proposals for potential impact. A third committee allocates time and cycles on the supercomputing machines once proposals are chosen.

"What's really interesting about this from an organizational point of view is that it's basically a volunteer organization," Dabbar noted.

As of July 1, the consortium had received more than 148 COVID-19 research proposals, with 78 approved and 68 up and running on the involved supercomputing resources, Energy confirmed. Though researchers tap into the assets free of charge, the work doesn't come without cost. Dabbar said the consortium uses some of the department's user facilities and resources that were built and funded by taxpayer dollars. The effort incurs operating costs such as runtime, electricity and cooling for the machines, which Dabbar said are relatively minor compared to the cost of building the capacity to begin with.

"It does absolutely cost money," Dabbar said. "But at the end of the day, a lot of this is taking advantage of what the American people invested in, and using the flexibility, and shifting it towards this problem."

The combined supercomputing resources are speeding up the chase for answers and solutions against COVID-19, but that faster pace isn't the only metric for success. IBM's Gil said that in the early days, "the establishment of the consortium and the efficiency we have achieved in expedited expert review of proposals and rapid matching of approved proposals to supercomputing systems, along with rapid on-boarding onto those systems, would have to be considered our first major success."

Those involved also measure success by the number of up-and-running research projects, and highlighted that 27 projects already have experimental, clinical or policy transition plans in place. Insiders also consider it a major achievement that they were able to quickly bring together industry players – many of whom, as Gil noted, are competitors – along with labs, federal agencies, universities and several international partners to share their systems.

NASA is one consortium member that's been involved in the initiative from the very beginning, when it was invited by OSTP, Piyush Mehrotra, chief of NASA's Advanced Supercomputing (NAS) Division, told Nextgov Thursday.

The division, at Ames Research Center in Silicon Valley, hosts the space agency's supercomputers, which Mehrotra noted are typically used for large-scale simulations supporting NASA's aerospace, earth science, space science and space exploration missions. But a selection of the agency's supercomputing assets is also reserved for national priorities that transcend the agency's own scope.

"In order to understand COVID-19, and to develop treatments and vaccines, extensive research is required in complex areas such as epidemiology and molecular modeling – research that can be significantly accelerated by supercomputing resources," Mehrotra explained. "We are therefore making the full reserve portion of NASA supercomputing resources available to researchers working on the COVID-19 response, along with providing our expertise and support to port and run their applications on NASA systems."

Amazon Web Services also joined among the consortium's first wave of members and participated in the initial roundtable discussion at the White House, where the concept emerged in March. The company's Vice President of Public Policy Shannon Kellogg told Nextgov in late May that, in joining, AWS saw a clear opportunity to bring the benefits of cloud to bear in the race for treatments and a vaccine. The company has since provided cloud computing resources to more than a dozen of the consortium's active projects and, according to Kellogg, provides in-kind credits that supply the research teams with cloud computing resources. The tech giant's team then communicates regularly with the researchers to help address technical questions.

"This effort has shown how collaboration and commitment from leaders across government, business, and academia can empower researchers and accelerate the pace of their work," Kellogg said.

Outside of IBM, NASA and AWS, other early members of the consortium include Google Cloud, Microsoft, the Massachusetts Institute of Technology, Rensselaer Polytechnic Institute, the National Science Foundation, and the Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia national laboratories. And as the consortium progresses, it's also expanding along the way. In April, the National Center for Atmospheric Research's Wyoming Supercomputing Center, chipmaker AMD and graphics processing unit maker NVIDIA joined, among others.

Dell Technologies also began the process to participate in April, according to Janet Morss, a senior consultant for high performance computing at the company. It took about a month for the involvement to come to fruition, and the company is now donating cycles from its Zenith supercomputer and other resources.


Japan supercomputer finds ways to nix airborne virus at work and on trains – The Japan Times

Supercomputer-driven models simulated in Japan have suggested that operating commuter trains with windows open and limiting the number of passengers may help reduce the risk of novel coronavirus infection, as scientists warn the virus may spread in the air.

In an open letter published Monday, 239 scientists in 32 countries outlined evidence they say shows floating virus particles can infect people who breathe them in.

The World Health Organization (WHO) acknowledged evidence emerging of airborne transmission, but said it was not definitive.

Even if the coronavirus is airborne, questions remain about how many infections occur through that route. How concentrated the virus is in the air may also decide contagion risks, said Professor Yuki Furuse of Kyoto University.

In the open letter, scientists urged improvements to ventilation and the avoidance of crowded, enclosed environments – recommendations Japan broadly adopted months ago, according to Shin-ichi Tanabe, one of the co-authors of the letter.

"In Japan, the committee for COVID-19 countermeasures insisted on the 3Cs at an early stage," said Tanabe, a professor at Waseda University in Tokyo, referring to Japan's public campaign to avoid closed spaces, crowded places and close-contact settings. "This was ahead of the world."

As the nation tamed the pandemic, with over 19,000 confirmed cases and 977 deaths so far, economy minister Yasutoshi Nishimura credited its success to the 3Cs and its cluster-tracing strategy.

The recent study by Japanese research giant Riken, using the world's fastest supercomputer, the Fugaku, to simulate how the virus travels in the air in various environments, recommended several ways to lower infection risks in public settings.

Makoto Tsubokura, the study's lead researcher, said that opening windows on commuter trains can increase ventilation two- to threefold, lowering the concentration of ambient microbes.

But to achieve adequate ventilation, there needs to be space between passengers, the simulations showed, representing a drastic change from the custom of packing commuter trains tightly, for which the nation is notorious.

Other findings advised the installation of partitions in offices and classrooms, while hospital beds should be surrounded by curtains that touch the ceiling.


EEENEW debuts APC, an Android/Apple Phone Computer, the world’s first hot-swap smartphone and Windows tablet – PR Web

APC, the world's first smartphone & Windows tablet, revealed!

HONG KONG (PRWEB) July 13, 2020

Nowadays, smartphone use is frequent and important in daily life, and demand for mobile computing will grow rapidly in the coming years. Many smartphone users want a versatile desktop mode for multi-window, multi-task use that replaces a heavy computer on the go; smartphones becoming mobile computers is coming true. With Samsung DeX and the desktop modes in Android Q and later, users are ready for smartphone desktop-mode usage.

In the past, a conventional laptop or tablet's screen, keyboard, and touchpad served only the device itself, not external mobile devices or smartphones. A company called EEENEW has built a new type of tablet that can hot-swap between a smartphone's desktop mode and PC Windows mode in the same device. Named APC+, it stands for Android/Apple Phone Computer – also Advanced Phone Computer or Advanced Personal Computer. It has built-in hardware to switch between Windows and smartphone desktop mode, which is truly convenient for work efficiency or smartphone gaming on the go.

Before the APC was introduced, the traditional way to use a smartphone's desktop mode was to connect the phone to a dock, an external monitor, and a keyboard and mouse. That required many peripheral connections, and it was not possible to hot-swap between smartphone desktop mode and PC mode. The APC changes that awkward setup: the tablet's touchscreen, keyboard and touchpad can serve either the smartphone or the Windows side on request.

Unlike imitation products, the APC provides a real desktop mode for smartphones and has video-in USB-C and HDMI ports, and the APC+ provides a true tablet PC inside. It is compatible with the Nintendo Switch and with Samsung DeX, EMUI, OnePlus, TNT, Windows 10, Linux, LG, and Asus smartphones – all work in the APC tablet!

APC aims to change computer history. Its hardware hot-swap lets a smartphone become the desktop computer, with Intel hardware and Windows built in. APC is a super-efficient tablet for computer and smartphone users.

To learn more, check out the links below. APC is about to launch – don't miss out on the super early bird offer.

https://eeenew.com/
https://apc.eeenew.com/



Global Supercomputer Market 2020 Research with COVID-19 After Effects and Industry Progression till 2027 – Cole of Duty

Global Supercomputer Market by Fior Markets specializes in market strategy, market orientation, expert opinion, and knowledgeable information on the global market. The report is a combination of pivotal insights, including the competitive landscape; global, regional, and country-level market size; market players; market growth analysis; market share; opportunity analysis; recent developments; and segmentation growth. The report also covers other thoughtful insights and facts such as historical data, sales, revenue, and the global market share of the Supercomputer market, as well as product scope, market overview, opportunities, driving forces, and market risks. The report segments the market size, status, and 2020-2027 forecast by segments and applications/end businesses.

NOTE: Our analysts monitoring the situation across the globe explain that the market will generate remunerative prospects for producers after the COVID-19 crisis. The report aims to provide an additional illustration of the latest scenario, the economic slowdown, and COVID-19's impact on the overall industry.

DOWNLOAD FREE SAMPLE REPORT:https://www.fiormarkets.com/report-detail/418092/request-sample

One of the important factors that make this report worth buying is its extensive overview of the industry's competitive landscape. The report comprises upstream raw-material and downstream demand analysis. The most notable players in the market are examined. The report provides a detailed perspective on the trends observed in the market and the main areas with growth potential. The study predicts the growth of the global Supercomputer market in size, market share, demand, trends, and gross sales. Key players are studied with information such as associated companies, downstream buyers, upstream suppliers, market position, historical background, and top competitors based on revenue, along with sales contact information.

REQUEST FOR CUSTOMIZATION: https://www.fiormarkets.com/enquiry/request-customization/418092

The major players covered in the report are:NVIDIA Corp., Fujitsu Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Dell Technologies Inc., International Business Machines Corp., Huawei Investment & Holding Co. Ltd., Dawning Information Industry Co. Ltd., NEC Technologies India Private Limited., Atos SE, and Cray Inc. among others.

The global Supercomputer market has been analyzed and properly studied on the basis of all the regions of the world. The regions listed in the report include: North America, Europe, Asia Pacific, South America, and the Middle East and Africa.

Moreover, the report studies the value and volume trends and the pricing history of the market. It then covers the sales volume, price, revenue, gross margin, manufacturers, suppliers, distributors, intermediaries, customers, historical growth, and future perspectives in the global Supercomputer market. The study projects this industry to post modest proceeds by the end of the forecast period.

BROWSE COMPLETE REPORT AND TABLE OF CONTENTS:https://www.fiormarkets.com/report/supercomputer-market-by-operating-system-windows-linux-unix-418092.html

It Includes Analysis of The Following:

Market Overview: The section covers sector size, market size, detailed insights, and growth analysis by segmentation

Competitive Illustration: The report includes the list of major companies/competitors and their competition data that helps the user to determine their current position in the market and to maintain or increase their share holds.

Country-Wise Analysis: This section offers information on sales growth in these regions at the country level for the Supercomputer market.

Challenges And Future Outlook: Provides the challenges and future outlook for the Supercomputer market.

This report will be beneficial for any new business establishment, or any business looking to upgrade and make impactful changes. The overall report is a comprehensive document that covers all aspects of a market study and provides a concise conclusion for its readers. For the purpose of this study, the report segments the global Supercomputer market on the basis of the recommendations and regions covered.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs.


Japan has long accepted COVID’s airborne spread, and scientists say ventilation is key – CBS News

Tokyo – Under pressure from the scientific community, the World Health Organization acknowledged last week the airborne transmission of "micro-droplets" as a possible third cause of COVID-19 infections. To many researchers in Japan, the admission felt anti-climactic.

This densely populated country has operated for months on the assumption that tiny, "aerosolized" particles in crowded settings are turbo-charging the spread of the new coronavirus.

Very few diseases – tuberculosis, chicken pox and measles – have been deemed transmissible through aerosols. Most are spread only through direct contact with infected persons or their bodily fluids, or contaminated surfaces.

Still, the WHO has refused to confirm aerosols as a major source of new coronavirus infections, saying more evidence is needed. But scientists are keeping the pressure on.

"If the WHO recognizes what we did in Japan, then maybe in other parts of the world, they will change (their antiviral procedures)," said Shin-Ichi Tanabe, a professor in the architecture department of Japan's prestigious Waseda University. He was one of the 239 international scientists who co-wrote an open letter to the WHO urging the United Nations agency to revise its guidelines on how to stop the virus spreading.

Large droplets expelled through the nose and mouth tend to fall to the ground quickly, explained Makoto Tsubokura, who runs the Computational Fluid Dynamics lab at Kobe University. For these larger respiratory particles, social distancing and face masks are considered adequate safeguards. But in rooms with dry, stale air, Tsubokura said his research showed that people coughing, sneezing, and even talking and singing, emit tiny particles that defy gravity – able to hang in the air for many hours or even days, and travel the length of a room.

The key defense against aerosols, Tsubokura said, is diluting the amount of virus in the air by opening windows and doors and ensuring HVAC systems circulate fresh air. In open-plan offices, he said, partitions must be high enough to prevent direct contact with large droplets, but low enough to avoid creating a cloud of virus-heavy air (55 inches, or about head height). Small desk fans, he said, can also help diffuse airborne viral density.
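Tsubokura's dilution point can be illustrated with a textbook well-mixed room model. This is a deliberate simplification, not the RIKEN team's CFD setup, and every number below is chosen purely for illustration: at steady state, aerosol concentration equals the emission rate divided by the room volume times the total removal rate, so tripling the air-change rate cuts the concentration to roughly a third.

```python
# Minimal well-mixed room model for airborne aerosol concentration.
# Illustrative simplification only: assumes instantly mixed air and a
# constant emission source (all parameter values are hypothetical).

def steady_state_concentration(emission_rate, volume, ach, decay=0.0):
    """Steady-state aerosol concentration, in particles per m^3.

    emission_rate: particles emitted per hour by occupants
    volume:        room volume in m^3
    ach:           air changes per hour supplied by ventilation
    decay:         extra removal (deposition + viral decay), per hour
    """
    return emission_rate / (volume * (ach + decay))

# A hypothetical train carriage of ~180 m^3, windows closed (~2 ACH)
# versus windows open (~6 ACH):
closed = steady_state_concentration(emission_rate=1e6, volume=180.0, ach=2.0)
opened = steady_state_concentration(emission_rate=1e6, volume=180.0, ach=6.0)

# Tripling ventilation cuts the steady-state concentration to a third.
print(round(closed / opened, 6))  # -> 3.0
```

The same relation underlies the train finding above: a two- to threefold increase in ventilation implies roughly a two- to threefold drop in ambient concentration, before accounting for deposition and viral decay.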

To the Japanese, the latest WHO admission did at least vindicate a strategy that the country adopted in February, when residents were told to avoid "the three Cs" – cramped spaces, crowded areas and close conversation.

After a lull, new infections – primarily among younger residents – have resurged recently in Tokyo, topping 200 for four straight days before falling back to 119 on Monday.

Alarmingly, new cases are cropping up not just in notoriously cramped and crowded nightlife spots, but also within homes and workplaces, prompting the national government to consider asking businesses to shut down again in the greater metro region. Authorities are anxious to prevent a corresponding surge in serious cases and deaths, which, thus far, have remained low.

Tsubokura, who also serves as the lead researcher for government institute RIKEN, has run simulations on Japan's new Fugaku supercomputer studying how to guard against airborne transmission inside subways, offices, schools, hospitals, and other public spaces.

His computer model of riders on Tokyo's congested Yamanote train line (see the animation at 7:15 minutes in this video) illustrated how air flow stagnates on packed trains with closed windows, in contrast to free-flowing air on carriages with few passengers and open windows. He suggests keeping windows open at all times to mitigate risks when trains fill up.

But Japan's infamously congested trains, he argues, probably aren't as risky as his model suggests. "It is very crowded, and the air is bad," Tsubokura said. "But nobody is speaking, and everyone is wearing a mask. The risk is not that high."

Even riding on a crowded subway train – if windows are kept open, as they are in Japan these days – "is much safer than a pub, restaurant or gym," said Waseda University's Tanabe.

Masking noses and mouths is all the more important, he said, because his research shows men touch their faces up to 40 times an hour. (He said women, more likely to wear makeup, are less face-touchy.)

"Non-woven (surgical) masks are high-performance, but cloth also works – it's much better than nothing," he said. "The only way to avoid leaks (of droplets) is to tightly fit the mask."

Mask-wearing and ventilation directives are helping the Japanese reopen concert halls, baseball stadiums and other venues. As of last Friday, such venues are permitted to admit up to 5,000 patrons.

Tanabe will be relying on Japan's new Fugaku supercomputer – recently declared the world's fastest – to plot optimal ventilation system efficiency.

"It's like predicting a typhoon," he said, noting that forecasting both extreme weather and air flow through crowded trains relies on the same equations to calculate fluid dynamics.
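For reference, the shared machinery Tanabe alludes to is, in standard textbook form, the incompressible Navier-Stokes equations. This sketch is not taken from the article; the symbols are the conventional ones, and weather models layer rotation, moisture and thermodynamics on top of the same momentum balance:

```latex
% Incompressible Navier-Stokes: momentum balance and mass conservation.
% u = air velocity field, p = pressure, \rho = density,
% \nu = kinematic viscosity, f = body forces (e.g., buoyancy).
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \\
\nabla \cdot \mathbf{u} &= 0.
\end{aligned}
```

Simulations like Fugaku's then typically track exhaled droplets as particles or a passive scalar carried along by the computed velocity field.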

In an article to be published in the September issue of the scientific journal Environment International – as schools and other public facilities struggle to reopen – Tanabe and other experts argue that safeguarding indoor spaces can be done relatively simply and cheaply, by avoiding crowding and maintaining the flow of fresh air.


Inside the Volunteer Supercomputer Team That’s Hunting for COVID Clues – Defense One

The White House's team recently added the world's fastest computer to its informal network of more than 40.

The worlds fastest supercomputer teamed up with the White Houses expanding supercomputing effort to fight the novelcoronavirus.

Japans Fugakuwhichsurpassedleading U.S. machines on the Top 500listof global supercomputers in late Junejoinedthe COVID-19 High Performance ComputingConsortium.

Jointlylaunchedby the Energy Department, Office of Science and Technology Policy and IBM in late March, the consortium currently facilitates more than 65 active researchprojectsand envelops a vast supercomputer-powered search for new findings pertaining to the novel coronavirus spread, how to effectively treat and mitigate it, and more. Dozens of national and internationalmembersare volunteering free compute time through the effort, providing at least 485 petaflops ofcapacityand steadily growing, to more rapidly generate new solutions againstCOVID-19.

What started as a simple concept has grown to span three continents with over 40 supercomputer providers, Dario Gil, director of IBM Research and consortium co-chair, toldNextgovlast week. In the face of a global pandemic like COVID-19, hopefully a once-in-a-lifetime event, the speed at which researchers can drive discovery is a critical factor in the search for a cure and it is essential that we combineforces.

Subscribe

Receive daily email updates:

Subscribe to the Defense One daily.

Be the first to receive updates.

Gil and other members and researchers briefedNextgovon how the work is unfolding, how theyre measuring success and the research the consortium is increasinglyunderpinning.

The ConsortiumsEvolution

Energys Office of Science Director Chris Fall toldNextgovlast week that since the consortiumsfounding, its resources have been used to sort through billions of molecules to identify promising compounds that can be manufactured quickly and tested for potency to target the novel coronavirus, produce large data sets to study variations in patient responses, perform airflow simulations on a new device that will allow doctors to use one ventilator to support multiple patientsand more. The complex systems are powering calculations, simulations and results in a matter of days that several scientists have noted would take a matter of months on traditionalcomputers.

From a small conversation three months ago, we overcame a myriad of institutional and organizational boundaries to establish the consortium, Fall said, adding that the effort is building an international team of COVID-19 researchers that are sharing their best ideas, methods and results to understand the virus and its effects on humans which will [allow] the world to ultimately conquer or confine thevirus.

In a recent interview, Energys Undersecretary for Science Paul Dabbarexplainedthat any researcher interested in tapping into advanced computing capabilities can submit relevant research proposals to the consortium through an onlineportalthatll then be reviewed for selection. An executive committee supports the groups organization and helps steer policies, while a science committee is tasked with evaluating research proposals submitted to the consortium for potential impact upon selection. And a third committee allocates the time and cycles on the supercomputing machines once theyrechosen.

Whats really interesting about this from an organizational point of view is that its basically a volunteer organization, Dabbarnoted.

As of July 1, the consortium had received more than 148 COVID-19 research proposals with 78 approved and 68 up and running via the involved supercomputing resources, Energy confirmed. Though researchers are tapping into the assets free of charge, the work doesnt come without cost. Dabbar said the consortium taps into some of the departments user facilities and resources that were built and funded by taxpayer dollars. The effort induces operating costs such as runtime, electricity and cooling for the machines, which Dabbar said are relatively minor compared to actually building the capacity to beginwith.

It does absolutely cost money, Dabbar said. But at the end of the day, a lot of this is taking advantage of what the American people invested in, and using the flexibility, and shifting it towards thisproblem.

The combined, supercomputing resources are speeding up the chase for answers and solutions against COVID-19, but that faster pace isnt the only metric for success. IBMs Gil said in the early days, the establishment of the consortium and the efficiency we have achieved in expedited expert review of proposals and rapid matching of approved proposals to supercomputing systems, along with rapid on-boarding onto those systems would have to be considered our first majorsuccess.

Those involved also measure success by the number of up-and-running research projects, and highlighted that 27 projects already have experimental, clinical or policy transition plans in place. Insiders also consider the fact that they were able to quickly bring together industry players, as Gil noted many of whom are competitors, labs, federal agencies, universities and several international partners to share their systems to be a majorachievement.

NASA is one consortium member thats been involved in the initiative from the very beginning when it was invited by OSTP, Piyush Mehrotra, chief of NASAs Advanced Supercomputing, or NAS Division toldNextgovThursday.

The division, at Ames Research Center in Silicon Valley, hosts the space agencys supercomputers, which Mehrotra noted are typically used for large-scale simulations supporting NASAs aerospace, earth science, space science and space exploration missions. But, a selection of the agencys supercomputing assets are also reserved for national priorities that transcend beyond the agencysscope.

In order to understand COVID-19, and to develop treatments and vaccines, extensive research is required in complex areas such as epidemiology and molecular modelingresearch that can be significantly accelerated by supercomputing resources, Mehrotra explained. We are therefore making the full reserve portion of NASA supercomputing resources available to researchers working on the COVID-19 response, along with providing our expertise and support to port and run their applications on NASAsystems.

Amazon Web Services is another that joined among the consortiums first wave of members and participated in the initial roundtable discussion at the White House where the concept emerged in March. The companys Vice President of Public Policy Shannon Kellogg toldNextgovin late May that, in joining, AWS saw a clear opportunity to bring the benefits of cloud to bear in the race for treatments and a vaccine. The company has since provided cloud computing resources to more than a dozen of the consortiums active projects, and according to Kellogg, provides in-kind credits to the research teams, which provide them with cloud computing resources. The tech-giants team then communicates regularly with the researchers to help address technicalquestions.

This effort has shown how collaboration and commitment from leaders across government, business, and academia can empower researchers and accelerate the pace of their work, Kelloggsaid.

Outside of IBM, NASA and AWS, other early members of the consortium include Google Cloud, Microsoft, the Massachusetts Institute of Technology, Rensselaer Polytechnic Institute, the National Science Foundation, as well as Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia National laboratories. And as the consortium progresses, its alsoexpandingalong the way. In April, the National Center for Atmospheric Researchs Wyoming Supercomputing Center, chipmaker AMD and graphics processing units-maker NVIDIA joined, amongothers.

Dell Technologies also began the process to participate in April, according to Janet Morss, senior consultant, high performance computing. It took about a month for the involvement to come into fruition and the company is now donating cycles from the Zenith supercomputer and otherresources.

Follow this link:

Inside the Volunteer Supercomputer Team That's Hunting for COVID Clues - Defense One

Japan supercomputer finds ways to nix airborne virus at work and on trains – The Japan Times

Supercomputer-driven models simulated in Japan have suggested that operating commuter trains with windows open and limiting the number of passengers may help reduce the risk of novel coronavirus infection, as scientists warn the virus may spread in the air.

In an open letter published Monday, 239 scientists in 32 countries outlined evidence they say shows floating virus particles can infect people who breathe them in.

The World Health Organization (WHO) acknowledged evidence emerging of airborne transmission, but said it was not definitive.

Even if the coronavirus is airborne, questions remain about how many infections occur through that route. How concentrated the virus is in the air may also decide contagion risks, said Professor Yuki Furuse of Kyoto University.

In the open letter, the scientists urged improvements to ventilation and the avoidance of crowded, enclosed environments, recommendations Japan broadly adopted months ago, according to Shin-ichi Tanabe, one of the co-authors of the letter.

"In Japan, the committee for COVID-19 countermeasures insisted on the 3Cs at an early stage," said Tanabe, a professor at Waseda University in Tokyo, referring to Japan's public campaign to avoid closed spaces, crowded places and close-contact settings. "This was ahead of the world."

As the nation tamed the pandemic, with over 19,000 confirmed cases and 977 deaths so far, economy minister Yasutoshi Nishimura credited its success to the 3Cs and its cluster-tracing strategy.

The recent study by Japanese research giant Riken, which used the world's fastest supercomputer, Fugaku, to simulate how the virus travels through the air in various environments, recommended several ways to lower infection risks in public settings.

Makoto Tsubokura, the study's lead researcher, said that opening windows on commuter trains can increase ventilation two- to threefold, lowering the concentration of ambient microbes.

But to achieve adequate ventilation, there needs to be space between passengers, the simulations showed, representing a drastic change from the custom of packing commuter trains tightly, for which the nation is notorious.
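The intuition behind that two- to threefold figure can be sketched with a standard "well-mixed room" model, a textbook simplification and not Riken's detailed airflow simulation: at steady state, aerosol concentration scales inversely with the ventilation rate, so tripling the air exchange cuts the concentration to roughly a third. The train-car numbers below are hypothetical.

```python
# Illustrative only: steady-state well-mixed-space model (not Riken's CFD).
# At equilibrium, concentration = emission rate / (air-change rate * volume),
# so tripling ventilation cuts concentration to roughly one third.

def steady_state_concentration(emission_rate, air_changes_per_hour, volume_m3):
    """Aerosol concentration (particles per m^3) in a well-mixed space."""
    return emission_rate / (air_changes_per_hour * volume_m3)

# Hypothetical train car: 180 m^3, one emitter releasing 1e6 particles/hour.
closed = steady_state_concentration(1e6, air_changes_per_hour=8, volume_m3=180)
open_windows = steady_state_concentration(1e6, air_changes_per_hour=24, volume_m3=180)
print(closed / open_windows)  # tripled ventilation, about one third the concentration
```

The real simulations go far beyond this, resolving where the air actually flows, which is why passenger spacing matters and a simple box model cannot capture it.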

Other findings advised the installation of partitions in offices and classrooms, while hospital beds should be surrounded by curtains that touch the ceiling.


Global Supercomputer Market 2020 Research with COVID-19 After Effects and Industry Progression till 2027 – Cole of Duty

Global Supercomputer Market by Fior Markets specializes in market strategy, market orientation, expert opinion, and knowledgeable information on the global market. The report is a combination of pivotal insights including the competitive landscape; global, regional, and country-level market size; market players; market growth analysis; market share; opportunities analysis; recent developments; and segmentation growth. The report also covers other insights and facts such as historical data, sales, revenue, and the global market share of Supercomputer, product scope, market overview, opportunities, driving forces, and market risks. The report segments the market by size and status, and forecasts the 2020-2027 market by segments and applications/end businesses.

NOTE: Our analysts monitoring the situation across the globe explain that the market will generate remunerative prospects for producers after the COVID-19 crisis. The report aims to provide an additional illustration of the latest scenario, the economic slowdown, and the impact of COVID-19 on the overall industry.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report-detail/418092/request-sample

One of the important factors that makes this report worth buying is its extensive overview of the competitive landscape of the industry. The report comprises upstream raw materials and downstream demand analysis. The most notable players in the market are examined. The report provides a detailed perspective on the trends observed in the market and the main areas with growth potential. The study predicts the growth of the global Supercomputer market's size, share, demand, trends, and gross sales. Key players are studied with information such as associated companies, downstream buyers, upstream suppliers, market position, historical background, and top competitors by revenue, along with sales contact information.

REQUEST FOR CUSTOMIZATION: https://www.fiormarkets.com/enquiry/request-customization/418092

The major players covered in the report are: NVIDIA Corp., Fujitsu Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Dell Technologies Inc., International Business Machines Corp., Huawei Investment & Holding Co. Ltd., Dawning Information Industry Co. Ltd., NEC Technologies India Private Limited, Atos SE, and Cray Inc., among others.

The global Supercomputer market has been analyzed and studied on the basis of all the regions of the world. The regions listed in the report include: North America, Europe, Asia Pacific, South America, and the Middle East and Africa.

Moreover, the report studies the value, volume trends, and pricing history of the market. It then covers sales volume, price, revenue, gross margin, manufacturers, suppliers, distributors, intermediaries, customers, historical growth, and future perspectives in the global Supercomputer market. The study projects the industry to post modest gains by the end of the forecast period.

BROWSE COMPLETE REPORT AND TABLE OF CONTENTS:https://www.fiormarkets.com/report/supercomputer-market-by-operating-system-windows-linux-unix-418092.html

It Includes Analysis of The Following:

Market Overview: The section covers sector size, market size, detailed insights, and growth analysis by segmentation

Competitive Illustration: The report includes a list of major companies/competitors and their competition data, helping the user determine their current position in the market and maintain or increase their market share.

Country-Wise Analysis: This section offers information on sales growth in the Supercomputer market at the country level within these regions.

Challenges And Future Outlook: Provides the challenges and future outlook for the Supercomputer market

This report will be beneficial for any new business establishment or any business looking to upgrade and make impactful changes. The overall report is a comprehensive document that covers all aspects of a market study and provides a concise conclusion to its readers. For the purpose of this study, the global Supercomputer market has been segmented on the basis of the recommendations and regions covered in this report.

Customization of the Report: This report can be customized to meet a client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs.


Supercomputer Industry Market Size, Growth Opportunities, Trends by Manufacturers, Regions, Application & Forecast to 2025 – Cole of Duty

The latest trending report, Global Supercomputer Industry Market to 2025, available at MarketStudyReport.com, is an informative study covering the market with detailed analysis. The report will assist readers with better understanding and decision making.

The Supercomputer Industry market report is an in-depth analysis of this business space. The major trends that define the Supercomputer Industry market over the analysis timeframe are stated in the report, along with additional pointers such as industry policies and regional industry layout. The report also elaborates on the impact of existing market trends on investors.

Request a sample Report of Supercomputer Industry Market at: https://www.marketstudyreport.com/request-a-sample/2769152?utm_source=coleofduty.com&utm_medium=AN

COVID-19, the disease caused by the novel coronavirus, surfaced in late 2019 and has since become a full-blown crisis worldwide. More than fifty key countries have declared national emergencies to combat the coronavirus. With cases spreading, and the epicentre of the outbreak shifting to Europe, North America, India and Latin America, life in these regions has been upended the way it was in Asia earlier in the crisis. As the pandemic has worsened, the entertainment industry has been upended along with almost every other facet of life. As experts work toward a better understanding, the world shudders in fear of the unknown, a worry that has rocked global financial markets and led to daily volatility in the U.S. stock markets.

Other information included in the Supercomputer Industry market report covers the advantages and disadvantages of products offered by different industry players. The report also includes a summary of the competitive scenario as well as a granular assessment of downstream buyers and raw materials.

Revealing a gist of the competitive landscape of Supercomputer Industry market:

Ask for a Discount on the Supercomputer Industry Market Report at: https://www.marketstudyreport.com/check-for-discount/2769152?utm_source=coleofduty.com&utm_medium=AN

An outlook of the Supercomputer Industry market regional scope:

Additional takeaways from the Supercomputer Industry market report:

This report considers the below mentioned key questions:

Q.1. What are some of the most favorable, high-growth prospects for the global Supercomputer Industry market?

Q.2. Which products segments will grow at a faster rate throughout the forecast period and why?

Q.3. Which geography will grow at a faster rate and why?

Q.4. What are the major factors impacting market prospects? What are the driving factors, restraints, and challenges in this Supercomputer Industry market?

Q.5. What are the challenges and competitive threats to the market?

Q.6. What are the evolving trends in this Supercomputer Industry market and reasons behind their emergence?

Q.7. What are some of the changing customer demands in the Supercomputer Industry market?

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-supercomputer-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Reports:

1. COVID-19 Outbreak-Global Switch Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-switch-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

2. COVID-19 Outbreak-Global Tritium Light Sources Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-tritium-light-sources-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Report : https://www.marketwatch.com/press-release/steam-boiler-market-share-historical-growth-analysis-opportunities-and-forecast-to-2025-2020-07-09?tesla=y

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [emailprotected]


Tech News: Neuromorphic computing and the brain-on-a-chip in your pocket – IOL

By Louis Fourie Jul 10, 2020

Share this article:

JOHANNESBURG - The human brain is relatively small, uses about 20 watts of power, and can accomplish an amazing number of complex tasks. In contrast, the machine learning algorithms that are growing in popularity need large, powerful computers and data centres that consume megawatts of electricity.

Artificial Intelligence (AI) produces astounding achievements: recognising images with greater accuracy than humans, holding natural conversations, beating humans at sophisticated games, and driving vehicles in heavy traffic.

AI is indeed a disruptive power of the Fourth Industrial Revolution, currently driving advances in numerous fields from medicine to weather prediction. However, all of these advances require enormous amounts of computing power and electricity to develop, train and run the algorithms.

According to Elon Musk, the computing power and electricity consumption of AI machines doubles every three to four months, thus becoming a major concern for environmentalists.

But it seems that we can learn something from nature in our endeavour to address the high consumption of electricity and the resultant contribution to the climate crisis by AI and powerful machines.

A branch of computer chip design focuses on mimicking the biological brain to create super-efficient neuromorphic chips that will bring AI from the powerful and energy-hungry machines right to our pocket.

Neuromorphic computing

Neuromorphic computing is the next generation of AI and entails very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the biological nervous system.

This form of AI has more in common with human cognition than with conventional computer logic.

In November 2017 Intel Labs introduced Loihi, a fifth-generation self-learning neuromorphic research test-chip containing some 130 000 neurons, to provide a functional system for researchers to implement Spiking Neural Networks (SNN) that emulate natural neural networks in biological brains.

Each neuron in the SNN can fire or spike independently and send pulsed signals with encoded information to other neurons, thereby simulating the natural learning process by dynamically remapping the synapses between the artificial neurons in response to stimuli.
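As an illustrative sketch, not Intel's Loihi implementation, a single spiking neuron can be modelled as a leaky integrate-and-fire unit: it accumulates incoming pulses, leaks charge over time, and emits a spike of its own when a threshold is crossed. All parameter values below are arbitrary.

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential accumulates
# weighted input spikes, decays ("leaks") each time step, and the neuron fires
# when the potential crosses a threshold, then resets.

def simulate_lif(input_spikes, weight=0.5, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    fired_at = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike
        if potential >= threshold:
            fired_at.append(t)
            potential = 0.0  # reset after firing
    return fired_at

# A steady input train drives the potential up until it crosses threshold.
print(simulate_lif([1, 1, 1, 1, 1, 1]))  # [2, 5]
```

The timing of those output spikes is itself information, which is what lets SNN hardware encode data in pulses rather than in continuous activations.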

MIT & memristors

About a month ago, engineers at the Massachusetts Institute of Technology (MIT) published a paper in the prestigious journal Nature Nanotechnology announcing that they had designed a brain-on-a-chip consisting of thousands of artificial brain synapses known as memristors.

A memristor is a silicon-based electronic memory device that mimics the information-transmitting synapses in the human brain to carry out complex computational tasks. The neuromorphic chip, smaller than a piece of confetti, is so powerful that a small portable device could now easily handle the convoluted computational tasks currently carried out by today's supercomputers.

Artificial neural networks are nothing new. However, until now synapse networks existed only as software. MIT has built real neural network hardware, making small, portable AI systems possible and cutting the power consumption of AI networks by about 95 percent.

Just imagine connecting a small neuromorphic device to a camera in your car, and having it recognise lights and objects and make a decision immediately, without having to connect to the Internet. This is exactly what this new energy-efficient MIT chip will make possible on-site and in real-time.

Memristors, or memory resistors, are an essential component of neuromorphic computing. In a neuromorphic device, a memristor takes the place of the transistor in a circuit, but it functions more like a brain synapse (the junction between two neurons). The synapse receives signals from a neuron in the form of ions and sends an equivalent signal to the following neuron.

The computers in our phones and laptops currently use separate digital components for processing and memory, so information is continuously transferred between the two. The new MIT chip computes all inputs in parallel within the memory using analog circuits, in a similar way to the human brain, significantly reducing the amount of data that needs to be transferred and saving a great deal of electricity.

Since memristors are not binary like the transistors in a conventional circuit, but can take many values, they can carry out a far wider range of operations. This means that memristors could enable smaller portable devices that do not rely on supercomputers, or even on connections to the Internet and cloud processing.
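The core idea of computing in memory with many-valued devices can be sketched, purely as an illustration and not the MIT design, with a memristor crossbar: each device's conductance stores a weight, and the physics of currents summing along a column performs a multiply-accumulate in place. The 2x2 array below is hypothetical.

```python
# Illustrative crossbar multiply-accumulate: input voltages on the rows,
# memristor conductances as stored (analog, non-binary) weights, output
# currents on the columns. Each column current is sum_i(V_i * G_ij),
# a dot product computed by the physics, with no data shuttled to a
# separate processor.

def crossbar_mac(voltages, conductances):
    """Column currents of a crossbar: I_j = sum_i V_i * G_ij."""
    n_cols = len(conductances[0])
    return [
        sum(v * row[j] for v, row in zip(voltages, conductances))
        for j in range(n_cols)
    ]

# Hypothetical 2x2 array with analog conductance values.
currents = crossbar_mac([1.0, 0.5], [[0.2, 0.8],
                                     [0.4, 0.1]])
print(currents)
```

Because a whole layer of a neural network maps to one such array, a forward pass becomes a handful of analog read operations instead of millions of memory transfers, which is where the power savings come from.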

To overcome challenges of reliability and scalability, the MIT researchers used a new kind of silicon-based, alloyed memristor. Until now, the ions flowing in memristors made from unalloyed materials scattered easily as the components got smaller, leading to inferior fidelity and computational reliability; images were often of poorer quality.

However, an alloy of conventional silver with silicide-forming copper (a silicide being a compound of silicon with a more electropositive element) stabilises the flow of ions between the electrodes, allowing the number of memristors on a small chip to be scaled up without sacrificing quality or functionality. After a visual task was stored and reproduced numerous times, the resulting images were much crisper and clearer than those from existing memristor designs using unalloyed elements.

The MIT researchers are not the first to create chips to carry out processing in memory to reduce power consumption of neural nets.

However, it is the first time the approach has been used to run the powerful convolutional neural networks popular in image-based AI applications. This opens up the possibility of using more complex convolutional neural networks for image and video classification in the Internet of Things in the future. Although much work still needs to be done, the new MIT chip also opens up opportunities to build more AI into devices such as smartphones, household appliances, Internet of Things devices, and self-driving cars, wherever powerful low-power AI chips are needed.

Companies & chips

MIT is not the only institution working on making AI more suitable for smaller devices. Apple has already integrated its Neural Engine into the iPhone X to power its facial recognition technology. Amazon is developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also working on the energy efficiency of their chips as they increasingly build advanced capabilities like machine learning into them. At the beginning of this year, ARM unveiled new chips capable of AI tasks such as translation, facial recognition, and the detection of faces in images. Even Qualcomm's new Snapdragon mobile chips are heavily focused on AI.

Going even further, IBM and Intel are developing neuromorphic chips. IBM's TrueNorth and Intel's Loihi can run powerful machine learning tasks on a fraction of the power of conventional chips.

The cost of AI and machine learning is also declining dramatically. The cost to train an image recognition algorithm decreased from around R17 000 in 2017 to about R170 in 2019.

The cost of running such an algorithm decreased even more. The cost to classify a billion images was R17 000 in 2017, but just R0.51 in 2019.
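Taking the article's rand figures at face value, the fold reductions are easy to work out:

```python
# Fold reduction in cost, using the figures quoted above.
train_2017, train_2019 = 17_000, 170   # cost (R) to train an image recogniser
infer_2017, infer_2019 = 17_000, 0.51  # cost (R) to classify a billion images

print(train_2017 / train_2019)         # 100.0 -> a 100-fold drop in training cost
print(round(infer_2017 / infer_2019))  # 33333 -> inference cost fell far faster
```

The roughly 33,000-fold drop in inference cost is what makes running trained models on cheap edge devices economically plausible.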

There is little doubt that as neuromorphic chips advance further in the years to come, the trends of miniaturization, increased performance, less power consumption, and much lower AI costs will continue.

Perhaps it will not be too long before we carry some serious AI, artificial brains, in our pockets that can outperform current supercomputers, just as our cellphones are more powerful than the supercomputers of many years ago. AI will be in our pockets, as well as in numerous other devices. It will increasingly be part of our lives, making decisions on our behalf, guiding us, and automating many current tasks.

The Fourth Industrial Revolution is fundamentally changing engineering and making things possible that we could only dream of before.

Professor Louis C H Fourie is a futurist and technology strategist.

BUSINESS REPORT


Give IBM your unused computing power to help cure coronavirus and cancer – CNET

Your idle Android phone could be performing calculations that help cure diseases.

When Sawyer Thompson was just 12 years old, he discovered his father Brett unconscious in their Washington, DC area home. Sawyer called an ambulance and Brett was rushed to the hospital, where the family learned the worst: He had brain cancer. After a year of surgeries, radiation and chemotherapy, Brett's cancer is in remission. But Sawyer wanted to do more to fight against cancer, and is tapping his interest in tech to make a bigger difference.

Like many young people, Sawyer -- who built his first computer at age 9, and started a business called ZOYA building machines for locals -- took to the internet. A Google search on "how to help cure cancer" led him to the IBM World Community Grid app, and gave him a way to make a difference from home.

Subscribe to the CNET Now newsletter for our editors' picks of the most important stories of the day.

The IBM World Community Grid app uses "volunteer computing," a type of distributed computing in which you donate your computer's unused resources to a research project. With the app, your computer, phone or tablet can run virtual experiments in the background while you aren't using it, experiments that would normally take years of expensive trial and error on laboratory computers alone. The crowdsourcing approach lets anyone participate in important research, with no time, money or expertise required.
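Stripped to its essentials, the volunteer side of such a system is a fetch-compute-report loop that runs only while the device is idle. The sketch below is a hedged illustration with invented function names and a toy work unit, not World Community Grid's actual protocol; real clients add signing, checkpointing and redundancy checks.

```python
# Illustrative volunteer-computing client loop. The work-unit format and the
# fetch/report callbacks are hypothetical stand-ins for real network calls.

def run_work_unit(unit):
    """Stand-in for the real science kernel: here, just sum some numbers."""
    return sum(unit["numbers"])

def volunteer_loop(fetch_unit, report_result, device_is_idle, max_units=3):
    """Fetch work while the device is idle, compute locally, report back."""
    completed = 0
    while completed < max_units and device_is_idle():
        unit = fetch_unit()                 # would be an HTTP GET in practice
        result = run_work_unit(unit)
        report_result(unit["id"], result)   # would be an HTTP POST in practice
        completed += 1
    return completed

# Simulated server: three queued work units, results collected in a dict.
queue = [{"id": i, "numbers": list(range(i + 3))} for i in range(3)]
results = {}
done = volunteer_loop(queue.pop, results.__setitem__, device_is_idle=lambda: True)
print(done, results)
```

The scientific payoff comes from running this loop on hundreds of thousands of devices at once, each chewing through independent work units in parallel.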

"I've always wanted to find a way to help people with computers," Sawyer said. "World Community Grid allows anyone to help cure cancer, find cures for COVID-19 and study rainfall in Africa. It's really cool."

As people are still largely stuck at home due to the coronavirus pandemic, finding ways to volunteer that don't require an in-person commitment or a donation can be difficult. But volunteer computing initiatives like World Community Grid provide opportunities to help.

Last year, Sawyer created a website called Help Sawyer Fight Cancer to share his dad's story and urge people to sign up for the app. He set an "audacious goal" of getting 100 years of cancer research processing time donated before his dad's birthday in September. Two other users on another team, nicknamed Old Chap in the UK and the Little Mermaid in Copenhagen, came across the project. Their team joined Sawyer's, and within a few months more than 80 people around the world helped him cross the 100-year mark.

Soon after that, Old Chap received a cancer diagnosis of his own. And Sawyer, now age 14, decided to shoot for 1,000 years of research processing time, instead of just 100.

"I changed the goal not just for my dad, but for Old Chap and anyone else who finds themself unexpectedly on this journey," Sawyer said. "It's honestly been crazy. At first I never thought we'd reach 100 years, and here we are trekking our way to 1,000 years."

The team's computers have already performed about 1 million calculations -- contributing more than 450 years' worth of computing, had a single PC been crunching the same numbers.

"Other forms of donating to researchers involve money," Sawyer said. "But this is 100% free and requires no effort at all."

Sawyer Thompson, right, started using IBM's Community Grid app to donate his unused computing power to cancer research after his father Brett's brain cancer diagnosis.

Volunteer computing has been around since the 1990s, and such efforts are typically organized by academic and research organizations. IBM launched the World Community Grid as part of the company's social responsibility work in 2004. The app currently has more than 785,000 volunteers who donate their unused computing power to any of seven projects, focused on healthcare research on cancer, COVID-19, bacteria, tuberculosis and AIDS, or environmental research on rainfall in sub-Saharan Africa.

"World Community Grid is essentially a way to crowdsource big scientific problems, and enlist the help of volunteers to solve challenges in health and environmental research," said Juan Hindo, an IBM Corporate Social Responsibility manager and leader of the World Community Grid team.

The Mapping Cancer Markers project identifies indicators of cancer and studies how to personalize treatment plans. Researchers have millions of different tissue samples -- from healthy people, from people with different types of cancer, from those who have passed and from those who are still patients.

The Mapping Cancer Markers project in IBM World Community Grid.

"They're essentially doing a massive data comparison exercise to compare the genetic profile of all these people in the hope of identifying factors that can say, for example, people with aggressive type of cancer X are more likely to have these biomarkers," Hindo said.

To process these millions of data points requires a lot of computing power, Hindo said. That's where volunteers step in.

"Rather than trying to find a supercomputer or get more funding for computing capacity, [the researchers] bring us millions of calculations, and we distribute them out to our massive community of volunteers," she added. "They're not scientists or techies, and they don't need any skills or expertise to solve this problem."

With the app installed on a volunteer's computer or Android device, any time those devices aren't being fully used, it can run a calculation.

"By crowdsourcing this and running it out over our volunteer community, the researchers get to do this in a fraction of the time," Hindo said. "We hear from our volunteers over and over again that they feel like they're a part of a scientific process that they wouldn't otherwise be able to contribute to."

You can join the World Community Grid through IBM's website by entering an email address and creating a password, and then selecting which of the active projects you'd like to put your computing power toward. Then, you download the app on your computer or Android device (it's not on iOS).

Once you've joined the program and installed the app, everything works seamlessly, Hindo said. The app will figure out if you have any spare computing power and if so, will take on some calculations and send results back.

You can donate your unused computing power to one of several different projects on the World Community Grid app.

The app only runs if your device is plugged in and charged to at least 90 percent. The Android version will only download calculations or upload results when connected to Wi-Fi, so it won't eat up your data, Hindo said. The ideal use case is when you're charging your phone or computer overnight.
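Those gating rules amount to a small eligibility predicate. The function names below are invented for illustration, not IBM's API:

```python
# Hypothetical eligibility checks mirroring the conditions described above:
# computing requires power and a nearly full battery; network transfers
# additionally require Wi-Fi so mobile data is never consumed.

def can_compute(plugged_in: bool, battery_pct: int) -> bool:
    """May the device run calculations right now?"""
    return plugged_in and battery_pct >= 90

def can_transfer(on_wifi: bool) -> bool:
    """May the device download work units or upload results?"""
    return on_wifi

print(can_compute(plugged_in=True, battery_pct=95))   # True: overnight charging
print(can_compute(plugged_in=True, battery_pct=80))   # False: battery too low
print(can_transfer(on_wifi=False))                    # False: metered data
```

Gating like this is what makes the donation genuinely free for the volunteer: the app only spends resources the device was not going to use anyway.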

When you open the app, you can find out what types of calculations your device has been working on.

In terms of security, the app uses one folder where downloaded and uploaded data goes, but doesn't touch any other data on your device, Hindo said. On the other end, the data you receive from researchers doesn't include any personally identifiable information, she added. However, anything you post in the community forums may become available to third party search engines online, according to the app's terms of service.

Researchers keep IBM and volunteers up to date on how they're using the data and calculations, what results they're finding and where they are publishing those discoveries, Hindo said. World Community Grid is also an open data project, which means all findings are made publicly available so the wider scientific community can benefit from volunteers' work.

The projects have yielded many papers published in scientific journals, Hindo said. For example, in 2014, scientists from a World Community Grid project aiming to fight childhood cancer announced the discovery of seven compounds that can destroy neuroblastoma cancer cells without any apparent side effects, marking a move toward new treatments.

"I want people to feel empowered that they can do something productive -- it's a fairly unique way of supporting a cause they care about, like cancer research," Hindo said. "Everyone's familiar with ways of volunteering your time or donating your money, and this is a different type of volunteerism -- all it takes is for you to download the app."



New OCF Supercomputer at the University of Aberdeen Supports Ground Breaking Genomics Research – HPCwire

July 7, 2020 - Researchers at the University of Aberdeen are benefitting from an investment in high performance computing (HPC). The new HPC cluster, called Maxwell, is supporting ground-breaking research at the University's Centre for Genome-Enabled Biology and Medicine (CGEBM) and provides a centralized HPC system for the whole University, with applications in medicine, biological sciences, engineering, chemistry, maths and computing science. The new HPC system is designed, integrated and managed by the high performance compute, storage, cloud and AI integrator OCF. The supercomputer is part of the University's expansion to improve facilities for staff and students.

With Maxwell, the University's CGEBM is able to rapidly analyze complex genomics datasets from known and novel organisms and help researchers revolutionize the study of the Earth's biodiversity and of complex ecosystems important to health and disease, agriculture and the environment. It is estimated that only around 1 percent of the Earth's biodiversity is easily culturable in a laboratory, and little is known about most living organisms on the planet.

With the use of HPC, University researchers can analyze microbiomes associated with a diverse array of ecosystems, such as the human gut, fish important to Scottish aquaculture, glaciers, deep-sea sediments, soil and bioreactors for the production of sustainable and environmentally friendly biofuels. These state-of-the-art studies provide new understanding of important and diverse biological processes such as antimicrobial drug resistance; pathogen detection, evolution and virulence; mechanisms of drug efficacy and toxicity; development; inflammation; tumorigenesis; nutrition and satiety; and degradation of hydrocarbons.

Scotia Biologics, an SME research company, is working with the University's CGEBM, using Maxwell's capacity to speed up its existing pipeline and to generate a more comprehensive dataset with genomics than the traditional methods typically used in its field allow.

The new HPC system is also being used to teach graduate and post-graduate students in specialist subjects such as AI and bioinformatics, fields important to modern research and STEM careers, providing them with a unique opportunity to access HPC capacity. With 300 users, the cluster provides a centralized HPC system supporting all researchers and post-graduate students across the University.

With twenty times more storage than the University's previous HPC system, Maxwell comprises four Lenovo ThinkSystem SD530 servers, 40 compute nodes, a ThinkParQ-supported BeeGFS parallel file system hosted on Lenovo servers and storage, and NVIDIA GPUs. OCF is also providing an open-source software stack and its OCF Remote HPC Admin Managed Service to support the in-house HPC team.

Dean Phillips, Assistant Director, Digital and Information Services at the University of Aberdeen, says: "Aberdeen is a research-intensive university and we've already seen an increase of 50 percent in registered users of our Maxwell HPC cluster. Having our own HPC system helps the University to attract new researchers and research funding and to expand on existing programs of research and teaching. It is highly beneficial for our researchers to have on-site access to HPC infrastructure, particularly when securing start-up funds."

Phillips continues: "OCF's Remote Admin Service is an extension of our team and really helps to ensure the smooth day-to-day running of our HPC cluster, dealing with support issues and user requests, and keeping on top of software and security updates."

Dr Elaina Collie-Duguid, Manager of the Centre for Genome Enabled Biology & Medicine at the University of Aberdeen, says: "Genomics is a dynamic discipline that rapidly evolves into new applications and approaches to interrogate complex systems. The new HPC cluster, with its expanded capacity and advanced GPU capabilities, enables us to use new analysis methods and work at a much quicker rate than before. It really is an exciting time for genomics, which is revolutionizing the study of organisms and complex ecosystems to address issues of global importance, and HPC is a critical tool for analysis of these data."

Russell Slack, Managing Director of OCF, comments: "The new HPC cluster helps the University remain ahead in a fiercely competitive market. It attracts researchers, students and grants to its facility. Aberdeen's investment in its HPC is a credit to its foresight into the importance of HPC in research that impacts people and everyday lives."

Keith Charlton, CEO of Scotia Biologics, says: "As part of our drive to introduce new services to offer to the life sciences sector, Scotia is developing phage display library capabilities based around a growing number of animal species. With access to Maxwell, we've been able to quickly generate a large volume of data relatively inexpensively whilst significantly advancing our R&D program."

Source: University of Aberdeen

See the original post here:

New OCF Supercomputer at the University of Aberdeen Supports Ground Breaking Genomics Research - HPCwire

Tech company uses quantum computers to help shipping and trucking industries – FreightWaves

Ed Heinbockel, president and chief executive officer of SavantX, said he's excited about how a powerful new generation of quantum computers can bring practical solutions to industries such as trucking and cargo transport.

"With quantum computing, I'm very keen on this, because I'm a firm believer that it's a step-change technology," Heinbockel said. "It's going to rewrite the way that we live and the way we work."

Heinbockel referred to recent breakthroughs such as Google's "quantum supremacy" demonstration, in which a programmable quantum processor solved a problem that no classical computer could feasibly solve.

In October 2019, Google's quantum processor, named Sycamore, performed a computation in 200 seconds that would have taken the world's fastest supercomputer 10,000 years to solve, according to Google.

Jackson, Wyoming-based SavantX also recently formed a partnership with D-Wave Systems Inc., a Burnaby, Canada-based company that develops and offers quantum computing systems, software and services.

Using D-Wave's quantum services, SavantX has begun offering its Hyper Optimization Nodal Efficiency (HONE) technology for solving optimization problems to customers such as the Pier 300 container terminal project at the Port of Los Angeles.

The project, which began last year, is a partnership between SavantX, Blume Global and Fenix Marine Services. The project's goal is to optimize logistics on the spacing and placement of shipping containers to better integrate with inbound trucks and freight trains. The Pier 300 site handles 1.2 million container lifts per year.
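Quantum annealers like D-Wave's take optimization problems expressed as a QUBO (quadratic unconstrained binary optimization), an energy function over binary variables whose lowest-energy state encodes the best solution. The sketch below is purely illustrative, not SavantX's actual HONE formulation: a toy truck-appointment problem with made-up penalty weights, small enough to solve by brute force.

```python
from itertools import product

# Toy QUBO: assign two trucks to two appointment slots, one slot per
# truck, with a penalty when both trucks land in the same slot.
# All weights below are hypothetical, chosen only for illustration.
P = 10           # penalty weight enforcing "exactly one slot per truck"
CONGESTION = 3   # cost of two trucks sharing a slot

VARS = [(t, s) for t in range(2) for s in range(2)]  # x[truck, slot]

def energy(x):
    e = 0
    for t in range(2):
        chosen = x[(t, 0)] + x[(t, 1)]
        e += P * (chosen - 1) ** 2               # constraint term
    for s in range(2):
        e += CONGESTION * x[(0, s)] * x[(1, s)]  # congestion term
    return e

# Brute-force search over all 2^4 assignments; a quantum annealer
# searches the same energy landscape physically.
best = min((dict(zip(VARS, bits)) for bits in product([0, 1], repeat=4)),
           key=energy)
print(best, energy(best))  # lowest energy: one slot each, different slots
```

Brute force obviously only works at toy sizes; the point of annealing hardware is to search such landscapes when the number of variables makes enumeration impossible.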

"With Pier 300, when do you need trucks at the pier, and when and how do you get them scheduled optimally?" Heinbockel said. "So the appointing part of it is very important, and that is a facet of HONE technology."

Heinbockel added, "We're very excited about the Pier 300 project, because HONE is a generalized technology. Then it's a question of what other systems can we optimize? In all modes of transportation, the winners are going to be those that can minimize the energy in the systems; energy reduction. That's all about optimization."

Heinbockel co-founded SavantX in 2015 with David Ostby, the company's chief science officer. SavantX offers data collection and visualization tools for industries ranging from healthcare to nuclear energy to transportation.

Heinbockel also recently announced SavantX will be relocating its corporate research headquarters to Santa Fe, New Mexico. The new center, which could eventually include 100 employees, will be focused on the company's HONE technology and customizing it for individual clients.

Heinbockel said SavantX has been talking to trucking, transportation and aviation companies about how HONE can help solve issues such as driver retention and optimizing schedules.

"One of the problems I've been hearing consistently from trucking companies is that they hire somebody. The HR department tells the new employee we'll have you home every Thursday night," Heinbockel said. "Then you get onto a Friday night or Saturday, and [the driver] is still not home."

Heinbockel said that if quantum computing and HONE can help trucking companies with driver retention, it will make a lot of companies happy.

Heinbockel said cross-border operations could use HONE to understand what the flow patterns are like for commercial trucks crossing through different ports at various times of the day.

"You would optimize your trucking flow based on when those lax periods were at those various ports, or you could ask yourself: Is it cheaper for me to send a truck 100 miles out of the way to another port, knowing that it can get right through that port without having to sit for two or three hours in queue?" Heinbockel said.

Click for more FreightWaves articles by Noi Mahoney.

Original post:

Tech company uses quantum computers to help shipping and trucking industries - FreightWaves

German Climate Computing Centre Orders Atos Supercomputer That Will Boost Computing Power by 5X – HPCwire

PARIS, France, June 22, 2020 – Atos has signed a new five-year contract with the German Climate Computing Centre (DKRZ) to supply a supercomputer based on its latest BullSequana XH2000 technology, increasing DKRZ's computing power fivefold compared with the currently operating high-performance computer Mistral, which Atos provided in 2015. The new system will be available at DKRZ from mid-2021.

BullSequana to accelerate and deliver more precise forecasting

Just as a new, more powerful telescope provides more detailed images from space, a more powerful supercomputer allows for more detailed simulations and thus deeper insights into climate events. This significant increase in computing power will enable researchers at DKRZ to use regionally more detailed climate and earth system models, to include more processes in calculations, to simulate longer time periods, or to capture natural climate variability more accurately using ensemble calculations and thus reduce uncertainties. This is accompanied by a strong increase in the data that is calculated, stored and evaluated. The BullSequana is an efficient computing and data management solution, essential for climate modelling and the resulting data volumes, that helps promote environmental research and deliver more reliable, detailed results.

Prof. Thomas Ludwig, CEO at DKRZ, says: "Our high-performance computer is the heart around which our services for science are grouped. We're really happy to be working with Atos again. With the new system, our users will be able to gain new insights into the climate system and deliver even more detailed results. This concerns basic research, but also more applied fields of research such as improved current climate projections. This way, we help gain fundamental insights for climate change adaptation."

Damien Déclat, Group VP, Head of HPC, AI & Quantum Business Operations at Atos, explains: "With our strong expertise and experience, we have been able to successfully design the DKRZ solution, integrating it efficiently with the BullSequana XH2000 system's best-of-breed components to optimize DKRZ's production workloads. We look forward to continuing this joint effort to anticipate the next phases, as well as to adapt applications and requirements to the next processor generation and other accelerating components."

Atos is a specialist in the provision of leading technologies for some of the world's leading centers in the weather forecasting and climate community, such as the European Centre for Medium-Range Weather Forecasts and the French meteorological service Météo-France, and has worked closely with them to optimize applications, explore and anticipate new technologies, and pursue increased efficiency and reduced TCO.

Technical specifications

The Atos solution is based on its BullSequana XH2000 supercomputer and will be one of the first equipped with the next generation of AMD EPYC x86 processors. The interconnect uses NVIDIA Mellanox InfiniBand HDR 200G technology, and the data storage solution relies on DDN equipment. The final system will consist of around 3,000 compute nodes with a total peak performance of 16 petaflops, 800 terabytes of main memory and a 120-petabyte storage system.
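A quick back-of-the-envelope check of those figures, assuming peak performance and memory are spread evenly across the nodes (illustrative arithmetic only, not an official Atos per-node specification):

```python
# Rough per-node figures for the DKRZ system quoted above, assuming an
# even split across nodes (an assumption; real configurations vary).
total_peak_pflops = 16   # total peak performance, petaflops
nodes = 3000             # approximate number of compute nodes
main_memory_tb = 800     # total main memory, terabytes

peak_per_node_tflops = total_peak_pflops * 1000 / nodes  # 1 PFLOPS = 1000 TFLOPS
memory_per_node_gb = main_memory_tb * 1000 / nodes       # 1 TB = 1000 GB

print(f"~{peak_per_node_tflops:.1f} TFLOPS and ~{memory_per_node_gb:.0f} GB per node")
```

That works out to roughly 5.3 TFLOPS and 270 GB of memory per node, plausible for a dense x86 node of that generation.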

Financing

The new system is worth 32.5 million euros, which is being provided by the Helmholtz Association of German Research Centres, the Max Planck Society and the Free and Hanseatic City of Hamburg.

About DKRZ

The German Climate Computing Center (Deutsches Klimarechenzentrum, DKRZ) is a central service center for German climate and earth system research. Its high performance computers, data storage and services form the central research infrastructure for simulation-based climate science in Germany. Apart from providing computing power, data storage capacity and technical support for models and simulations in climate research, DKRZ offers its scientific users an extensive portfolio of tailor-made services. It maintains and develops application software relevant to climate research and supports its users in matters of data processing. Finally, DKRZ also participates in national and international joint projects and cooperations with the aim of improving the infrastructure for climate modeling.

About Atos

Atos is a global leader in digital transformation with 110,000 employees in 73 countries and annual revenue of €12 billion. European number one in Cloud, Cybersecurity and High-Performance Computing, the Group provides end-to-end Orchestrated Hybrid Cloud, Big Data, Business Applications and Digital Workplace solutions. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos|Syntel, and Unify. Atos is an SE (Societas Europaea), listed on the CAC40 Paris stock index.

Source: Atos

More:

German Climate Computing Centre Orders Atos Supercomputer That Will Boost Computing Power by 5X - HPCwire

RIKEN Physicists Develop Pseudo-2D Architecture for Quantum Computers that is Simple and Scalable – HPCwire

June 22, 2020 – A simple pseudo-2D architecture for connecting qubits, the building blocks of quantum computers, has been devised by RIKEN physicists. This promises to make it easier to construct larger quantum computers.

Quantum computers are anticipated to solve certain problems overwhelmingly faster than conventional computers, but despite rapid progress in recent years, the technology is still in its infancy. "We're still in the late 1940s or early 1950s, if we compare the development of quantum computers with that of conventional computers," notes Jaw-Shen Tsai of the RIKEN Center for Emergent Matter Science and the Tokyo University of Science.

One bottleneck to developing larger quantum computers is the problem of how to arrange qubits in such a way that they can both interact with their neighbors and be readily accessed by external circuits and devices. Conventional 2D networks suffer from the problem that, as the number of qubits increases, qubits buried deep inside the networks become difficult to access.

To overcome this problem, large companies such as Google and IBM have been exploring complex 3D architectures. "It's kind of a brute-force approach," says Tsai. "It's hard to do and it's not clear how scalable it is," he adds.

Tsai and his team have been exploring a different tack from the big companies. "It's very hard for research institutes like RIKEN to compete with these guys if we play the same game," Tsai says. "So we tried to do something different and solve the problem they aren't solving."

Now, after about three years of work, Tsai and his co-workers have come up with a quasi-2D architecture that has many advantages over 3D ones.

Their architecture is basically a square array of qubits deformed in such a way that all the qubits are arranged in two rows (Fig. 1), a "bilinear array with cross wiring," as Tsai calls it. Since all the qubits lie on the edges, it is easy to access them.

The deformation means that some wires cross each other, but the team overcame this problem by using airbridges so that one wire passes over the other one, much like a bridge at the intersection of two roads allows traffic to flow without interruption. Tests showed that there was minimal crosstalk between wires.

The scheme is much easier to construct than 3D ones since it is simpler and can be made using conventional semiconductor fabrication methods. It also reduces the number of wires that cross each other. And importantly, it is easy to scale up.

The team now plans to use the architecture to make a 10×10 array of qubits.

About RIKEN

RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan.

Source: RIKEN

Here is the original post:

RIKEN Physicists Develop Pseudo-2D Architecture for Quantum Computers that is Simple and Scalable - HPCwire

COLUMN: Future Shock — COVID-19 Channel Upheaval – CRN: Technology news for channel partners and solution providers

In the 1970 best-seller Future Shock, Alvin Toffler wrote about the enormous structural change that was taking place as a result of the shift from an industrial to a super industrial society. The state of future shock is the perfect metaphor for the technology upheaval that is ripping through the channel in the wake of the COVID-19 pandemic.

Forget super industrial. The new future shock may well be the equivalent of a supercomputer for every home given the structural changes in the global workforce. The pandemic has exposed the fault lines in IT budgets and strategies, which are now shifting at a blinding pace to provide employees the computing power and support they need.

So what does this future shock mean to solution providers? That's the question Senior Editor Kyle Alspach takes on in this month's cover story, "The New Channel Normal." The deep dive on the pandemic impact, which includes data from the COVID-19 Channel Impact Study by our sister business unit IPED, shows that the solution providers that are thriving are rapidly changing what they sell and how they sell it.

The old channel playbook has been thrown out the window. Solution providers that do the same thing they were doing before the pandemic outbreak are going to find themselves grappling with the famous definition of insanity: doing the same thing over and over again and expecting a different result.

The bottom line is customers are speedily moving to pay-per-use cloud services and anytime, anyplace and anywhere business models. That's good news for solution providers with an end-to-end suite of recurring revenue managed IT services.

If you want a good example of a company that gets it and is moving at a blinding pace to help customers move to the new world order, then look no further than Anexinet, No. 212 on the 2020 CRN Solution Provider 500. Anexinet CEO Todd Pittman is one of the leaders who has put his Blue Bell, Pa., company at the forefront of the post-pandemic super industrial era. That means closing a blockbuster virtual sales deal with a national energy company for a new mobile and web app.

"We've revamped our approach with our customers," Pittman said, calling the virtually delivered project a major success that Anexinet is now replicating with two other customers. "Frankly, [the stakeholders] at our first customer were raving fans."

It's no small matter that Anexinet, a Hewlett Packard Enterprise Platinum partner, is also betting big on HPE's GreenLake pay-per-use platform. "Everybody wants to ensure that they have the capital required to keep their business operating through this uncertain time. And so I think that will continue to drive more conversations around leveraging the cloud, pay-as-you-go models, GreenLake," Pittman said.

The future shock, by the way, also applies to vendors. HPE CEO Antonio Neri, for one, is doubling down on an edge-to-cloud Platform-as-a-Service strategy and accelerating HPE's Everything-as-a-Service model in the wake of the pandemic.

In Toffler's amazingly prescient vision of the information era, citizens are, for the most part, inextricably linked to their homes, doing their own manufacturing and consumption from those "electronic cottages."

That's the world we find ourselves living in now. Those solution providers that are able to absorb this kind of future shock are going to thrive. Those that don't will disappear into the past.

See the article here:

COLUMN: Future Shock -- COVID-19 Channel Upheaval - CRN: Technology news for channel partners and solution providers

4th World Intelligence Congress to be held online – PRNewswire

In contrast with previous WICs, the event will be held online this year. Utilizing such smart technologies as artificial intelligence, augmented and virtual reality, the congress will bring together state leaders, experts and entrepreneurs from around the world in real-time. Together, they will discuss the development of AI and the building of a community with a shared future for mankind. The WIC aims to offer an international platform for creating better lives through the development of emerging industries in the new era.

During the congress, a wide range of innovative forums, exhibitions and competitions will be held online, such as the 2020 World Intelligence Driving Challenge and Haihe Entrepreneurial Talent Competition. All these activities will center around the theme of "Intelligent New Era: Innovation, Energization and Ecology," highlighting the WIC's role in advancing the application of AI in socio-economic development.

The host city Tianjin has vigorously promoted the development of intelligent industry in recent years. Numerous achievements have been made in the city in the field of science and technology, including the Tianhe-1 supercomputer, which is among the fastest in the world, the "PK" operating system, which represents a mainstream trend in related technology roadmaps, and "Brain Talker," the world's first chip designed specifically for use in brain-computer interfaces. In addition, the pilot zone of China's Internet of Vehicles has been approved in the city.

As the birthplace of modern industry in China, Tianjin boasts a solid foundation for industrial development. With the coming of the new era, the national strategy of coordinated development in the Beijing-Tianjin-Hebei region has presented new opportunities for the city. Standing at the forefront of reform and opening-up, Tianjin has established both a national innovation demonstration zone and a free trade zone. As such, there is great room for it to develop intelligent technology and the digital economy. In recent years, Tianjin has launched a targeted action plan, invested tens of billions of yuan in special funds, pooled the strength of universities and research institutions, and improved policies to attract more professional personnel. Through such measures, the city is positioning itself to become a vanguard of AI development, with intelligent technology being applied to transport, public services and daily life. The intelligent industry has also created new opportunities for young people looking for a job or starting their own business.

As one amongst many cities looking to transform, Tianjin epitomizes China's efforts to advance the development of AI, replace old growth drivers with new ones, and promote high-quality development. In fact, AI has also played a prominent role in China's fight against COVID-19.

As a new round of technological revolution is taking place, holding the WIC is in line with global demand. The event is expected to create a platform for exchanges, cooperation, win-win outcomes and mutual benefits, as well as drive the sound development of a new generation of AI. We wish the congress a huge success, and hope that AI can better benefit the people of all countries.

China Mosaic http://www.china.org.cn/video/node_7230027.htm

4th World Intelligence Congress to be held onlinehttp://www.china.org.cn/video/2020-06/22/content_76189084.htm

SOURCE China.org.cn

http://www.china.org.cn

See the rest here:

4th World Intelligence Congress to be held online - PRNewswire

Definition from WhatIs.com – whatis.techtarget.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
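The SMP model, many cores sharing one memory, can be illustrated with a minimal data-parallel sketch: split the work across local cores and combine the partial results. This is an idealized toy, not how production supercomputer codes (which typically use MPI and OpenMP) are written; MPP systems instead pass messages between nodes that each have their own private memory.

```python
# Data-parallel sketch in the SMP spirit: one shared-memory machine,
# the same computation split across several cores.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes a partial sum of squares over its chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    chunks = [data[i::4] for i in range(4)]  # split the work four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(x * x for x in data))  # parallel result matches serial
```

The chunks partition the input, so the sum of the partial sums always equals the serial result; only the wall-clock time changes with the number of cores.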

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of about $60 million today. The CDC could handle three million floating point operations per second (flops).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM's Watson is famous for having adopted cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Year   Supercomputer        Peak speed (Rmax)                           Location
2016   Sunway TaihuLight    93.01 PFLOPS                                Wuxi, China
2013   NUDT Tianhe-2        33.86 PFLOPS                                Guangzhou, China
2012   Cray Titan           17.59 PFLOPS                                Oak Ridge, U.S.
2012   IBM Sequoia          17.17 PFLOPS                                Livermore, U.S.
2011   Fujitsu K computer   10.51 PFLOPS                                Kobe, Japan
2010   Tianhe-IA            2.566 PFLOPS                                Tianjin, China
2009   Cray Jaguar          1.759 PFLOPS                                Oak Ridge, U.S.
2008   IBM Roadrunner       1.026 PFLOPS (1.105 PFLOPS after upgrade)   Los Alamos, U.S.

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.

Read more:

Definition from WhatIs.com - whatis.techtarget.com

Microsoft announces new supercomputer, lays out vision for …

"As we've learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, 'If we could design our dream system, what would it look like?'" said OpenAI CEO Sam Altman. "And then Microsoft was able to build it."

OpenAI's goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

We are seeing that larger-scale systems are an important component in training more powerful models, Altman said.

For customers who want to push their AI ambitions but who don't require a dedicated supercomputer, Azure AI provides access to powerful compute with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today's update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.

"We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly," said Microsoft principal program manager Phil Waymouth. "These large models are going to be an enormous accelerant."

In self-supervised learning, AI models can learn from large amounts of unlabeled data. For example, models can learn deep nuances of language by absorbing large volumes of text and predicting missing words and sentences. Art by Craighton Berman.

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren't new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world's largest publicly available language AI model, with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what's known as self-supervised learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet: Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
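The masked-prediction idea can be shown in a vastly simplified form: count which word most often follows a given context in raw, unlabeled text, then use those counts to fill in a blank. Real models use deep neural networks over billions of pages, but the self-supervision principle, learning from the text itself with no human labels, is the same. The toy corpus below is invented for illustration.

```python
from collections import Counter

# Tiny "Mad Libs" illustration: learn from raw text which word most
# often follows a two-word context, then predict a masked word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

context = ("sat", "on")        # predict the word after "sat on ___"
following = Counter()
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    if (a, b) == context:
        following[c] += 1      # count every word seen after the context

print(following.most_common(1)[0][0])  # → "the"
```

Scaling the same counting idea up to neural networks and web-scale text is, loosely, what turns this trick into a language model.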

"This has enabled things that were seemingly impossible with smaller models," said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company's AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it's possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that's specific to a particular industry or company.

"Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains," he said.

Read the rest here:

Microsoft announces new supercomputer, lays out vision for ...

Top 10 Supercomputers


If someone says "supercomputer," your mind may jump to Deep Blue, and you wouldn't be alone. IBM's silicon chess wizard defeated grandmaster Garry Kasparov in 1997, cementing it as one of the most famous computers of all time (some controversy around the win helped, too). For years, Deep Blue was the public face of supercomputers, but it's hardly the only all-powerful artificial thinker on the planet. In fact, IBM took Deep Blue apart shortly after the historic win! More recently, IBM made supercomputing history with Watson, which defeated "Jeopardy!" champions Ken Jennings and Brad Rutter in a special match.

Brilliant as they were, neither Deep Blue nor Watson would be able to match the computational muscle of the systems on the November 2013 TOP500 list. TOP500 calls itself a list of "the 500 most powerful commercially available computer systems known to us." The supercomputers on this list are a throwback to the early computers of the 1950s -- which took up entire rooms -- except modern computers are using racks upon racks of cutting-edge hardware to produce petaflops of processing power.

Your home computer probably runs on four processor cores. Most of today's supercomputers use hundreds of thousands of cores, and the top entry has more than 3 million.

TOP500 currently relies on the Linpack benchmark, which feeds a computer a series of linear equations to measure its processing performance, although an alternative testing method is in the works. The November 2013 list sees China's Tianhe-2 on top of the world. Every six months, TOP500 releases a list, and a few new computers rise into the ranks of the world's fastest. Here are the champions as of early 2014. Read on to see how they're putting their electronic mettle to work.
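Linpack's measurement boils down to timing a dense linear solve and dividing a known operation count by the elapsed time. Here is a toy version in pure Python, assuming the standard ~(2/3)n³ flop count for Gaussian elimination; real TOP500 runs use the highly optimized HPL code on top of tuned BLAS libraries, so this sketch only illustrates the principle.

```python
import random
import time

# Toy Linpack-style measurement: solve a dense system Ax = b by
# Gaussian elimination with partial pivoting, then estimate flops
# from the ~(2/3) n^3 operation count for the elimination.
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):                            # forward elimination
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

n = 100
A = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
t0 = time.perf_counter()
x = solve(A, b)
elapsed = time.perf_counter() - t0
flops = (2 / 3) * n**3
print(f"~{flops / elapsed / 1e6:.2f} Mflops (pure Python)")
```

A pure-Python run lands in the megaflops range; the petaflops machines on the list are roughly a billion times faster, which is the whole point of the benchmark.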

Read more here:

Top 10 Supercomputers

