17 of the best computers and supercomputers to grace the planet – Pocket-lint

(Pocket-lint) - Supercomputers are the behemoths of the tech world: machines put to specific use to solve incredible problems mere mortals couldn't fathom alone.

From studying the decay of nuclear materials to predicting the path of our planet under global warming and everything in between, these machines do the processing and crunch the numbers, calculating in moments what it would take mere mortals decades or more to decipher.

Earth Simulator was the world's fastest supercomputer between 2002 and 2004. It was created in Japan, as part of the country's "Earth Simulator Project" which was intended to model the effects of global warming on our planet.

The original Earth Simulator supercomputer cost the government 60 billion yen but was a seriously impressive piece of technology for the time, with 5120 processors and 10 terabytes of memory.

It was later replaced by Earth Simulator 2 in 2009 and Earth Simulator 3 in 2015.

The original Earth Simulator supercomputer was surpassed in performance by IBM's Blue Gene/L prototype in 2004. Blue Gene was designed to reach petaFLOP operating speeds while maintaining low power consumption. As a result, the various Blue Gene systems have been ranked as some of the most powerful and most power-efficient supercomputers in the world.

The Blue Gene supercomputers were so named because they were designed to help analyse and understand protein folding and gene development. They were best known for power and performance though, reaching a peak of 596 teraFLOPS. They were then outclassed by IBM's Cell-based Roadrunner system in 2008.

ENIAC was one of the very first supercomputers. It was originally designed by the US Army to calculate artillery firing tables and even to study the possibility of thermonuclear weapons. It was said to be able to calculate in just 30 seconds what it would take a person 20 hours to do.

This supercomputer cost around $500,000 to build (over $6 million in today's money).

Notably, the Electronic Numerical Integrator and Computer was later used to compute 2,037 digits of Pi and it was the first computer to do so. Even that computation took 70 hours to complete.

In 2018, the Chinese supercomputer known as Sunway TaihuLight was listed as the third-fastest supercomputer in the world. This system sported nearly 41,000 processors, each of which had 256 processing cores, meaning a total of over 10 million cores.

This supercomputer was also known to be able to carry out an eye-watering 93 quadrillion calculations per second. It was designed for all sorts of research, from weather forecasting to industrial design, life sciences and everything in between.

The Difference Engine was designed by Charles Babbage in 1822. It was essentially the first computer, or at least one of the first: a machine that could be used to calculate mathematical functions but unfortunately cost an astronomical amount for the time.

This machine was impressive for what it could do but also for the machines it inspired in the years and decades that followed.

IBM's Roadrunner supercomputer was a $100 million system built at the Los Alamos National Laboratory in New Mexico, USA.

In 2008, it managed to become one of the fastest supercomputers on the planet, reaching a top performance of 1.456 petaFLOPS.

Despite taking up 296 server racks and covering 6,000 square feet, Roadrunner still managed to be the fourth-most energy-efficient supercomputer at the time.

The system was used to analyse the decay of US nuclear weapons and examine whether the nuclear materials would remain safe in the following years.

Summit is one of the most recent and most powerful supercomputers built by man. It is another incredible system from IBM, this time installed at Oak Ridge National Laboratory and sponsored by the U.S. Department of Energy.

Between 2018 and June 2020, Summit (also known as OLCF-4) held the record of being the fastest supercomputer in the world, reaching benchmark scores of 148.6 petaFLOPS. Summit was also the first supercomputer to break the exaop barrier (a quintillion operations per second), which it did in a mixed-precision genomics analysis.

Summit boasts 9,216 22-core CPUs and 27,648 Nvidia Tesla V100 GPUs, which have been put to work in all manner of complex research, from earthquake simulation to extreme weather simulation, as well as predicting the lifetime of neutrinos in physics.

Sierra is another supercomputer developed by IBM for the US government. Like Summit, Sierra packs some serious power, with 1,572,480 processing cores and a peak performance of 125 petaFLOPS.

As with IBM Roadrunner, this supercomputer is used to manage the stockpile of US nuclear weapons to assure the safety of those weapons.

Tianhe-2 is another powerful Chinese supercomputer. It's located at the National Supercomputer Center in Guangzhou, China, and cost a staggering 2.4 billion yuan (US$390 million) to build.

It took a team of 1,300 people to create, and their hard work paid off when Tianhe-2 was recognised as the world's fastest supercomputer between 2013 and 2015.

The system sports nearly five million processor cores and 1,375 TiBs of memory, making it able to carry out over 33 quadrillion calculations per second.

The CDC 6600 was built in 1964 for $2,370,000. This machine is thought to be the world's first supercomputer, managing three megaFLOPS, three times the speed of the previous record holder.

At the time, this system was so successful that it became a "must-have" for those carrying out high-end research and, as a result, over 100 of them were built.

The Cray-1 came almost a decade after the CDC 6600, but quickly became one of the most successful supercomputers of the time. This was thanks to its unique design, which included not only an unusual shape but also one of the first successful implementations of a vector processor design.

This was a supercomputer system that sported a 64-bit processor running at 80 MHz with 8 megabytes of RAM, making it capable of a peak performance of 250 megaflops: a significant move forward compared to the CDC 6600, which came a mere decade before.

The Frontera supercomputer is the fastest university supercomputer in the world. In 2019, it achieved 23.5 petaFLOPS, making it able to calculate in a mere second what it would take an average person a billion years to do manually.

The system was designed to help teams at the University of Texas solve massively difficult problems, from molecular dynamics to climate simulations and cancer studies.

Trinity is yet another supercomputer designed to analyse the effectiveness of nuclear weapons.

With 979,072 processing cores and 20.2 petaFLOPS of performance, it's able to simulate all manner of data to ensure the country's stockpile of weapons is safe.

In 2019, IBM built Pangea III, a system purported to be the world's most powerful commercial supercomputer. It was designed for Total, a global energy company.

Pangea III was an AI-optimised supercomputer with a high-performance structure but one that was said to be significantly more power-efficient than previous models.

The system was designed to support seismic data acquisition by geoscientists to establish the location of oil and gas resources. Pangea III has a computing power of 25 petaflops (roughly the same as 130,000 laptops) and ranked 11th in the leaderboards of the top supercomputers at the time.

The Connection Machine 5 is interesting for a number of reasons, not simply because it's a marvellous-looking supercomputer, but also because it's likely the only system on our list to be featured in a Hollywood blockbuster. That's right, this supercomputer appeared on the set of Jurassic Park, where it masqueraded as the park's central control computer.

The Connection Machine 5 was announced in 1991 and later declared the fastest computer in the world in 1993. It ran 1024 cores with peak performance of 131.0 GFLOPS.

It's also said to have been used by the National Security Agency back in its early years.

HPC4 is an Italian supercomputer that's particularly well-known for being energy efficient while still sporting some serious processing power, including 253,600 processor cores and 304,320 GB of memory.

In 2020, the newer HPC5 supercomputer was combined with HPC4 to give 70 petaFLOPS of combined computational capacity. That means this system is capable of performing 70 million billion mathematical operations in a single second.

Selene is Nvidia's supercomputer built on the DGX SuperPOD architecture. This is an Nvidia-powered supercomputer sporting 2,240 NVIDIA A100 GPUs, 560 CPUs and an impressive record that includes being the second most power-efficient supercomputer around.

Selene is particularly impressive when you discover that it was built in just three weeks. We also like that it has its own robot attendant and is able to communicate with human operators via Slack.

Writing by Adrian Willings.

Read the original post:

17 of the best computers and supercomputers to grace the planet - Pocket-lint

Supercomputer finds best way to air out classroom to ward off virus : The Asahi Shimbun – Asahi Shimbun

The world's fastest supercomputer has found opening just one window and one door diagonally opposite each other is the best way to ventilate an air-conditioned classroom to prevent the novel coronavirus from spreading.

A team of researchers from the Riken Center for Computational Science and other institutions crunched the numbers using Japan's supercomputer Fugaku, which ranked No. 1 in the world in June for its calculation speed.

It ran various simulations to determine the best way to ventilate a classroom to prevent the coronavirus from spreading while also keeping the room temperature cool for students to ensure they do not get heatstroke in the hot summer months.

"People can let a certain amount of fresh air into a room while keeping the room temperature cool by opening windows diagonally opposite from each other," said Makoto Tsubokura, a professor of computational science at Kobe University who heads the team. "They can also take other measures, such as opening windows fully during breaks, at the same time to further lower the risk of infections."

For a classroom measuring about 8 meters by 8 meters, with 40 students sitting at their desks, the team simulated various combinations of having the doors and transom windows facing the corridor and other windows open to find the best way to efficiently ventilate the room while it is still being cooled by an air conditioner.

With a window in the back of the room and a door in the front of the room diagonally opposite each other left open by 20 centimeters each, the computer found it takes about 500 seconds for the air in the room to be completely replaced with fresh air.

Under two other configurations, it took roughly 100 seconds each time. One was with all the windows open 20 cm each, with transom windows facing the corridor open. The other was when all the windows were open by 20 cm each with doors at the front and back of the room open 40 cm each.

The first simulation required more time than the other two to ventilate the room because the open window area was smaller. But the amount of air replaced in the first setting was calculated at 1,190 cubic meters per hour.

According to the researchers, when that is converted into the amount of air ventilated per person in the room, it is equivalent to the ventilation standards for a common office under the law.
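To make that conversion concrete, here is a back-of-the-envelope sketch using only the figures quoted above; the roughly 30 cubic meters per person per hour used as the office benchmark is our own assumption for illustration, not a number given in the article.

```python
# Rough check of the per-person ventilation figure quoted above.
airflow_m3_per_hour = 1190        # air replaced per hour in the window/door setting
students = 40                     # pupils in the simulated classroom

per_person = airflow_m3_per_hour / students
print(f"Ventilation per person: {per_person:.1f} m^3/hour")   # ~29.8 m^3/hour

# Assumption for illustration: a common office standard of ~30 m^3/person/hour.
assumed_office_standard = 30
print("Comparable to assumed office benchmark:", abs(per_person - assumed_office_standard) < 2)
```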

The team concluded that a room can be adequately ventilated by opening windows diagonally opposite from each other when accounting for air conditioning efficiency in the summer and heating in the winter.

See the original post here:

Supercomputer finds best way to air out classroom to ward off virus : The Asahi Shimbun - Asahi Shimbun

The Supercomputer Breaking Online Gaming Records and Modeling COVID-19 – BioSpace

Humanity is obsessed with making and breaking records in absolutely everything, just ask the good people at Guinness. In science, we don't exactly have a land-speed record for sequencing a genome or characterizing a protein, but we do know how long it takes to discover a therapeutic (typically 1 to 6 years) and get it to market (another decade, with all the tests and trials required). Even then, only about 10% get approved. We have gone from identifying a new virus to having multiple vaccine candidates in clinical testing within six months; that is earth-shattering, record-breaking speed. This was unthinkable with SARS in the mid-2000s, but our rapidly advancing technology and researchers dropping everything to work on SARS-CoV-2 have made the next-to-impossible a reality.

Scaled Up Computing for Record Breaking Games

A big part of this has been global advancements in computing and processing power, leveraging the power of the cloud. Hadean, a UK-based company, has developed a cloud-native supercomputing platform. Their Hadean Platform, a distributed computing platform, streamlines running applications in the cloud by removing excessive middleware and helping scale the process, a journey that has taken them from the world of gaming to modeling a pandemic.

"Our cardinal application is Aether Engine, a spatial simulation engine, but we also have Mesh, the Big Data framework, and we have Muxer, which is a dynamic content delivery network for high performance workloads," said Miriam Keshani, VP of Operations at Hadean.

They took Aether Engine to the biggest gaming conference around, the Game Developers Conference in San Francisco, and were instantly drawn to massive online gaming, specifically EVE Online. Its makers had demonstrated record-breaking massive-scale battles, but that often meant slowing the game down.

"Fast forward to GDC 2019. We were there with the makers of EVE Online, CCP Games, and together broke their world record for the largest number of players in a single game, with 14,000 connected clients, a mixture of human and AI," says Keshani.

The company has continued to work with CCP Games as well as Microsoft's Minecraft. In parallel, Hadean also took their Aether Engine to a whole new level: the molecular level.

Spatial Engines, Scale and Biology

Hadean and Dr. Paul Bates at the Francis Crick Institute in London partnered to investigate protein-protein interactions. The group is pioneering a new technique in the field called Cross-Docking, an approach to find the best holo structures among multiple structures available for a target protein.

"The formation of specific protein-protein interactions is often a key to understanding function. Proteins are flexible molecules and as they interact with each other they change shape / flex in response to each other. These can be major structural changes, or relatively minor movements, but either way a significant challenge in the field is being able to a priori predict the extent of such conformational structure changes and the flexibility of each target," Bates said.

The method can be used to predict protein binding sites, useful for studying disease and for drug design; however, it requires a lot of processing power. This is where the Aether Engine comes in.

"Despite promising results, this method's additional pre-processing steps (to choose the best input structures) make it practically difficult to do at scale," Bates said.

"Publicly available docking servers rely on shared cloud resources, so a full docking run of all 56 protein pairs investigated [at the Crick Institute] takes weeks to complete. We used Aether Engine to sample tens of thousands of possible conformations for 56 protein pairs, profiled by potential energy, and selected candidates for docking according to features in this energy space," Bates said. "This sophisticated sampling of inputs using Aether Engine led to a significant reduction in computation time and negated any additional burden brought on by this pre-processing step."

The research found a 10% uplift in quality compared to other approaches, and the Aether Engine significantly reduced bottlenecks around pre-processing and docking when run as a publicly available server.
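To make the workflow Bates describes more concrete, here is a deliberately simplified, self-contained Python sketch of the same idea: sample many candidate conformations, profile each with a potential-energy score, and keep only the lowest-energy candidates for the expensive docking step. The pseudo-atoms, toy energy function and thresholds are invented placeholders, not the Crick Institute pipeline or the Aether Engine API.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_potential_energy(displacement: np.ndarray) -> float:
    """Toy energy score: penalise atoms that drift far from the reference pose."""
    return float(np.sum(displacement ** 2))

# A "protein" here is just N pseudo-atoms in 3D; a real pipeline would load PDB structures.
n_atoms = 50
reference_pose = rng.normal(size=(n_atoms, 3))

# 1) Sample many candidate conformations as random perturbations of the reference.
n_samples = 10_000
candidates = reference_pose + 0.1 * rng.normal(size=(n_samples, n_atoms, 3))

# 2) Profile every candidate by potential energy.
energies = np.array([toy_potential_energy(c - reference_pose) for c in candidates])

# 3) Select only the lowest-energy candidates as inputs for docking.
k = 20
best_idx = np.argsort(energies)[:k]
print(f"Selected {k} of {n_samples} conformations; best toy energy = {energies[best_idx[0]]:.4f}")
```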

Modeling the Spread of a New Disease

One of the first things we learned about SARS-CoV-2 is how it gains entry to our cells. The spike protein on the viral envelope binds to ACE2, a receptor on the surface of epithelial cells. This binding effectively acts as a gateway for SARS-CoV-2 to enter our cells and begin replicating, spreading infection throughout the airway.

Buoyed by the success of their first study, the Bates Lab and Hadean renewed their partnership to focus on simulating COVID-19. The Aether Engine simulates a model of the lungs, going down over twenty levels, called generations, at each of which the airway bifurcates.

"In the model, the virus is introduced at the top because we assume it was inhaled. There is a partial computational fluid dynamics element to it, as the virus travels down the airway according to a set diffusion rate. As it travels through the lungs there are elements, also known as agents in this type of model, that the virus agent is able to interact with," Keshani said.

The model relies on a number of parameters and can be used to measure the effect of treatments on viral replication in the lungs.

"How we tweak these parameters will depend on keeping track of the literature over time. If there is an interaction between these two agents, the virus will invade the cell and ultimately cause it to burst after replicating inside the cell. Some of these agents will go back into the airways and some into the interstitial lung space. But there are other elements at play: the immune system fights back, here shown by the antibody and T-cell response, and anti-viral drug interventions can be added to the mix," Keshani said.
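A minimal agent-based sketch of that kind of model is shown below. It is a toy illustration of the mechanics Keshani describes (virus agents diffusing down airway generations, infecting cells and bursting to release new virus), not Hadean's actual Aether Engine model; every rate and count in it is an invented placeholder.

```python
import random

random.seed(1)

GENERATIONS = 20        # airway levels; the airway bifurcates at each generation
DIFFUSE_PROB = 0.30     # chance a virus agent moves one generation deeper per step
INFECT_PROB = 0.05      # chance a virus agent infects a cell where it sits
BURST_DELAY = 5         # steps between infection and the cell bursting
BURST_RELEASE = 5       # new virus agents released when an infected cell bursts

viruses = [0] * 50                  # virus agents start at the top (inhaled)
cells = [200] * GENERATIONS         # healthy cells per airway generation
infected = []                       # [generation, steps_until_burst]

for step in range(40):
    # Virus agents diffuse deeper and may infect a healthy cell at their location.
    for i, gen in enumerate(viruses):
        if random.random() < DIFFUSE_PROB and gen < GENERATIONS - 1:
            viruses[i] = gen + 1
        if cells[viruses[i]] > 0 and random.random() < INFECT_PROB:
            cells[viruses[i]] -= 1
            infected.append([viruses[i], BURST_DELAY])

    # Infected cells count down and eventually burst, releasing more virus agents.
    still_infected = []
    for gen, timer in infected:
        if timer == 0:
            viruses.extend([gen] * BURST_RELEASE)
        else:
            still_infected.append([gen, timer - 1])
    infected = still_infected

print(f"After 40 steps: {len(viruses)} virus agents, "
      f"{sum(200 - c for c in cells)} cells destroyed")
```

An immune response or an antiviral intervention would enter a sketch like this as extra agents or as modifiers on the infection and burst rates.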

It does have its limitations. The model relies on a number of parameters, simplifying the complexity of the human body and its interaction with the disease by simulating the effect of what is happening rather than the actual underlying events.

"It's not always possible, or even necessary, to go into the level of detail that we'd love to see. It's about making trade-offs between what's useful and what's reality," Keshani added.

Supercomputing and the Future of Drug Discovery

Drug discovery is a long and expensive process. In recent years, artificial intelligence platforms have been transforming that process, helping screen drug candidates and shorten the time required to get to clinical trials. Remdesivir was identified by AI platforms scouring existing drugs for potential COVID treatments. But machine and deep learning platforms require a lot of data to train and make better predictions if they are going to break records in drug development outside of a global pandemic. Keshani thinks there is a role for supercomputing here as well.

"If you're able to create a simplification of a world that can model emergent behavior, which is the kind of simulation Aether Engine is able to scale massively, you can start building a picture of what could happen if you let different scenarios play out," Keshani said. "And if you run that same simulation with slightly different parameters 100,000 times or 200,000 times, it's building up a training set."
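Keshani's point about turning repeated runs into training data can be sketched in a few lines. Here `run_simulation` is a stand-in for whatever simulator is being swept (the toy lung model above, for instance), and the parameter ranges are invented for illustration.

```python
import random

random.seed(42)

def run_simulation(infect_prob: float, burst_release: int) -> float:
    """Stand-in for a real simulator: returns a made-up 'viral load' outcome."""
    return infect_prob * burst_release * random.uniform(50, 150)

# Sweep the parameter space and record (parameters, outcome) pairs as training data.
training_set = []
for _ in range(1000):                                # the article talks about 100,000+ runs
    params = {
        "infect_prob": random.uniform(0.01, 0.10),   # invented ranges
        "burst_release": random.randint(1, 20),
    }
    training_set.append((params, run_simulation(**params)))

print(len(training_set), "labelled examples ready for model training")
```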

See the original post here:

The Supercomputer Breaking Online Gaming Records and Modeling COVID-19 - BioSpace

When it comes to hurricane models, which one is best? – KHOU.com

Is the American or European forecast model more accurate? Let's connect the dots!

When it comes to hurricanes, a lot of information can come at you fast, but when it comes to storm forecast models, is one really better than the other?

Let's connect the dots.

American model vs. European model

The two global models you hear about the most are the American and European. The American is officially called the Global Forecast System model and is created and operated by the National Weather Service.

And it's no rinky-dink forecast! It uses a supercomputer considered one of the fastest in the world.

The European model is run by the European Centre for Medium-Range Weather Forecasts, which is the result of a partnership of 34 different nations.

European model outperforms big supercomputer

Looking at last year's forecasts, the European model did do better, especially when we were one to two days out from the storm. That's according to the National Hurricane Center forecast verification report.

According to the Washington Post, it's because the European model is considered computationally more powerful. That's thanks to raw supercomputer power and the math behind the model.

Meteorologists weigh in

No matter how much computer power you have, you still need humans to interpret these models, and that's where a skilled meteorologist comes in.

They look at all the models, weigh their strengths and weaknesses and consider the circumstances for each storm. That's how they let us know when to worry and when to stay calm.

Read the rest here:

When it comes to hurricane models, which one is best? - KHOU.com

Natural Radiation Including Cosmic Rays From Outer Space Can Wreak Havoc With Quantum Computers – SciTechDaily

Study shows the need to shield qubits from natural radiation, like cosmic rays from outer space.

A multi-disciplinary research team has shown that radiation from natural sources in the environment can limit the performance of superconducting quantum bits, known as qubits. The discovery, reported today in the journal Nature, has implications for the construction and operation of quantum computers, an advanced form of computing that has attracted billions of dollars in public and private investment globally.

The collaboration between teams at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) and the Massachusetts Institute of Technology (MIT) helps explain a mysterious source of interference limiting qubit performance.

"Our study is the first to show clearly that low-level ionizing radiation in the environment degrades the performance of superconducting qubits," said John Orrell, a PNNL research physicist, a senior author of the study, and an expert in low-level radiation measurement. "These findings suggest that radiation shielding will be necessary to attain long-sought performance in quantum computers of this design."

Computer engineers have known for at least a decade that natural radiation emanating from materials like concrete and pulsing through our atmosphere in the form of cosmic rays can cause digital computers to malfunction. But digital computers aren't nearly as sensitive as a quantum computer.

"We found that practical quantum computing with these devices will not be possible unless we address the radiation issue," said PNNL physicist Brent VanDevender, a co-investigator on the study.

Natural radiation may interfere with both superconducting dark matter detectors (seen here) and superconducting qubits. Credit: Timothy Holland, PNNL

The researchers teamed up to solve a puzzle that has been vexing efforts to keep superconducting quantum computers working for long enough to make them reliable and practical. A working quantum computer would be thousands of times faster than even the fastest supercomputer operating today. And it would be able to tackle computing challenges that today's digital computers are ill-equipped to take on. "But the immediate challenge is to have the qubits maintain their state, a feat called coherence," said Orrell. "This desirable quantum state is what gives quantum computers their power."

MIT physicist Will Oliver was working with superconducting qubits and became perplexed at a source of interference that helped push the qubits out of their prepared state, leading to decoherence, and making them non-functional. After ruling out a number of different possibilities, he considered the idea that natural radiation from sources like metals found in the soil and cosmic radiation from space might be pushing the qubits into decoherence.

A chance conversation between Oliver, VanDevender, and his long-time collaborator, MIT physicist Joe Formaggio, led to the current project.

To test the idea, the research team measured the performance of prototype superconducting qubits in two different experiments.

The pair of experiments clearly demonstrated the inverse relationship between radiation levels and length of time qubits remain in a coherent state.

Natural radiation in the form of X-rays, beta rays, cosmic rays and gamma rays can penetrate a superconducting qubit and interfere with quantum coherence. Credit: Michael Perkins, PNNL

"The radiation breaks apart matched pairs of electrons that typically carry electric current without resistance in a superconductor," said VanDevender. "The resistance of those unpaired electrons destroys the delicately prepared state of a qubit."

The findings have immediate implications for qubit design and construction, the researchers concluded. For example, the materials used to construct quantum computers should exclude material that emits radiation, the researchers said. In addition, it may be necessary to shield experimental quantum computers from radiation in the atmosphere. At PNNL, interest has turned to whether the Shallow Underground Laboratory, which reduces surface radiation exposure by 99%, could serve future quantum computer development. Indeed, a recent study by a European research team corroborates the improvement in qubit coherence when experiments are conducted underground.

A worker in the ultra-low radiation detection facility at the Shallow Underground Laboratory located at Pacific Northwest National Laboratory. Credit: Andrea Starr, PNNL

"Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing," said VanDevender.

The researchers emphasize that factors other than radiation exposure are bigger impediments to qubit stability for the moment. Things like microscopic defects or impurities in the materials used to construct qubits are thought to be primarily responsible for the current performance limit of about one-tenth of a millisecond. But once those limitations are overcome, radiation begins to assert itself as a limit and will eventually become a problem without adequate natural radiation shielding strategies, the researchers said.

In addition to helping explain a source of qubit instability, the research findings may also have implications for the global search for dark matter, which is thought to make up about 85% of the matter in the universe but has so far escaped human detection with existing instruments. One approach to detecting dark matter signals relies on superconducting detectors of similar design to qubits. Dark matter detectors also need to be shielded from external sources of radiation, because radiation can trigger false recordings that obscure the desirable dark matter signals.

"Improving our understanding of this process may lead to improved designs for these superconducting sensors and lead to more sensitive dark matter searches," said Ben Loer, a PNNL research physicist who is working both in dark matter detection and radiation effects on superconducting qubits. "We may also be able to use our experience with these particle physics sensors to improve future superconducting qubit designs."

For more on this research, read Quantum Computing Performance May Soon Hit a Wall, Due to Interference From Cosmic Rays.

Reference: "Impact of ionizing radiation on superconducting qubit coherence" by Antti P. Vepsäläinen, Amir H. Karamlou, John L. Orrell, Akshunna S. Dogra, Ben Loer, Francisca Vasconcelos, David K. Kim, Alexander J. Melville, Bethany M. Niedzielski, Jonilyn L. Yoder, Simon Gustavsson, Joseph A. Formaggio, Brent A. VanDevender and William D. Oliver, 26 August 2020, Nature. DOI: 10.1038/s41586-020-2619-8

The study was supported by the U.S. Department of Energy, Office of Science, the U.S. Army Research Office, the ARO Multi-University Research Initiative, the National Science Foundation and the MIT Lincoln Laboratory.

Read the rest here:

Natural Radiation Including Cosmic Rays From Outer Space Can Wreak Havoc With Quantum Computers - SciTechDaily

The Tech Field Failed a 25-Year Challenge to Achieve Gender Equality by 2020 – Culture Change Is Key to Getting on Track – Nextgov

In 1995, pioneering computer scientist Anita Borg challenged the tech community to a moonshot: equal representation of women in tech by 2020. Twenty-five years later, we're still far from that goal. In 2018, fewer than 30% of the employees in tech's biggest companies and 20% of faculty in university computer science departments were women.

On Women's Equality Day in 2020, it's appropriate to revisit Borg's moonshot challenge. Today, awareness of the gender diversity problem in tech has increased, and professional development programs have improved women's skills and opportunities. But special programs and fixing women by improving their skills have not been enough. By and large, the tech field doesn't need to fix women, it needs to fix itself.

As former head of a national supercomputer center and a data scientist, I know that cultural change is hard but not impossible. It requires organizations to prioritize and promote material, not symbolic, change. It requires sustained effort and shifts of power to include more diverse players. Intentional strategies to promote openness, ensure equity, diversify leadership and measure success can work. I've seen it happen.

Swimming Upstream

I loved math as a kid. I loved finding elegant solutions to abstract problems. I loved learning that Möbius strips have only one side and that there is more than one size of infinity. I was a math major in college and eventually found a home in computer science in graduate school.

But as a professional, I've seen that tech is skewed by currents that carry men to success and hold women back. In academic computer science departments, women are usually a small minority.

In most organizations I have dealt with, women rarely occupy the top job. From 2001 to 2009, I led a National Science Foundation supercomputer center. Ten years after moving on from that job, I'm still the only woman to have occupied that position.

Several years into my term, I discovered that I was paid one-third less than others with similar positions. Successfully lobbying for pay equity with my peers took almost a year and a sincere threat to step down from a job I loved. In the work world, money implies value, and no one wants to be paid less than their peers.

Changing Culture Takes Persistence

Culture impacts outcomes. During my term as a supercomputer center head, each center needed to procure the biggest, baddest machine in order to get the bragging rights and resources necessary to continue. Supercomputer culture in those days was hypercompetitive and focused on dominance of supercomputing's Top500 ranking.

In this environment, women in leadership were unusual and there was more for women to prove, and quickly, if we wanted to get something done. The fields focus on dominance was reflected in organizational culture.

My team and I set out to change that. Our efforts to include a broader range of styles and skill sets ultimately changed the composition of our centers leadership and management. Improving the organizational culture also translated into a richer set of projects and collaborations. It helped us expand our focus to infrastructure and users and embrace the data revolution early on.

Setting the Stage for Cultural Diversity

Diverse leadership is a critical part of creating diverse cultures. Women are more likely to thrive in environments where they have not only stature, but responsibility, resources, influence, opportunity and power.

I've seen this firsthand as a co-founder of the Research Data Alliance (RDA), an international community organization of more than 10,000 members that has developed and deployed infrastructure to facilitate data sharing and data-driven research. From the beginning, gender balance has been a major priority for RDA, and as we grew, a reality in all leadership groups in the organization.

RDA's plenaries also provide a model for diverse organizational meetings in which speaker lineups are expected to include both women and men, and all-male panels, nicknamed "manels", are strongly discouraged. Women both lead and thrive in this community.

Having women at the table makes a difference. As a board member of the Alfred P. Sloan Foundation, I've seen the organization improve the diversity of annual classes of fellows in the highly prestigious Sloan Research Fellows program. To date, 50 Nobel Prize winners and many professional award winners are former Sloan Research Fellows.

Since 2013, the accomplished community members Sloan has chosen for its Fellowship Selection Committees have been half or more women. During that time, the diversity of Sloan's research fellowship applicant pool and awardees has increased, with no loss of quality.

Calming Cultural Currents

Culture change is a marathon, not a sprint, requiring constant vigilance, many small decisions, and often changes in who holds power. My experience as supercomputer center head, and with the Research Data Alliance, the Sloan Foundation and other groups has shown me that organizations can create positive and more diverse environments. Intentional strategies, prioritization and persistent commitment to cultural change can help turn the tide.

Some years ago, one of my best computer science students told me that she was not interested in a tech career because it was so hard for women to get ahead. Cultures that foster diversity can change perceptions of what jobs women can thrive in, and can attract, rather than repel, women to study and work in tech.

Calming the cultural currents that hold so many women back can move the tech field closer to Borg's goal of equal representation in the future. It's much better to be late than never.

The Sloan Foundation has provided funding to The Conversation US.

Francine Berman is a Hamilton Distinguished Professor of Computer Science at Rensselaer Polytechnic Institute.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More:

The Tech Field Failed a 25-Year Challenge to Achieve Gender Equality by 2020 – Culture Change Is Key to Getting on Track - Nextgov

Cerebras Systems Expands Global Footprint with Toronto Office Opening – HPCwire

TORONTO and LOS ALTOS, Calif., Aug. 26, 2020 – Cerebras Systems announced its international expansion in Canada with the opening of its Toronto office. The regional office, which will be focused on accelerating the company's R&D efforts and establishing an AI center of excellence, will be led by local technology industry veteran Nish Sinnadurai. With more than fifteen engineers currently employed, Cerebras plans to triple its Toronto engineering team in the coming year.

"Canada is a hotbed of technology innovation, and we look forward to driving AI compute excellence throughout the province of Ontario," said Andrew Feldman, CEO and Co-Founder of Cerebras. "We are excited to grow our presence in the region and to attract, hire and develop top local talent in high-performance computing and AI."

"I am pleased that Cerebras has chosen to open a Toronto office to take advantage of the local technology and engineering talent and regional growth opportunities," said John Tory, Mayor of Toronto. "We welcome and celebrate Cerebras' expansion as the company fosters AI growth and innovation in the Toronto Region."

Throughout their due diligence and expansion process, Cerebras Systems worked closely with Toronto Global, a team of experienced business advisors assisting global businesses to expand into the Toronto Region, as well as with the office of the Ontario Senior Economic Officer based in San Francisco.

Nish Sinnadurai will serve as Toronto Site Lead and Director of Software Engineering. Nish comes to Cerebras Systems with deep technical engineering expertise, having previously served as Director of Software Engineering at the Intel Toronto Technology Centre, where he led a multi-disciplinary organization developing large-scale, high-performance software for state-of-the-art systems. Prior to that, he held various roles at Altera (acquired by Intel) and Research in Motion Ltd (now Blackberry).

"I am honored to join the Cerebras team and work alongside a group of world-class engineers who have invented a one-of-a-kind technology with the Wafer-Scale Engine (WSE) and CS-1 system, one of the fastest AI computers ever made," said Nish. "I look forward to helping push the boundaries of AI and machine learning and define the future of computing with our talented team in Toronto."

In November 2019, Cerebras announced Cerebras CS-1, the industry's fastest AI computer, which was recently selected as one of Fast Company's Best World Changing Ideas and a winner of IEEE Spectrum's Emerging Technology Awards. Cerebras also recently announced CS-1 deployments at some of the largest computer facilities in the U.S., including Argonne National Laboratory, Lawrence Livermore National Laboratory and Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer.

Cerebras' flagship product, the CS-1, is powered by the Cerebras Wafer Scale Engine (WSE), which is the industry's first and only wafer-scale processor. The WSE contains 400,000 AI-optimized compute cores, more than one trillion transistors and measures 46,225 square millimeters. The CS-1 system is also comprised of the CS-1 enclosure, which is a complete computer system and delivers power, cooling and data to the WSE; and the Cerebras software platform, which makes the solution quick to deploy and easy to use. These technologies combine to make the CS-1 the highest performing AI accelerator ever built, allowing AI researchers to use their existing software models without modification.

For more information on Cerebras Systems and the Cerebras CS-1, please visit www.cerebras.net.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The Cerebras CS-1 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE). The WSE is the largest chip ever built. It contains 1.2 trillion transistors, covers more than 46,225 square millimeters of silicon and contains 400,000 AI-optimized compute cores. The largest graphics processor on the market has 54 billion transistors, covers 826 square millimeters and has only 6,912 cores. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now train in minutes on the Cerebras WSE.

Source: Cerebras Systems

See the article here:

Cerebras Systems Expands Global Footprint with Toronto Office Opening - HPCwire

Here’s the smallest AI/ML supercomputer ever – TechRadar

NEC is known for its vector processor-powered supercomputers, most notably the Earth Simulator. Typically, NEC's vector processors have been aimed at numerical simulation and similar workloads, but recently NEC unveiled a platform that makes its latest SX-Aurora Tsubasa supercomputer-class processors usable for artificial intelligence and machine learning workloads.

"The vector processor, with advanced pipelining, is a technology that proved itself long ago," wrote Robbert Emery, who is responsible for commercializing NEC Corporation's advanced technologies in HPC and AI/ML platform solutions.

"Vector processing paired with middleware optimized for parallel pipelining is lowering the entry barriers for new AI and ML applications, and is set to solve the challenges both today and in the future that were once only attainable by the hyperscale cloud providers."

The SX-Aurora Tsubasa AI Platform supports both Python and TensorFlow development environments as well as programming languages such as C/C++ and Fortran.

NEC offers multiple versions of its latest SX-Aurora Tsubasa cards for desktops and servers that can handle FHFL (full-height, full-length) cards. The most advanced Vector Engine Processor model is the Type 20, which features 10 cores running at 1.6GHz paired with 48GB of HBM2 memory. The card offers a peak performance of 3.07 FP32 TFLOPS or 6.14 FP16 TFLOPS.

While the peak performance numbers offered by the SX-Aurora Tsubasa look rather pale when compared to those offered by the latest GPUs (which are also a class of vector processors), such as NVIDIA's A100, NEC believes that its vector processors can still be competitive, especially on datasets that require 48GB of onboard memory (as NVIDIA only offers 40GB).

As an added advantage, the NEC SX-Aurora Tsubasa card can run typical supercomputing workloads in a desktop workstation.

NEC does not publish prices of its SX-Aurora Tsubasa cards, but those who want to try the product can contact the company for quotes. In addition, it is possible to try the hardware in the cloud.

Sources: ITMedia, EnterpriseAI, NEC (via HPCwire)

Read the original here:

Here's the smallest AI/ML supercomputer ever - TechRadar

CSC’s Supercomputer Mahti is Now Available to Researchers and Students – HPCwire

"Our efficient national environment supports Finnish research by offering researchers competitive resources to be among the first to solve even the toughest challenges in their fields. It also contributes to facilitating researchers' access to more ambitious international collaborative research projects. The COVID-19 pandemic is a good indication of the importance of our research infrastructure. We can react quickly and allocate resources to research when a critical need arises," says Pekka Lehtovuori, Director of Computing Services at CSC.

Mahti is a robust liquid-cooled supercomputer capable of solving heavy computational tasks. Mahti can be used, for example, for computational drug design, extensive molecular dynamics simulations, or to model space weather and climate change.

"Modern drug design requires extensive computational resources. Supercomputers can be used to analyze how a drug candidate affects the function of proteins in the body, but also the side effects caused by the same drug. Personalized medicine is also highly dependent on CSC's computing environment," says Antti Poso, Professor of Drug Design at the University of Eastern Finland and the University of Tübingen in Germany. "Mahti is also contributing to the re-use of drugs, which enables a faster response even in situations such as the COVID-19 pandemic."

CSC's data management and computing services will also be open to educational use by higher education institutes.

"With the new data management and computing environment, more and more people will be able to take advantage of modern research equipment. The CSC environment is now also available for educational use in universities and academic use in research institutes. The expanded user base helps ensure that we get all the benefits from the investment and that expertise is accumulated not only in CSC but also in user organizations," says Erja Heikkinen, Director at the Ministry of Education and Culture.

Mahti is the fastest supercomputer in the Nordic countries

The supercomputer Mahti is a BullSequana XH2000 system from Atos with two 64-core AMD EPYC (Rome) processors in each node. This processor is the latest version of the EPYC family (7H12). The total number of cores is about 180,000. There are almost 9 petabytes of storage capacity in Mahti.
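As a quick sanity check on those figures, here is a back-of-the-envelope sketch using only the numbers quoted above; it is an estimate, not an official CSC specification.

```python
# Rough node-count estimate from the figures quoted in the article.
cores_per_node = 2 * 64      # two 64-core AMD EPYC 7H12 processors per node
total_cores = 180_000        # "about 180 000" cores in total

print(f"Approximate node count: {total_cores / cores_per_node:.0f}")   # ~1406 nodes
```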

The interconnect network represents the latest technology, and its speed up to the nodes is 200 Gbps (gigabits per second). Mahti is one of the world's first supercomputers with such a fast interconnect.

Mahti ranked 47th in the Supercomputer Top500 list in June 2020 with a maximum performance of 5.39 petaflops. The theoretical peak performance is 7.5 petaflops. Mahti is the fastest supercomputer in the Nordic countries, and when compared to European ones, Mahti ranks as number 17.

On the recent Supercomputer Green500 list, Mahti ranked 44th. The Green500 lists the world's most energy-efficient supercomputers. There are only 14 supercomputers that are both faster and more energy-efficient than Mahti.

"I am really proud of this achievement because of the flawless cooperation with the CSC project team, a perfect example of European cooperation. We set a common goal, which is to provide cutting-edge HPC technology to researchers in Finnish universities and research institutes, and achieved it," says Janne Ahonen, Atos Country Manager Finland & the Baltics.

CSC's new computing environment

The availability of Mahti completes CSC's new data management and computing environment, which consists of Mahti, Puhti, and Allas.

Mahti is the robust supercomputer in CSC's environment, geared towards medium to large scale simulations. Puhti, a BullSequana X400 system from Atos, is a general-purpose supercomputer for a wide range of use cases. Puhti was launched in autumn 2019.

The Puhti artificial intelligence partition, Puhti-AI, is a GPU-accelerated supercomputer specifically designed for artificial intelligence research and applications.

The entire CSC computing environment is served by a common data management solution, Allas, which is based on Ceph object storage technology and has a storage capacity of 12 petabytes.

About CSC

CSC is a Finnish center of expertise in ICT that provides world-class services for research, education, culture, public administration and enterprises, to help them thrive and benefit society at large. http://www.csc.fi

About Atos

Follow this link:

CSC's Supercomputer Mahti is Now Available to Researchers and Students - HPCwire

When it comes to hurricane models, which one is best? – 12newsnow.com KBMT-KJAC

See more here:

When it comes to hurricane models, which one is best? - 12newsnow.com KBMT-KJAC

SberCloud’s Cloud Platform Sweeps Three International Accolades At IT World Awards – Exchange News Direct

AI Cloud, the cloud platform of Sberbank ecosystem member SberCloud, has won awards in three categories at the 15th international IT World Awards.

Featuring executives, professionals, and experts from the world's top IT companies, the judging panel for the award recognized AI Cloud as the gold winner in the New Product-Service of the Year | Artificial Intelligence category, the silver winner in the Data Science Platforms category, and the bronze winner in the Hot Technology of the Year | Artificial Intelligence category.

The award was organized by Network Products Guide, the industry's leading technology research and advisory guide from Silicon Valley, California, U.S., which shares insights with top executives of the world's leading IT companies into the best IT products, solutions, and services.

David Rafalovsky, CTO of Sberbank Group, Executive Vice President, Head of Technology:

"We are proud that the unique IT project of Sberbank and SberCloud has gained international recognition from a qualified jury. AI Cloud and the Christofari were designed for convenient and reliable use of artificial intelligence technology by a wide variety of entities and organizations, from startups and small businesses to large companies and research centers. Many of our partners and companies that are Sberbank ecosystem members already use AI Cloud and the Christofari to develop proprietary products and services."

The universal cloud platform AI Cloud has the computing power of the Christofari supercomputer at its core and allows for the use of artificial intelligence (AI) across a raft of business, industry, science, and education domains. AI Cloud users can work with data, create AI models, train neural networks, and turn them into microservices to get things done in the cloud through a single interface as fast as possible.

Sberbank already uses AI algorithms to recognize and understand human speech while also utilizing them in voice assistants, voice interfaces, behavioral analytics, and other workflow situations.

The architecture and computing power of Russia's fastest supercomputer, Christofari, which was specially made in partnership with NVIDIA to work with AI, let stakeholders train models based on complex neural networks in record time. Thanks to AI Cloud, the Christofari can be accessed from anywhere in the world with an Internet connection. Its capacity, architecture, and affordability make it a unique supercomputer on a global scale.

SberCloud is a cloud platform developed by Sberbank Group to provide services through the IT architecture of the largest bank in Russia, CIS, and Eastern Europe. The infrastructure, IT platforms, and SberCloud services are the pillars of Sberbank Groups digital ecosystem, also being available to external customers, such as companies and governmental organizations.

The Christofari supercomputer is the fastest supercomputer in Russia. Designed by Sberbank and SberCloud together with NVIDIA, it is based on NVIDIA DGX-2 high-performance nodes featuring Tesla V100 computing accelerators. According to the LINPACK benchmark, the supercomputer's Rmax value reached 6.7 PFLOPS, with a peak performance of 8.8 PFLOPS.

Read this article:

SberCloud's Cloud Platform Sweeps Three International Accolades At IT World Awards - Exchange News Direct

Supercomputer Market Growth, Future Prospects And Competitive Analysis (2020-2026) – Bulletin Line

The research report on the global Supercomputer Market offers an all-encompassing analysis of the recent and upcoming state of this industry and examines several strategies for market growth. The report also presents a comprehensive study of the industry environment and industry chain structure, and sheds light on major factors including leading vendors, growth rate, production value, and key regions.

Request for a sample report here @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#request_sample

Top Key Players:

Fujitsu, Cray, HPE, Dell, Lenovo

Supercomputer Market segmentation by region covers:

United States, Canada, Germany, UK, France, Italy, Russia, Switzerland, Sweden, Poland, China, Japan, South Korea, Australia, India, Taiwan, Thailand, Philippines, Malaysia, Brazil, Argentina, Columbia, Chile, Saudi Arabia, UAE, Egypt, Nigeria, South Africa and Rest of the World.

The Supercomputer Market report introduces the industrial chain analysis, downstream buyers, and raw material sources along with a clear view of market dynamics. The report is articulated with a detailed view of the global Supercomputer industry, including global production, sales, global revenue, and CAGR. Additionally, it offers potential insights into Porter's Five Forces, including substitutes, buyers, industry competitors, and suppliers, with genuine information for understanding the global Supercomputer Market.

Get Impressive discount @:

https://www.reportspedia.com/discount_inquiry/discount/68930

Market segment by Type, the product can be split into:

Commercial Industries, Research Institutions, Government Entities, Others

Market segment by Application, split into:

Linux, Unix, Others

The Supercomputer Market study provides viability analysis, SWOT analysis, and various other information about the leading companies operating in the global Supercomputer Market, offering a complete account of the competitive environment of the industry with the aid of thorough company profiles. The research also examines current market performance and future growth prospects for the industry.

Inquire Before Buying @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#inquiry_before_buying

In this study, the years considered to estimate the market size of Supercomputer are as follows:

Table of Contents:

Get Full Table of Content @:

https://www.reportspedia.com/report/semiconductor-and-electronics/global-supercomputer-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/68930#table_of_contents

See the original post:

Supercomputer Market Growth, Future Prospects And Competitive Analysis (2020-2026) - Bulletin Line

A continent works to grow its stake in quantum computing – University World News

AFRICA

"South Africa is a few steps ahead in the advancement of quantum computing and quantum technologies in general," said Mark Tame, professor in photonics at Stellenbosch University in the Western Cape.

South Africa's University of KwaZulu-Natal has also been working on quantum computing for more than a decade, gradually building up a community around the field.

"The buzz about quantum computing in South Africa just started recently due to the agreement between [Johannesburg's] University of the Witwatersrand and IBM," said Professor Francesco Petruccione, interim director, National Institute for Theoretical and Computational Science, and South African Research Chair in Quantum Information Processing and Communication at the School of Chemistry and Physics Quantum Research Group, University of KwaZulu-Natal.

Interest was intensified by Google's announcement last October that it had developed a 53-qubit device which it claimed took 200 seconds to sample one instance of a quantum circuit a million times. The IT company claimed it would take a state-of-the-art digital supercomputer 10,000 years to achieve this.

A University of Waterloo Institute for Quantum Computing paper stresses quantum computers' ability to express a signal (a qubit) of more than one value at the same time (the superposition ability), with that signal being manifested in another device independently, but in exactly the same way (the entanglement ability). This enables quantum computers to handle much more complex questions and problems than standard computers using binary codes of ones and zeros.

The IBM Research Laboratory in Johannesburg offers African researchers the potential to harness such computing power. It was established in 2015, part of a 10-year investment programme through the South African government's Department of Trade and Industry.

It is a portal to the IBM Quantum Experience, a cloud-based quantum computing platform accessible to other African universities that are part of the African Research Universities Alliance (ARUA), which involves 16 of the continent's leading universities (in Ethiopia, Ghana, Kenya, Nigeria, Rwanda, Senegal, Tanzania, Uganda and South Africa).

Levelling of the playing field

"The IBM development has levelled the playing field for students, [giving them] access to the same hardware as students elsewhere in the world. There is nothing to hold them back to develop quantum applications and code. This has been really helpful for us at Stellenbosch to work on projects which need access to quantum processors not available to the general public," said Tame.

While IBM has another centre on the continent, at the Catholic University of Eastern Africa in Nairobi, Kenya, in 2018 the University of the Witwatersrand became the first African university to join the American computing giant's Quantum Computing Network. "They are starting to increase the network to have an army of quantum experts," said Professor Zeblon Vilakazi, a nuclear physicist, and vice-chancellor and principal of the University of the Witwatersrand.

At a continental level, Vilakazi said Africa is still in a learning phase regarding quantum computing. "At this early stage we are still developing the skills and building a network of young students," he said. The university has sent students to IBM's Zurich facility to learn about quantum computing, he said.

To spur cooperation in the field, a Quantum Africa conference has been held every year since 2010, with the first three in South Africa, and others in Algeria and Morocco. Last year's event was in Stellenbosch, while this year's event, to be hosted at the University of Rwanda, was postponed until 2021 due to the COVID-19 pandemic.

Growing African involvement

"Rwanda is making big efforts to set up quantum technology centres, and I have former students now working in Botswana and the Gambia. It is slowly diffusing around the continent," said Petruccione.

Academics participating at the Stellenbosch event included Yassine Hassouni of Mohammed V University, Rabat; Nigerian academic Dr Obinna Abah of Queen's University Belfast; and Haikel Jelassi of the National Centre for Nuclear Sciences and Technologies, Tunisia.

In South Africa, experimental and theoretical work is also being carried out into quantum communications: the use of quantum physics to carry messages via fibre optic cable.

"A lot of work is being done on the hardware side of quantum technologies by various groups, but funding for these things is not the same order of magnitude as in, say, North America, Australia or the UK. We have to do more with less," said Tame.

Stellenbosch, near Cape Town, is carrying out research into quantum computing, quantum communication and quantum sensing (the ability to detect if a quantum-sent message is being read).

"I would like it to grow over the next few years by bringing in more expertise and help the development of quantum computing and technologies for South Africa," said Tame.

Witwatersrand is focusing on quantum optics, as is Petruccione's team, while there is collaboration in quantum computing with the University of Johannesburg and the University of Pretoria.

University programmes

Building up and retaining talent is a key challenge as the field expands in Africa, as is expanding courses in quantum computing.

"South Africa doesn't offer a master's in quantum computing, or an honours programme, which we need to develop," said Petruccione.

This is set to change at the University of the Witwatersrand.

"We will launch a syllabus in quantum computing, and we're in the process of developing courses at the graduate level in physics, natural sciences and engineering. But such academic developments are very slow," said Vilakazi.

Further development will hinge on governmental support, with a framework programme for quantum computing being developed by Petruccione. "There is interest from the [South African] Department of Science and Innovation. Because of [the economic impact of] COVID-19, I hope some money is left for quantum technology, but at least the government is willing to listen to the community," he said.

Universities are certainly trying to tap non-governmental support to expand quantum computing, engaging local industries, banks and pharmaceutical companies to get involved in supporting research.

"We have had some interesting interactions with local banks, but it needs to be scaled up," said Petruccione.

Applications

While African universities are working on quantum computing questions that could be applicable anywhere in the world, there are plans to look into more localised issues. One is drug development for tuberculosis, malaria and HIV, diseases that have afflicted Southern Africa for decades, with quantum computing's ability to handle complex modelling of natural structures a potential boon.

"There is potential there for helping in drug development through quantum simulations. It could also help develop quantum computing networks in South Africa and more broadly across the continent," said Vilakazi.

Agriculture is a further area of application. "The production of fertilisers is very expensive as it requires high temperatures, but bacteria in the soil do it for free. The reason we can't do what bacteria do is because we don't understand it. The hope is that as quantum computing is good at chemical reactions, maybe we can model it and that would lead to cheaper fertilisers," said Petruccione.

With the world in a quantum computing race, with the US and China at the forefront, Africa is well positioned to take advantage of developments. "We can pick the best technology coming out of either country, and that is how Africa should position itself," said Vilakazi.

Petruccione's group currently has collaborations with Russia, India and China. "We want to do satellite quantum communication. The first step is to have a ground station, but that requires investment," he said.

Go here to see the original:

A continent works to grow its stake in quantum computing - University World News

Supercomputer predicts where Spurs will finish in the 2020/21 Premier League table – The Spurs Web

With the Premier League fixtures for the 2020/21 season being released last week, it is that time of the year again when fans and pundits start predicting who will finish where.

There is some cause for optimism for Tottenham fans heading into the season, given the strong manner in which Jose Mourinho's men finished the 2019/20 campaign.

Tottenham only lost once in their final nine games after the restart, a run which enabled the club to sneak into sixth place and book their place in the Europa League.

Spurs will be hoping to carry on the momentum into the start of next season, but they will be aiming a lot higher than sixth in what will be Mourinho's first full season in charge.

However, according to the predictions of Unikrn's supercomputer (as relayed by The Mirror), the Lilywhites will once again miss out on the top four next season.

Based on its calculations, Spurs will end the season in fifth place, one place ahead of their North London rivals Arsenal. Manchester City are predicted to win the title next season just ahead of Liverpool, with Manchester United and Chelsea rounding off the top four.

Spurs Web Opinion

It is too early to be making any predictions considering the transfer market will be open for a while. I believe we will finish in the top four next season as long as we make at least three more intelligent additions (a right-back, centre-back and a striker).

Continued here:

Supercomputer predicts where Spurs will finish in the 2020/21 Premier League table - The Spurs Web

Has the world’s most powerful computer arrived? – The National

The quest to build the ultimate computer has taken a big step forward following breakthroughs in ensuring its answers can be trusted.

Known as a quantum computer, such a machine exploits bizarre effects in the sub-atomic world to perform calculations beyond the reach of conventional computers.

First proposed almost 40 years ago, quantum computing is expected to transform fields ranging from weather forecasting and drug design to artificial intelligence, and tech giants Microsoft, Google and IBM are among those racing to exploit its power.

The power of quantum computers comes from their use of so-called qubits, the quantum equivalent of the 1s and 0s bits used by conventional number-crunchers.

Unlike bits, qubits exploit a quantum effect allowing them to be both 1s and 0s at the same time. The impact on processing power is astonishing. Instead of processing, say, 100 bits in one go, a quantum computer could crunch 100 qubits, equivalent to 2 to the power 100, or a million trillion trillion bits.
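
As a quick arithmetic check of the figure quoted above (a back-of-envelope only, not a statement about how qubits store information): 2 to the power 100 is indeed on the order of a million trillion trillion, i.e. about 10 to the 30.

```python
# Back-of-envelope check of the "2 to the power 100" figure quoted above.
states = 2 ** 100
print(states)                      # 1267650600228229401496703205376
print(f"{states:.2e}")             # ~1.27e+30, i.e. about a million trillion trillion
```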

At least, that is the theory. The problem is that the property of qubits that gives them their abilities, known as quantum superposition, is very unstable.

Once created, even the slightest vibration, temperature shift or electromagnetic signal can disturb the qubits, causing errors in calculations. Unless the superposition can be maintained long enough, the quantum computer either does a few calculations well or a vast amount badly.

For years, the biggest achievement of any quantum computer involved using a few qubits to find the prime factors of 15 (which every schoolchild knows are 3 and 5).

Using complex shielding methods, researchers can now stabilise around 50 qubits long enough to perform impressive calculations.

Last October, Google claimed to have built a quantum computer that solved in 200 seconds a maths problem that would have taken an ultra-fast conventional computer more than 10,000 years.

Yet even this billion-fold speed-up is just a shadow of what would be possible if qubits could be kept stable for longer. At present, many of the qubits have their powers wasted being used to spot and fix errors.
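
As a rough sanity check of that billion-fold figure (simple arithmetic on the claims above, not a benchmark), dividing the claimed 10,000-year classical runtime by the 200-second quantum runtime gives a factor on the order of a billion.

```python
# Rough check of the "billion-fold" comparison: 10,000 years versus 200 seconds.
seconds_per_year = 365.25 * 24 * 3600
classical = 10_000 * seconds_per_year   # claimed classical runtime, in seconds
quantum = 200                           # Google's reported quantum runtime, in seconds
print(f"speed-up factor: {classical / quantum:.1e}")   # ~1.6e+09, on the order of a billion
```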

Now two teams of researchers have independently found new ways of tackling the error problem.

Physicists at the University of Chicago have found a way of keeping qubits stable for longer not by blocking disturbances, but by blurring them.

In some quantum computers, the qubits take the form of electrons whose direction of spin is a superposition of both up and down. By adding a constantly flipping magnetic field, the team found that the electrons rotated so quickly that they barely noticed outside disturbances. The researchers explain the trick with an analogy: "It's like sitting on a merry-go-round with people yelling all around you," says team member Dr Kevin Miao. "When the ride is still, you can hear them perfectly, but if you're rapidly spinning, the noise blurs into a background."
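
The following toy simulation (an illustrative sketch only, not the Chicago group's actual model) captures the spirit of the trick: a qubit accumulating phase error from slowly drifting noise is far less disturbed if the sign of that accumulation is flipped rapidly, because successive noise contributions largely cancel.

```python
import numpy as np

# Toy illustration: a qubit accumulates phase error from slowly drifting noise.
# Rapidly flipping the sign of the accumulation, a crude stand-in for the
# constantly flipping field described above, makes successive noise
# contributions largely cancel.

rng = np.random.default_rng(0)
steps = 10_000
dt = 1.0

# Slowly drifting noise: a random walk with small increments.
noise = np.cumsum(rng.normal(scale=1e-4, size=steps))

static_phase = np.sum(noise * dt)                       # no protection
flips = np.where(np.arange(steps) % 100 < 50, 1, -1)    # flip sign every 50 steps
driven_phase = np.sum(flips * noise * dt)               # with rapid flipping

print(f"unprotected phase error: {abs(static_phase):.3f}")
print(f"with rapid flipping:     {abs(driven_phase):.3f}")   # typically much smaller
```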

Describing their work in the journal Science, the team reported keeping the qubits working for about 1/50th of a second - around 10,000 times longer than their lifetime if left unshielded. According to the team, the technique is simple to use but effective against all the standard sources of disturbance.

Meanwhile, researchers at the University of Sydney have come up with an algorithm that allows a quantum computer to work out how its qubits are being affected by disturbances and fix the resulting errors. Reporting their discovery in Nature Physics, the team says their method is ready for use with current quantum computers, and could work with up to 100 qubits.

These breakthroughs come at a key moment for quantum computing. Even without them, the technology is already spreading beyond research laboratories.

In June, the title of the world's most powerful quantum computer was claimed not by a tech giant but by Honeywell, a company perhaps best known for central heating thermostats.

Needless to say, the claim is contested by some, not least because the machine is reported to have only six qubits. But Honeywell points out that it has focused its research on making those qubits ultra-stable, which allows them to work reliably for far longer than rival systems. Numbers of qubits alone, in other words, are not everything.

And the company insists this is just the start. It plans to boost the performance of its quantum computer ten-fold each year for the next five years, making it 100,000 times more powerful still.

But apart from bragging rights, why is a company like Honeywell trying to take on the tech giants in the race for the ultimate computer?

A key clue can be found in remarks made by Honeywell insiders to Forbes magazine earlier this month. These reveal that the company wants to use quantum computers to discover new kinds of materials.

Doing this involves working out how different molecules interact together to form materials with the right properties. That's something conventional computers are already used for. But quantum computers won't just bring extra number-crunching power to bear. Crucially, like molecules themselves, their behaviour reflects the bizarre laws of quantum theory. And this makes them ideal for creating accurate simulations of quantum phenomena like the creation of new materials.

This often-overlooked feature of quantum computers was, in fact, the original motivation of the brilliant American physicist Richard Feynman, who first proposed their development in 1981.

Honeywell already has plans to use quantum computers to identify better refrigerants. These compounds were once notorious for attacking the Earth's ozone layer, but replacements still have unwanted environmental effects. Because refrigerants are relatively simple chemicals, the search for better ones is already within the reach of current quantum computers.

But Honeywell sees a time when far more complex molecules, such as drugs, will also be discovered using the technology.

For the time being, no quantum computer can match the all-round number-crunching power of standard computers. Just as Honeywell made its claim, the Japanese computer maker Fujitsu unveiled a supercomputer capable of over 500 million billion calculations a second.

Even so, the quantum computer is now a reality and before long it will make even the fastest supercomputer seem like an abacus.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK

Continue reading here:

Has the world's most powerful computer arrived? - The National

Galaxy Simulations Could Help Reveal Origins of Milky Way – Newswise

Newswise Rutgers astronomers have produced the most advanced galaxy simulations of their kind, which could help reveal the origins of the Milky Way and dozens of small neighboring dwarf galaxies.

Their research also could aid the decades-old search for dark matter, which fills an estimated 27 percent of the universe. And the computer simulations of ultra-faint dwarf galaxies could help shed light on how the first stars formed in the universe.

"Our supercomputer-generated simulations provide the highest-ever resolution of a Milky Way-type galaxy," said co-author Alyson M. Brooks, an associate professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. "The high resolution allows us to simulate smaller neighbor galaxies than ever before: the ultra-faint dwarf galaxies. These tiny galaxies are mostly dark matter and therefore are some of the best probes we have for learning about dark matter, and this is the first time that they have ever been simulated around a Milky Way-like galaxy. The sheer variety of the simulated galaxies is unprecedented, including one that lost all of its dark matter, similar to what's been observed in space."

The Rutgers-led team generated two new simulations of Milky Way-type galaxies and their surroundings. They call them the DC Justice League Simulations, naming them after two women who have served on the U.S. Supreme Court: current Associate Justice Elena Kagan and retired Associate Justice Sandra Day O'Connor.

These are cosmological simulations, meaning they begin soon after the Big Bang and model the evolution of galaxies over the entire age of the universe (almost 14 billion years). Bound via gravity, galaxies consist of stars, gas and dust. The Milky Way is an example of a large barred spiral galaxy, according to NASA.

In recent years, scientists have discovered ultra-faint satellite galaxies of the Milky Way, thanks to digital sky surveys that can reach fainter depths than ever. While the Milky Way has about 100 billion stars and is thousands of light years across, ultra-faint galaxies have a million times fewer stars (under 100,000 and as low as a few hundred) and are much smaller, spanning tens of light years. For the first time, the simulations allow scientists to begin modeling these ultra-faint satellite galaxies around a Milky Way-type galaxy, meaning they provide some of the first predictions for what future sky surveys will discover.
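
A quick check of the star-count comparison, using only the figures quoted above: a million times fewer than 100 billion is about 100,000 stars.

```python
# Quick arithmetic on the star-count comparison quoted above.
milky_way_stars = 100e9          # about 100 billion stars
ultra_faint = milky_way_stars / 1e6
print(f"{ultra_faint:,.0f}")     # 100,000 -- matching the "under 100,000" figure
```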

In one simulation, a galaxy lost all its dark matter, and while real galaxies like that have been seen before, this is the first time anyone has simulated such a galaxy. These kinds of results tell scientists what's possible when it comes to forming galaxies, and they are learning new ways that neighbor galaxies can arise, allowing scientists to better understand what telescopes find.

In about a year, the Large Synoptic Survey Telescope, recently renamed the Vera C. Rubin Observatory, will begin a survey targeting the whole sky and scientists expect to find hundreds of ultra-faint galaxies. In recent years, surveys targeting a small patch of the sky have discovered dozens of them.

"Just counting these galaxies can tell scientists about the nature of dark matter. Studying their structure and the motions of their stars can tell us even more," said lead author Elaad Applebaum, a Rutgers doctoral student. "These galaxies are also very old, with some of the most ancient stars, meaning they can tell us about how the first stars formed in the universe."

Scientists at Grinnell College, University of Oklahoma, University of Washington, University of Oslo and the Yale Center for Astronomy & Astrophysics contributed to the study. The research was funded by the National Science Foundation.

Read the original here:

Galaxy Simulations Could Help Reveal Origins of Milky Way - Newswise

ALCC Program Awards Computing Time on ALCF’s Theta Supercomputer to 24 projects – HPCwire

Aug. 6, 2020 - The U.S. Department of Energy's (DOE) Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) has awarded 24 projects a total of 5.74 million node hours at the Argonne Leadership Computing Facility (ALCF) to pursue challenging, high-risk, high-payoff simulations.

Each year, the ASCR program, which manages some of the world's most powerful supercomputing facilities, selects ALCC projects in areas that aim to further DOE mission science and broaden the community of researchers capable of using leadership computing resources.

The ALCC program allocates computational resources at ASCR's supercomputing facilities to research scientists in industry, academia, and national laboratories. In addition to the ALCF, ASCR's supercomputing facilities include the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. The ALCF, OLCF, and NERSC are DOE Office of Science User Facilities.

The 24 projects awarded time on the ALCF's Theta supercomputer are listed below. Some projects received additional computing time at OLCF and/or NERSC (see the full list of awards here). The one-year awards began on July 1.
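
As a back-of-envelope figure (not stated in the announcement), the total allocation works out to roughly 239,000 node-hours per project on average:

```python
# Back-of-envelope: average allocation per project under this year's ALCC awards.
total_node_hours = 5.74e6
projects = 24
print(f"{total_node_hours / projects:,.0f} node-hours per project on average")  # ~239,000
```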

About The Argonne Leadership Computing Facility

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy's (DOE's) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

About the Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

About The U.S. Department of Energy's Office of Science

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science

Source: Nils Heinonen, Argonne Leadership Computing Facility

See the rest here:

ALCC Program Awards Computing Time on ALCF's Theta Supercomputer to 24 projects - HPCwire

GE plans to give offshore wind energy a supercomputing boost – The Verge

GE plans to harness the power of one of the world's fastest supercomputers to propel offshore wind power development in the US. IBM's Summit supercomputer at the US Department of Energy's Oak Ridge National Laboratory will allow GE to simulate air currents in a way the company's never been able to before.

Ultimately, the research could influence the design, control, and operations of future wind turbines. It's also intended to advance the growth of wind power off the East Coast of the US by giving researchers a better grasp of the available wind resources in the Atlantic. The simulations Summit will run can fill in some of the gaps in the historical data, according to GE.

Offshore wind has the potential to provide almost twice the amount of electricity as the US's current electricity usage, according to the American Wind Energy Association. But to make turbines that are hardier and more efficient offshore, researchers need more information. That's where Summit comes in.

"It's like being a kid in a candy store where you have access to this kind of a tool," says Todd Alhart, GE's research communications lead. The Summit supercomputer is currently ranked as the second fastest supercomputer in the world after Japan's Fugaku, according to the Top500 supercomputer speed ranking.

GE's research, to be conducted over the next year in collaboration with the DOE's Exascale Computing Project, would be almost impossible to do without Summit. That's because there's usually a trade-off in their research between resolution and scale. They can typically study how air moves across a single rotor blade with high resolution, or they could examine a bigger picture, like a massive wind farm, but with blurrier vision. In this case, exascale computing should allow them to simulate the flow physics of an entire wind farm with a high enough resolution to study individual turbine blades as they rotate.

"That is really amazing, and cannot be achieved otherwise," says Jing Li, GE research aerodynamics lead engineer.
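
A purely hypothetical back-of-envelope, with made-up domain and resolution numbers (not GE's actual setup), suggests why blade-resolving simulation of a whole farm runs into the resolution-versus-scale trade-off described above:

```python
# Hypothetical back-of-envelope (illustrative numbers only, not GE's actual setup):
# estimate how many grid cells a blade-resolving simulation of a whole offshore
# wind farm might need.

domain_x, domain_y, domain_z = 20_000.0, 20_000.0, 1_000.0   # metres (assumed farm-scale domain)
cell = 0.5                                                   # metres (assumed near-blade resolution)

cells = (domain_x / cell) * (domain_y / cell) * (domain_z / cell)
bytes_per_cell = 200          # assumed storage for velocity, pressure, turbulence fields
print(f"cells:  {cells:.1e}")                               # ~3.2e12 grid cells
print(f"memory: {cells * bytes_per_cell / 1e15:.1f} PB")    # ~0.6 PB with these assumed numbers
```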

Li and her team will focus on studying coastal low-level jets. These are air currents that don't follow the same patterns as winds typically considered in traditional wind turbine design, which gradually increase in speed with height. Coastal low-level jet streams are atypical, according to Li, because wind speeds can rise rapidly up to a certain height before suddenly dropping away. These wind patterns are generally less common, but they occur more frequently along the US East Coast, which is why researchers want to better understand how they affect a turbine's performance.
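
For illustration, the sketch below contrasts the textbook power-law shear profile used in conventional turbine design with a jet-shaped profile; the power-law formula is the standard model, while the jet parameters are invented purely for illustration and are not measured values.

```python
import numpy as np

# Contrast the "gradually increases with height" profile assumed in traditional
# turbine design (the standard power-law shear model) with an illustrative
# coastal low-level-jet shape: speed rising quickly to a peak, then dropping off.

z = np.linspace(10, 300, 30)            # heights above sea level, metres

# Power-law shear profile: v(z) = v_ref * (z / z_ref) ** alpha
v_ref, z_ref, alpha = 8.0, 100.0, 0.14
shear = v_ref * (z / z_ref) ** alpha

# Illustrative low-level jet: a Gaussian bump of extra speed centred at 120 m
jet = 6.0 + 8.0 * np.exp(-((z - 120.0) / 40.0) ** 2)

for zi, s, j in zip(z[::6], shear[::6], jet[::6]):
    print(f"z = {zi:5.0f} m   shear model: {s:4.1f} m/s   jet-like profile: {j:4.1f} m/s")
```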

There's been a growing appetite for offshore wind energy on the East Coast of the US. America's first offshore wind farm was built off the coast of Rhode Island in 2016. A slate of East Coast wind farms is poised to come online over the next several years, with the largest expected to be a $1.6 billion project slated to be built off the coast of New Jersey by 2024.

See the original post:

GE plans to give offshore wind energy a supercomputing boost - The Verge

From WarGames to Terms of Service: How the Supreme Courts Review of Computer Fraud Abuse Act Will Impact Your Trade Secrets – JD Supra

Introduction

The Computer Fraud and Abuse Act (CFAA) is the embodiment of Congress's first attempt to draft laws criminalizing computer hacking. It is rumored that the Act was influenced by the 1983 movie WarGames[1], in which a teenager unintentionally starts a countdown to World War III when he hacks into a military supercomputer.

The law as originally drafted was aimed at hackers who use computers to gain unauthorized access to government computers. But Congress has amended it numerous times over the years, drastically expanding it to cover unauthorized access of any computer used in or affecting interstate or foreign commerce or communication, as well as a variety of other illicit computer activities such as committing fraud using a computer, trafficking in passwords, and damaging computer systems such as through a virus.

The CFAA also provides a private right of action allowing compensation and injunctive relief for anyone harmed by a violation of the law. It has proved very useful in civil and criminal cases of trade secret misappropriation where the trade secret information was obtained by accessing a computer "without authorization" or "exceed[ing] authorized access." It is this language that provides the statute with so much flexibility to be used in trade secret cases, and it is this language that the Supreme Court has decided to take a closer look at in its next term.

Opponents have long argued that the "without authorization" or "exceeds authorized access" language is so unreasonably broad that it criminalizes everyday, insignificant online acts such as password-sharing and violations of websites' Terms of Service. Tim Wu, a professor at Columbia Law School, has called it "the worst law in technology".[2] While it is true that CFAA violations have been, at times, over-aggressively charged, the Supreme Court's decision could drastically curtail how the CFAA can be used to curb trade secret misappropriation.

The Computer Fraud and Abuse Act

As computer technology has proliferated and become more powerful over the years, Congress has expanded the CFAA, both in terms of its scope and its penalties, numerous times since its enactment. In 1984, Congress passed the Comprehensive Crime Control Act, which included the first federal computer crime statute, later codified at 18 U.S.C. 1030, even before the more recognizable form of the modern Internet, i.e., the World Wide Web, was invented.[3] This original bill was in response to a growing problem in counterfeit credit cards and unauthorized use of account numbers or access codes to banking system accounts. H.R. Rep. No. 98-894, at 4 (1984). Congress recognized that the main issue underlying counterfeit credit cards was the potential for fraudulent use of ever-expanding and rapidly-changing computer technology. Id. The purpose of the statute was to deter the activities of so-called hackers who were accessing both private and public computer systems. Id. at 10. In fact, the original bill characterized the 1983 science fiction film WarGames[4] as "a realistic representation of the automatic dialing and access capabilities of the personal computer." Id.

Two years later, Congress significantly expanded the computer crime statute, and it became known as the Computer Fraud and Abuse Act. Congress has further amended the statute over the years to expand the scope of proscribed violations and to provide a civil cause of action for private parties to obtain compensatory damages, injunctive relief, and/or other equitable relief. For example, in the most recent expansion of the CFAA, in 2008, Congress (1) broadened the definition of "protected computers" to include those used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States, which includes servers and other devices connected to the Internet; (2) criminalized threats to steal data on a victim's computer, publicly disclose stolen data, or not repair damage already caused to the computer; (3) added conspiracy as an offense; and (4) allowed for civil and criminal forfeiture of real or personal property used in or derived from CFAA violations.

The CFAA covers a broad range of unlawful computer access and, in relevant part, provides: "[w]hoever . . . intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer" commits a federal crime and may face civil liability. 18 U.S.C. 1030(a)(2), (c), (g). The phrase "without authorization" is not defined in the statute, but the phrase "exceeds authorized access" is defined as: "to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." Id. 1030(e)(6).

A "computer" can be any "electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions," and includes "any data storage facility or communications facility directly related to or operating in conjunction with such device." Id. 1030(e)(1). Courts across the country have interpreted "computer" extremely broadly to include cell phones, Internet-connected devices, cell towers, and stations that transmit wireless signals. E.g., United States v. Kramer, 631 F.3d 900, 902-03 (8th Cir. 2011) (basic cellular phone without Internet connection); United States v. Valle, 807 F.3d 508, 513 (2d Cir. 2015) (restricted databases); United States v. Drew, 259 F.R.D. 449, 457-58 (C.D. Cal. 2009) (Internet website); United States v. Mitra, 405 F.3d 492, 495 (7th Cir. 2005) (computer-based radio system); United States v. Nosal, 844 F.3d 1024, 1050 (9th Cir. 2016) (Reinhardt, J., dissenting) ("This means that nearly all desktops, laptops, servers, smart-phones, as well as any iPad, Kindle, Nook, X-box, Blu-Ray player or any other Internet-enabled device, including even some thermostats qualify as protected." (some internal quotations omitted)). A "protected computer" is any computer that is "used in or affect[s] interstate or foreign commerce or communication" of the United States. 18 U.S.C. 1030(e)(2)(B). Again, courts have construed this term very broadly to include any computer connected to the Internet. E.g., United States v. Nosal, 676 F.3d 854, 859 (9th Cir. 2012) (en banc); United States v. Trotter, 478 F.3d 918, 921 (8th Cir. 2007).

Violations of the CFAA can result in both criminal and civil liability. A criminal conviction under the "exceeds authorized access" provision is typically punished as a fine or a misdemeanor for a first offense, but can be a felony punishable by fines and imprisonment of up to five years in certain situations, such as where the offense was committed for commercial advantage or private financial gain and the value of the information obtained exceeds $5,000. 18 U.S.C. 1030(c)(2)(A), (B). The CFAA also authorizes civil suits for compensatory damages and injunctive or other equitable relief by parties who show, among other things, that a violation of the statute caused them to "suffer[ ] damage or loss" under certain circumstances. Id. 1030(g).

Using the CFAA in Trade Secret Cases

A CFAA claim can be a nice complement to a trade secret misappropriation claim if the act of misappropriation included taking information from a computer system. One key advantage that the CFAA adds to a trade secret misappropriation case is that it is not subject to some of the more restrictive requirements of federal and state trade secret laws. To assert a claim under the Defend Trade Secrets Act, 18 U.S.C. 1836, et seq., the claimant must (among other things): (1) specifically identify the trade secret that was misappropriated; (2) prove that the claimant took reasonable measures to keep the information secret; and (3) prove that the information derives independent economic value from not being generally known or readily ascertainable. See 18 U.S.C. 1839(3).

These requirements can present traps for the unwary, and potential defenses for a defendant. For example, a defendant accused of trade secret misappropriation will often put the plaintiff through its paces to specifically identify the trade secrets that were allegedly misappropriated because failure to do so to the court's satisfaction can lead to an early dismissal. E.g., S & P Fin. Advisors v. Kreeyaa LLC, No. 16-CV-03103-SK, 2016 WL 11020958, at *3 (N.D. Cal. Oct. 25, 2016) (dismissing for failure to state a claim for violation of the DTSA where plaintiff failed to sufficiently state what information defendants allegedly misappropriated and how that information constitutes a trade secret).

Similarly, whether the information was protected by reasonable measures can become a litigation within the litigation. To establish this requirement, the plaintiff typically must spell out all of its security measures, supply evidence of the same, and provide one or more witnesses to testify to the extent and effectiveness of the security measures. Failure to adequately establish reasonable measures has been the downfall of many trade secret claims. E.g., Gov't Employees Ins. Co. v. Nealey, 262 F. Supp. 3d 153, 167-172 (E.D. Pa. 2017) (dismissing plaintiff's DTSA claim for failure to state a claim when it included much of the same information it claimed to be a trade secret in a publicly filed affidavit).

Lastly, the requirement to establish that the information derives independent economic value from not being generally known or readily ascertainable can also be a significant point of contention. Establishing this prong often requires the use of a damages expert and the costly expert discovery that goes along with that. And as with the other requirements of a DTSA claim, failure to establish it adequately can doom the claim. E.g., ATS Grp., LLC v. Legacy Tank & Indus. Servs. LLC, 407 F. Supp. 3d 1186, 1200 (W.D. Okla. 2019) (finding plaintiff failed to state a claim that the information designated as trade secrets derived independent value from remaining confidential when the complaint only recited language from DTSA without alleging, for example, that the secrecy of the information provided it with a competitive advantage).

The elements of a CFAA claim in a civil action (generally, intentionally accessing a protected computer without authorization or exceeding authorization, and causing at least $5,000 in losses[5]) are, in comparison, less burdensome to establish and less perilous for the claimant. Access is typically established through computer logs or forensic analysis. The level of authorization the defendant had is usually easily established from company records and/or a manager's testimony, and the requisite damages of $5,000 is so low that it is easily met in the vast majority of cases. Lastly, the element of intent can be the most contentious, but as with any intent requirement, it can be established through circumstantial evidence. E.g., Fidlar Techs. v. LPS Real Estate Data Sols., Inc., 810 F.3d 1075, 1079 (7th Cir. 2016) ("Because direct evidence of intent is often unavailable, intent to defraud [under the CFAA] may be established by circumstantial evidence and by inferences drawn from examining the scheme itself which demonstrate that the scheme was reasonably calculated to deceive persons of ordinary prudence and comprehension.") (citations and internal quotation marks omitted). Often, the mere fact that the defendant bypassed controls and security messages on the computer is sufficient to establish intent. E.g., Tyan, Inc. v. Garcia, No. CV-15-05443-MWF (JPRx), 2017 WL 1658811, at *14, 2017 U.S. Dist. LEXIS 66805 at *40-41 (C.D. Cal. May 2, 2017) (finding defendant had intent to defraud when he accessed files with a username and password he was not authorized to use).

The Controversy Surrounding the CFAA

Over the years, opponents of the CFAA have argued that it is so unreasonably broad that it effectively criminalizes everyday computer behavior:

Every day, millions of ordinary citizens across the country use computers for work and for personal matters. United States v. Nosal, 676 F.3d 854, 862-63 (9th Cir. 2012) (en banc). Accessing information on those computers is virtually always subject to conditions imposed by employers' policies, websites' terms of service, and other third-party restrictions. If, as some circuits hold, the CFAA effectively incorporates all of these limitations, then any trivial breach of such a condition (from checking sports scores at work to inflating one's height on a dating website) is a federal crime.

Petition for Writ of Certiorari, Van Buren v. United States, No. 19-783, at 2.

The most infamous example of overcharging the CFAA is the tragic case of Aaron Swartz. Swartz, an open-Internet activist, connected a computer to the Massachusetts Institute of Technology (MIT) network in a wiring closet and downloaded 4.8 million academic journal articles from the subscription database JSTOR, which he planned to release for free to the public. Federal prosecutors charged him with multiple counts of wire fraud and violations of the CFAA, sufficient to carry a maximum penalty of $1 million in fines and 35 years in prison. United States v. Swartz, No. 11-1-260-NMG, Dkt. No. 2 (D. Mass, July 14, 2011); see also id. at Dkt. No. 53. The prosecutors offered Swartz a plea bargain under which he would serve six months in prison, and Swartz countered the plea offer. But the prosecutors rejected the counteroffer and, two days later, Swartz hanged himself in his Brooklyn apartment.[6]

In 2014, partly in response to public pressure from the Swartz case and in an attempt to provide some certainty to its prosecution of CFAA offenses, the Department of Justice issued a memorandum outlining its charging policy for CFAA violations. Under the new policy, the DOJ explained:

When prosecuting an exceed-authorized-access violation, the attorney for the government must be prepared to prove that the defendant knowingly violated restriction on his authority to obtain or alter information stored on a computer, and not merely that the defendant subsequently misused information or services that he was authorized to obtain from the computer at the time he obtained it.

Department of Justice's Intake and Charging Policy for Computer Crime Matters ("Charging Policy"), Memorandum from U.S. Att'y Gen. to U.S. Att'ys and Asst. Att'y Gens. for the Crim. and Nat'l Sec. Divs. at 4 (Sept. 11, 2014) (available at https://www.justice.gov/criminal-ccips/file/904941/download). Perhaps unsurprisingly, opponents of the law were not sufficiently comforted by prosecutorial promises to not overcharge CFAA claims.

The Supreme Court Is Expected to Clarify What Actions Constitute Violations of the CFAA Uniformly Across the Country

Nathan Van Buren was a police sergeant in Cumming, Georgia. As a law enforcement officer, Van Buren was authorized to access the Georgia Crime Information Center (GCIC) database, which contains license plate and vehicle registration information, for law-enforcement purposes. An acquaintance, Andrew Albo, gave Van Buren $6,000 to run a search of the GCIC to determine whether a dancer at a local strip club was an undercover police officer. Van Buren complied and was arrested by the FBI the next day. It turned out that Albo was cooperating with the FBI, and his request to Van Buren was a ruse invented by the FBI to see if Van Buren would bite.

Following a trial, Van Buren was convicted under the CFAA and sentenced to eighteen months in prison. On appeal, he argued that accessing information that he had authorization to access cannot "exceed[] authorized access" as meant by the statute, even if he did so for an improper or impermissible purpose. The Eleventh Circuit disagreed, siding with the government that United States v. Rodriguez, 628 F.3d 1258 (11th Cir. 2010) was controlling. In Rodriguez, the Eleventh Circuit had held that a person with access to a computer for business reasons "exceed[s]" his authorized access when he "obtain[s] . . . information for a nonbusiness reason." Rodriguez, 628 F.3d at 1263.

In denying Van Buren's appeal, the Eleventh Circuit noted the split that the Supreme Court has now decided to resolve. As with the Eleventh Circuit, the First, Fifth, and Seventh Circuits have all held that a person operates "without authorization" or "exceeds authorized access" when they access information they otherwise are authorized to access, but for an unauthorized purpose. See EF Cultural Travel BV v. Explorica, Inc., 274 F.3d 577, 582-83 (1st Cir. 2001) (defendant exceeded authorized access by collecting proprietary information and know-how to aid a competitor); United States v. John, 597 F.3d 263, 272 (5th Cir. 2010) ("exceed[ing] authorized access" includes exceeding the purposes for which access is authorized); Int'l Airport Ctrs., L.L.C. v. Citrin, 440 F.3d 418, 420-21 (7th Cir. 2006) (CFAA violated when defendant accessed data on his work computer for a purpose that his employer prohibited). Those who favor the broader interpretation argue that an expansive interpretation of the statute is more consistent with congressional intent of stopping bad actors from computer-facilitated crime as computers continue to proliferate, especially in light of the consistent amendments that Congress has enacted to broaden the application of the CFAA. See Guest-Tek Interactive Entm't, Inc. v. Pullen, 665 F. Supp. 2d 42, 45 (D. Mass. 2009) (a narrow reading of the CFAA ignores the consistent amendments that Congress has enacted to broaden its application . . . in the past two decades by the enactment of a private cause of action and a more liberal judicial interpretation of the statutory provisions).

Numerous trial courts have applied these circuits' more expansive interpretation in civil cases against alleged trade secret misappropriators. For example, in Merritt Hawkins & Assocs., LLC v. Gresham, 79 F. Supp. 3d 625 (N.D. Tex. 2015), the court relied on the Fifth Circuit's controlling case, United States v. John, 597 F.3d 263 (5th Cir. 2010), to deny summary judgment to a defendant who was accused of exceeding his authorization when he deleted hundreds of files on the company's computer before terminating his employment. In finding disputed issues of fact, the trial court specifically noted that the Fifth Circuit "agree[d] with the First Circuit that the concept of exceeds authorized access may include exceeding the purposes for which access is authorized." Merritt Hawkins, 79 F. Supp. 3d at 634. Likewise, in Guest-Tek Interactive Entm't, the court noted both interpretations and opted for the broader one in view of guidance from the First Circuit. Guest-Tek Interactive Entm't Inc., 665 F. Supp. 2d at 45-46 (noting that the First Circuit has favored a broader reading of the CFAA) (citing EF Cultural Travel BV, 274 F.3d at 582-84).

On the flip side, three circuits have held that the CFAA's "without authorization" and "exceeds authorized access" provisions do not impose criminal liability on a person with permission to access information on a computer who accesses that information for an improper purpose. In other words, a person violates the CFAA in these circuits only by accessing information he has no authorization to access, regardless of the reason. Valle, 807 F.3d at 527 (CFAA is limited to situations where the user does not have access for any purpose at all); WEC Carolina Energy Sols. LLC v. Miller, 687 F.3d 199, 202, 207 (4th Cir. 2012) (rejecting the view that the CFAA imposes liability on employees who violate a use policy and limiting liability to individuals who access computers without authorization or who obtain or alter information beyond the bounds of their authorized access); Nosal, 676 F.3d at 862-63 (holding that the phrase "exceeds authorized access" in the CFAA does not extend to violations of use restrictions). These courts have all relied on statutory construction plus some version of the rule of lenity: that, when a criminal statute is susceptible to a harsher construction and a less-harsh construction, courts should opt for the latter. For example, as Van Buren pointed out in his petition:

every March, tens of millions of American workers participate in office pools for the NCAA men's basketball tournament ("March Madness"). Such pools typically involve money stakes. When these employees use their company computers to generate their brackets or to check their standing in the pools, they likely violate their employers' computer policies. Again, the answer to the question presented determines whether these employees are guilty of a felony.

Petition for Writ of Certiorari, Van Buren v. United States, No. 19-783, at 12-13; see also Nosal, 676 F.3d at 860-63 (applying use restrictions would turn millions of ordinary citizens into criminals). Numerous trial courts in these jurisdictions have followed suit. See, e.g., Shamrock Foods v. Gast, 535 F. Supp. 2d 962, 967 (D. Ariz. 2008) (concluding that the plain language, legislative history, and principles of statutory construction support the restrictive view of authorization); Lockheed Martin Corp. v. Speed, et al., No. 6:05-cv-1580, 2006 U.S. Dist. LEXIS 53108 at *24 (M.D. Fla. 2006) (finding that the narrow construction follows the statute's plain meaning, and coincidentally, has the added benefit of comporting with the rule of lenity).

So at bottom, the Supreme Court will decide whether the CFAA, in addition to access restrictions, also encompasses use restrictions.

What Future Impact May the Supreme Court's Decision Have on Trade Secret Cases?

If the Supreme Court adopts the narrower, access-restriction-only reading of the CFAA, then the nature and extent of the alleged misappropriator's authorization to access the trade secrets will determine the applicability of the CFAA. Even with this narrower interpretation, however, employers can still proactively take certain steps to improve their chances of being able to assert CFAA claims in the future.

Misappropriation of trade secrets under federal and state statutes, and breach of employment or nondisclosure agreements, are potential claims an employer can assert against an employee who accepts a job offer with a competitor and downloads trade secret information to take with him before leaving the company. Whether the company can also have a claim against this former employee under the CFAA depends on the level of access he had to the employer's computer sources during the course of his employment. If (under the narrower interpretation of the CFAA) the employee downloaded the trade secrets from computer sources he had access to in the course of his ordinary job duties, then the company may not have a CFAA claim because the employee's actions were neither "without authorization" nor "exceed[ing] authorized access." But if the employee did not have permission to access those computer sources in the course of his normal job duties, then he may be guilty of exceeding his authorized access. See Nosal, 676 F.3d at 858 (accessing a computer "without authorization" refers to a scenario where a user lacks permission to access any information on the computer, whereas "exceeds authorized access" refers to a user who has permission to access some information on the computer but then accesses other information to which her authorization does not extend).

There are certain steps employers can take now that will help determine whether they can assert a CFAA claim against an employee if the need arises. First, employees' computer access should be limited to need-to-know. In other words, employees should not be able to access computer resources and information that are not necessary for them to perform their duties. For example, an employee may be provided access to customer and price lists (economic trade secrets), but not have access to servers where source code and technical information (technical trade secrets) are stored. Even within technical areas, an employee's access privileges should be limited as much as possible to their specific areas of work. In addition, employment agreements, confidentiality agreements (with both employees and third parties), and company policies should make clear that employees (and business partners, where applicable) do not have permission to access resources that are not necessary in the performance of their job responsibilities. This may entail some additional IT overhead in tightening up employees' access privileges, but any steps employers can take proactively to convert potential use restrictions into access restrictions will go a long way in preserving the viability of a CFAA claim.
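
As a minimal sketch of the need-to-know idea (the roles, resources and function names below are hypothetical, not taken from the article), access can be expressed as a simple role-to-resource mapping, so that anything outside the mapping is an access violation rather than merely a use-policy violation:

```python
# Hypothetical need-to-know access model: an employee can reach only the
# resources mapped to their role; any other access is unauthorized access,
# not merely misuse of information they were allowed to obtain.

ROLE_ACCESS = {
    "sales":    {"customer_list", "price_list"},                               # economic trade secrets only
    "engineer": {"source_code", "design_docs"},                                # technical trade secrets only
    "it_admin": {"customer_list", "price_list", "source_code", "design_docs"},
}

def is_access_authorized(role: str, resource: str) -> bool:
    """Return True only if the resource is within the role's need-to-know set."""
    return resource in ROLE_ACCESS.get(role, set())

print(is_access_authorized("sales", "price_list"))   # True  -- within the role's scope
print(is_access_authorized("sales", "source_code"))  # False -- outside authorized access
```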

Lastly, even if the Supreme Court decides that use restrictions are overreach, a company's employment agreements, nondisclosure agreements, and computer use policies may still save the day. Under one line of thought, which may survive even if the Supreme Court adopts the narrower interpretation, when an employee breaches one of these agreements or policies, or even just violates her duty of loyalty to the company, that breach can instantly and automatically extinguish her agency relationship with the company and, with it, whatever authority she had to access the company's computers and information. See Int'l Airport Ctrs., 440 F.3d at 420-21 (relying on the employee's breach of his duty of loyalty [which] terminated his agency relationship, and with it his authority to access the laptop, because the only basis of his authority had been that relationship); see also Shurgard Storage, Inc. v. Safeguard Self Storage, Inc., 119 F. Supp. 2d 1121 (W.D. Wash. 2000) (the authority of the plaintiff's former employees ended when they allegedly became agents of the defendant; therefore, for the purposes of this 12(b)(6) motion, they lost their authorization and were without authorization when they allegedly obtained and sent the proprietary information to the defendant via e-mail). Accordingly, it may be possible to bring a CFAA claim where an employee exceeds his authority by, for example, violating a policy that prohibits downloading confidential company files to portable media (essentially, a use restriction), which then automatically forfeits his access rights, resulting in an access-restriction violation.

Conclusion

The Supreme Court may drastically narrow application of the CFAA when it decides the Van Buren case. But there are proactive measures employers can take now to potentially preserve their ability to use the CFAA in cases of trade secret misappropriation.

[1] WarGames (Metro-Goldwyn-Mayer Studios 1983).

[2] Tim Wu, Fixing the Worst Law in Technology, The New Yorker, Mar. 18, 2013 (available at https://www.newyorker.com/news/news-desk/fixing-the-worst-law-in-technology).

[3] Evan Andrews, Who Invented the Internet? (Oct. 28, 2019), https://www.history.com/news/who-invented-the-internet.

[4] WarGames, supra n. 1.

[5] CFAA claims in civil trade secret misappropriation cases are typically brought under 18 U.S.C. 1030(a)(2) or (a)(4). The former states, Whoever

(2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains

(A) information contained in a financial record of a financial institution, or of a card issuer as defined in section 1602(n) of title 15, or contained in a file of a consumer reporting agency on a consumer, as such terms are defined in the Fair Credit Reporting Act (15 U.S.C. 1681 et seq.);

(B) information from any department or agency of the United States; or

(C) information from any protected computer;

shall be punished as provided in subsection (c) of this section. The latter states, Whoever

(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value, unless the object of the fraud and the thing obtained consists only of the use of the computer and the value of such use is not more than $5,000 in any 1-year period

shall be punished as provided in subsection (c) of this section.

These are the portions of the CFAA that are usually more applicable to an employee or former employee who steals his employer's trade secrets. The CFAA includes other provisions directed to outside hacker situations. For example, 18 U.S.C. 1030(a)(5)-(7) address scenarios such as malicious hacking with intent to cause damage or loss, trafficking in passwords, and ransomware.

[6] Sam Gustin, Aaron Swartz, Tech Prodigy and Internet Activist, Is Dead at 26 (January 13, 2013) (available at https://business.time.com/2013/01/13/tech-prodigy-and-internet-activist-aaron-swartz-commits-suicide/).

Here is the original post:

From WarGames to Terms of Service: How the Supreme Courts Review of Computer Fraud Abuse Act Will Impact Your Trade Secrets - JD Supra

A Quintillion Calculations a Second: DOE Calculating the Benefits of Exascale and Quantum Computers – SciTechDaily

By U.S. Department of Energy, August 6, 2020

To keep qubits used in quantum computers cold enough so scientists can study them, DOE's Lawrence Berkeley National Laboratory uses a sophisticated cooling system. Credit: Image courtesy of Thor Swift, Lawrence Berkeley National Laboratory

A quintillion calculations a second. That's one with 18 zeros after it. It's the speed at which an exascale supercomputer will process information. The Department of Energy (DOE) is preparing for the first exascale computer to be deployed in 2021. Two more will follow soon after. Yet quantum computers may be able to complete more complex calculations even faster than these up-and-coming exascale computers. But these technologies complement each other much more than they compete.

It's going to be a while before quantum computers are ready to tackle major scientific research questions. While quantum researchers and scientists in other areas are collaborating to design quantum computers to be as effective as possible once they're ready, that's still a long way off. Scientists are figuring out how to build qubits for quantum computers, the very foundation of the technology. They're establishing the most fundamental quantum algorithms that they need to do simple calculations. The hardware and algorithms need to be far enough along for coders to develop operating systems and software to do scientific research. Currently, we're at the same point in quantum computing that scientists in the 1950s were with computers that ran on vacuum tubes. Most of us regularly carry computers in our pockets now, but it took decades to get to this level of accessibility.

In contrast, exascale computers will be ready next year. When they launch, they'll already be five times faster than our fastest computer, Summit, at Oak Ridge National Laboratory's Leadership Computing Facility, a DOE Office of Science user facility. Right away, they'll be able to tackle major challenges in modeling Earth systems, analyzing genes, tracking barriers to fusion, and more. These powerful machines will allow scientists to include more variables in their equations and improve models' accuracy. As long as we can find new ways to improve conventional computers, we'll do it.
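
A quick consistency check of those numbers (simple arithmetic; Summit's peak is roughly 200 petaflops):

```python
# "Exa" means 10**18 operations per second; Summit's peak is roughly
# 200 petaflops (2e17), so an exascale machine is about five times faster.

exaflop = 1e18                    # one quintillion calculations per second
summit_peak = 200e15              # ~200 petaflops (approximate peak for Summit)
print(f"{exaflop / summit_peak:.0f}x faster than Summit")   # ~5x
print(f"{exaflop:.0e} = 1 followed by 18 zeros")
```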

Once quantum computers are ready for prime time, researchers will still need conventional computers. They'll each meet different needs.

DOE is designing its exascale computers to be exceptionally good at running scientific simulations as well as machine learning and artificial intelligence programs. These will help us make the next big advances in research. At our user facilities, which are producing increasingly large amounts of data, these computers will be able to analyze that data in real time.

Quantum computers, on the other hand, will be perfect for modeling the interactions of electrons and nuclei that are the constituents of atoms. As these interactions are the foundation for chemistry and materials science, these computers could be incredibly useful. Applications include modeling fundamental chemical reactions, understanding superconductivity, and designing materials from the atom level up. Quantum computers could potentially reduce the time it takes to run these simulations from billions of years to a few minutes. Another intriguing possibility is connecting quantum computers with a quantum internet network. This quantum internet, coupled with the classical internet, could have a profound impact on science, national security, and industry.

Just as the same scientist may use both a particle accelerator and an electron microscope depending on what they need to do, conventional and quantum computing will each have different roles to play. Scientists supported by the DOE are looking forward to refining the tools that both will provide for research in the future.

Read the original post:

A Quintillion Calculations a Second: DOE Calculating the Benefits of Exascale and Quantum Computers - SciTechDaily