Supercomputer Market Growth, Future Prospects And Competitive Analysis (2020-2026) – Bulletin Line

The research report on the global Supercomputer Market offers a comprehensive analysis of the industry's current and projected state, along with the growth strategies shaping it. The report examines the industry environment and industry chain structure in depth, and covers major factors including leading vendors, growth rate, production value, and key regions.


Top Key Players:


Supercomputer Market segmentation by region; the regional analysis covers:

United States, Canada, Germany, UK, France, Italy, Russia, Switzerland, Sweden, Poland, China, Japan, South Korea, Australia, India, Taiwan, Thailand, Philippines, Malaysia, Brazil, Argentina, Colombia, Chile, Saudi Arabia, UAE, Egypt, Nigeria, South Africa and Rest of the World.

The Supercomputer Market report introduces the industrial chain analysis, downstream buyers, and raw material sources, along with a clear view of market dynamics. It presents a detailed picture of the global Supercomputer industry, including global production, sales, revenue, and CAGR. Additionally, it offers a Porter's Five Forces analysis covering substitutes, buyers, industry competitors, and suppliers, providing grounded information for understanding the global Supercomputer Market.


Market segment by Type, the product can be split into:

Commercial Industries, Research Institutions, Government Entities, Others

Market segment by Application, split into:


The Supercomputer Market study presents a viability analysis, a SWOT analysis, and other information about the leading companies operating in the global Supercomputer Market, giving a complete account of the competitive environment through thorough company profiles. The research also examines current market performance and the industry's future growth prospects.


In this study, the years considered to estimate the market size of Supercomputer are as follows:

Table of Contents:



A continent works to grow its stake in quantum computing – University World News


"South Africa is a few steps ahead in the advancement of quantum computing and quantum technologies in general," said Mark Tame, professor in photonics at Stellenbosch University in the Western Cape.

South Africa's University of KwaZulu-Natal has also been working on quantum computing for more than a decade, gradually building up a community around the field.

"The buzz about quantum computing in South Africa just started recently due to the agreement between [Johannesburg's] University of the Witwatersrand and IBM," said Professor Francesco Petruccione, interim director, National Institute for Theoretical and Computational Science, and South African Research Chair in Quantum Information Processing and Communication at the School of Chemistry and Physics Quantum Research Group, University of KwaZulu-Natal.

Interest was intensified by Google's announcement last October that it had developed a 53-qubit device which it claimed took 200 seconds to sample one instance of a quantum circuit a million times. The IT company claimed it would take a state-of-the-art digital supercomputer 10,000 years to achieve this.

A University of Waterloo Institute for Quantum Computing paper stresses quantum computers' ability to hold a signal (a qubit) in more than one value at the same time (the superposition ability), with that signal being manifested in another device independently but in exactly the same way (the entanglement ability). This enables quantum computers to handle much more complex questions and problems than standard computers using binary codes of ones and zeros.
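Those two properties can be illustrated with a short state-vector calculation. Below is a minimal sketch in Python with NumPy, not tied to any machine mentioned in the article: a Hadamard gate creates the superposition, and a CNOT gate entangles two qubits into a Bell state.

```python
import numpy as np

# A qubit is a 2-component complex vector; |0> = [1,0], |1> = [0,1].
zero = np.array([1, 0], dtype=complex)

# Hadamard gate: puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT gate: flips the second qubit when the first is 1, entangling them.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the first qubit in superposition, then entangle:
state = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2)
state = CNOT @ state              # Bell state: (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2
print(probs.real)  # [0.5, 0, 0, 0.5]
```

Measuring either qubit now instantly determines the other (only the 00 and 11 outcomes are possible), which is the "manifested in another device in exactly the same way" behaviour the paper describes.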

The IBM Research Laboratory in Johannesburg offers African researchers the potential to harness such computing power. It was established in 2015, part of a 10-year investment programme through the South African government's Department of Trade and Industry.

It is a portal to the IBM Quantum Experience, a cloud-based quantum computing platform accessible to other African universities that are part of the African Research Universities Alliance (ARUA), which involves 16 of the continent's leading universities (in Ethiopia, Ghana, Kenya, Nigeria, Rwanda, Senegal, Tanzania, Uganda and South Africa).

Levelling of the playing field

"The IBM development has levelled the playing field for students, [giving them] access to the same hardware as students elsewhere in the world. There is nothing to hold them back from developing quantum applications and code. This has been really helpful for us at Stellenbosch to work on projects which need access to quantum processors not available to the general public," said Tame.

While IBM has another centre on the continent, at the Catholic University of Eastern Africa in Nairobi, Kenya, in 2018 the University of the Witwatersrand became the first African university to join the American computing giant's Quantum Computing Network. "They are starting to increase the network to have an army of quantum experts," said Professor Zeblon Vilakazi, a nuclear physicist, and vice-chancellor and principal of the University of the Witwatersrand.

At a continental level, Vilakazi said Africa is still in a learning phase regarding quantum computing. "At this early stage we are still developing the skills and building a network of young students," he said. The university has sent students to IBM's Zurich facility to learn about quantum computing, he said.

To spur cooperation in the field, a Quantum Africa conference has been held every year since 2010, with the first three in South Africa, and others in Algeria and Morocco. Last year's event was in Stellenbosch, while this year's event, to be hosted at the University of Rwanda, was postponed until 2021 due to the COVID-19 pandemic.

Growing African involvement

"Rwanda is making big efforts to set up quantum technology centres, and I have former students now working in Botswana and the Gambia. It is slowly diffusing around the continent," said Petruccione.

Academics participating at the Stellenbosch event included Yassine Hassouni of Mohammed V University, Rabat; Nigerian academic Dr Obinna Abah of Queen's University Belfast; and Haikel Jelassi of the National Centre for Nuclear Sciences and Technologies, Tunisia.

In South Africa, experimental and theoretical work is also being carried out into quantum communications: the use of quantum physics to carry messages via fibre optic cable.

"A lot of work is being done on the hardware side of quantum technologies by various groups, but funding for these things is not the same order of magnitude as in, say, North America, Australia or the UK. We have to do more with less," said Tame.

Stellenbosch, near Cape Town, is carrying out research into quantum computing, quantum communication and quantum sensing (the ability to detect if a quantum-sent message is being read).

"I would like it to grow over the next few years by bringing in more expertise and helping the development of quantum computing and technologies for South Africa," said Tame.

Witwatersrand is focusing on quantum optics, as is Petruccione's team, while there is collaboration in quantum computing with the University of Johannesburg and the University of Pretoria.

University programmes

Building up and retaining talent is a key challenge as the field expands in Africa, as is expanding courses in quantum computing.

"South Africa doesn't offer a master's in quantum computing, or an honours programme, which we need to develop," said Petruccione.

This is set to change at the University of the Witwatersrand.

"We will launch a syllabus in quantum computing, and we're in the process of developing courses at the graduate level in physics, natural sciences and engineering. But such academic developments are very slow," said Vilakazi.

Further development will hinge on governmental support, with a framework programme for quantum computing being developed by Petruccione. "There is interest from the [South African] Department of Science and Innovation. Because of [the economic impact of] COVID-19, I hope some money is left for quantum technology, but at least the government is willing to listen to the community," he said.

Universities are certainly trying to tap non-governmental support to expand quantum computing, engaging local industries, banks and pharmaceutical companies to get involved in supporting research.

"We have had some interesting interactions with local banks, but it needs to be scaled up," said Petruccione.


While African universities are working on quantum computing questions that could be applicable anywhere in the world, there are plans to look into more localised issues. One is drug development for tuberculosis, malaria and HIV, diseases that have afflicted Southern Africa for decades, with quantum computing's ability to handle complex modelling of natural structures a potential boon.

"There is potential there for helping in drug development through quantum simulations. It could also help develop quantum computing networks in South Africa and more broadly across the continent," said Vilakazi.

Agriculture is a further area of application. "The production of fertilisers is very expensive as it requires high temperatures, but bacteria in the soil do it for free. The reason we can't do what bacteria do is because we don't understand it. The hope is that as quantum computing is good at chemical reactions, maybe we can model it and that would lead to cheaper fertilisers," said Petruccione.

With the world in a quantum computing race, with the US and China at the forefront, Africa is well positioned to take advantage of developments. "We can pick the best technology coming out of either country, and that is how Africa should position itself," said Vilakazi.

Petruccione's group currently has collaborations with Russia, India and China. "We want to do satellite quantum communication. The first step is to have a ground station, but that requires investment," he said.


Supercomputer predicts where Spurs will finish in the 2020/21 Premier League table – The Spurs Web

With the Premier League fixtures for the 2020/21 season being released last week, it is that time of the year again when fans and pundits start predicting who will finish where.

There is some cause for optimism for Tottenham fans heading into the season, given the strong manner in which Jose Mourinho's men finished the 2019/20 campaign.

Tottenham only lost once in their final nine games after the restart, a run which enabled the club to sneak into sixth place and book their place in the Europa League.

Spurs will be hoping to carry the momentum into the start of next season, but they will be aiming a lot higher than sixth in what will be Mourinho's first full season in charge.

However, according to the predictions of Unikrn's supercomputer (as relayed by The Mirror), the Lilywhites will once again miss out on the top four next season.

Based on its calculations, Spurs will end the season in fifth place, one place ahead of their North London rivals Arsenal. Manchester City are predicted to win the title next season just ahead of Liverpool, with Manchester United and Chelsea rounding off the top four.

Spurs Web Opinion

It is too early to be making any predictions considering the transfer market will be open for a while. I believe we will finish in the top four next season as long as we make at least three more intelligent additions (a right-back, centre-back and a striker).


Has the world’s most powerful computer arrived? – The National

The quest to build the ultimate computer has taken a big step forward following breakthroughs in ensuring its answers can be trusted.

Known as a quantum computer, such a machine exploits bizarre effects in the sub-atomic world to perform calculations beyond the reach of conventional computers.

First proposed almost 40 years ago, quantum computing is expected to transform fields ranging from weather forecasting and drug design to artificial intelligence, and tech giants Microsoft, Google and IBM are among those racing to exploit its power.

The power of quantum computers comes from their use of so-called qubits, the quantum equivalent of the 1s and 0s (the bits) used by conventional number-crunchers.

Unlike bits, qubits exploit a quantum effect allowing them to be both 1s and 0s at the same time. The impact on processing power is astonishing. Instead of processing, say, 100 bits in one go, a quantum computer could crunch 100 qubits, equivalent to 2 to the power 100, or a million trillion trillion bits.
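The arithmetic behind that scaling can be checked directly: describing an n-qubit state classically takes 2^n complex amplitudes, so the cost of simulating it doubles with every qubit added. A quick illustrative sketch (the 16-bytes-per-amplitude figure is an assumption, corresponding to one double-precision complex number):

```python
# Amplitudes needed to describe an n-qubit state classically: 2**n.
for n in (10, 50, 100):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16  # ~16 bytes per complex amplitude
    print(f"{n} qubits -> {amplitudes:.3e} amplitudes, ~{bytes_needed:.3e} bytes")

print(2 ** 100)  # 1267650600228229401496703205376, roughly 1.27e30
```

That last figure is the "million trillion trillion" (10^30) the article refers to; at 100 qubits, a direct classical simulation would need around 2e31 bytes of memory, far beyond any supercomputer.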

At least, that is the theory. The problem is that the property of qubits that gives them their abilities, known as quantum superposition, is very unstable.

Once created, even the slightest vibration, temperature shift or electromagnetic signal can disturb the qubits, causing errors in calculations. Unless the superposition can be maintained long enough, the quantum computer either does a few calculations well or a vast amount badly.

For years, the biggest achievement of any quantum computer involved using a few qubits to find the prime factors of 15 (which every schoolchild knows are 3 and 5).

Using complex shielding methods, researchers can now stabilise around 50 qubits long enough to perform impressive calculations.

Last October, Google claimed to have built a quantum computer that solved in 200 seconds a maths problem that would have taken an ultra-fast conventional computer more than 10,000 years.

Yet even this billion-fold speed-up is just a shadow of what would be possible if qubits could be kept stable for longer. At present, many of the qubits are wasted on spotting and fixing errors.

Now two teams of researchers have independently found new ways of tackling the error problem.

Physicists at the University of Chicago have found a way of keeping qubits stable for longer not by blocking disturbances, but by blurring them.

In some quantum computers, the qubits take the form of electrons whose direction of spin is a superposition of both up and down. By adding a constantly flipping magnetic field, the team found that the electrons rotated so quickly that they barely noticed outside disturbances. The researchers explain the trick with an analogy: "It's like sitting on a merry-go-round with people yelling all around you," says team member Dr Kevin Miao. "When the ride is still, you can hear them perfectly, but if you're rapidly spinning, the noise blurs into a background."
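The flipping trick is, in essence, a form of dynamical decoupling: reversing the qubit fast enough makes slowly varying noise cancel itself out. A minimal numeric sketch of the simplest case, a single "spin echo" flip against a constant but unknown detuning (the numbers are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def final_phase(detuning, total_time, flip=False):
    # Phase error a qubit accumulates under a constant, unknown detuning.
    # One flip halfway through reverses the sign of the accumulation,
    # so the second half cancels the first (the "echo").
    if not flip:
        return detuning * total_time
    half = total_time / 2
    return detuning * half - detuning * half

# Each run sees a different random detuning (slow noise, frozen per run).
detunings = rng.normal(0.0, 1.0, size=1000)
no_echo = np.array([final_phase(d, 1.0) for d in detunings])
echo = np.array([final_phase(d, 1.0, flip=True) for d in detunings])

print(no_echo.std())  # spread of order 1: the qubit dephases run to run
print(echo.std())     # exactly 0: the static noise is refocused
```

The Chicago team's continuously flipping field plays an analogous role to repeating such reversals very rapidly, so that even noise varying during a run is averaged away.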

Describing their work in the journal Science, the team reported keeping the qubits working for about 1/50th of a second, around 10,000 times longer than their lifetime if left unshielded. According to the team, the technique is simple to use but effective against all the standard sources of disturbance.

Meanwhile, researchers at the University of Sydney have come up with an algorithm that allows a quantum computer to work out how its qubits are being affected by disturbances and fix the resulting errors. Reporting their discovery in Nature Physics, the team says their method is ready for use with current quantum computers, and could work with up to 100 qubits.

These breakthroughs come at a key moment for quantum computing. Even without them, the technology is already spreading beyond research laboratories.

In June, the title of world's most powerful quantum computer was claimed not by a tech giant but by Honeywell, a company perhaps best known for central heating thermostats.

Needless to say, the claim is contested by some, not least because the machine is reported to have only six qubits. But Honeywell points out that it has focused its research on making those qubits ultra-stable, which allows them to work reliably for far longer than rival systems. Numbers of qubits alone, in other words, are not everything.

And the company insists this is just the start. It plans to boost the performance of its quantum computer ten-fold each year for the next five years, making it 100,000 times more powerful still.

But apart from bragging rights, why is a company like Honeywell trying to take on the tech giants in the race for the ultimate computer?

A key clue can be found in remarks made by Honeywell insiders to Forbes magazine earlier this month. These reveal that the company wants to use quantum computers to discover new kinds of materials.

Doing this involves working out how different molecules interact together to form materials with the right properties. That's something conventional computers are already used for. But quantum computers won't just bring extra number-crunching power to bear. Crucially, like molecules themselves, their behaviour reflects the bizarre laws of quantum theory. And this makes them ideal for creating accurate simulations of quantum phenomena like the creation of new materials.

This often-overlooked feature of quantum computers was, in fact, the original motivation of the brilliant American physicist Richard Feynman, who first proposed their development in 1981.

Honeywell already has plans to use quantum computers to identify better refrigerants. These compounds were once notorious for attacking the Earth's ozone layer, but replacements still have unwanted environmental effects. Being relatively simple chemicals, the search for better refrigerants is already within the reach of current quantum computers.

But Honeywell sees a time when far more complex molecules such as drugs will also be discovered using the technology.

For the time being, no quantum computer can match the all-round number-crunching power of standard computers. Just as Honeywell made its claim, the Japanese computer maker Fujitsu unveiled a supercomputer capable of over 500 million billion calculations a second.

Even so, the quantum computer is now a reality and before long it will make even the fastest supercomputer seem like an abacus.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK

Updated: August 21, 2020 12:06 PM


Galaxy Simulations Could Help Reveal Origins of Milky Way – Newswise

Newswise Rutgers astronomers have produced the most advanced galaxy simulations of their kind, which could help reveal the origins of the Milky Way and dozens of small neighboring dwarf galaxies.

Their research also could aid the decades-old search for dark matter, which fills an estimated 27 percent of the universe. And the computer simulations of ultra-faint dwarf galaxies could help shed light on how the first stars formed in the universe.

"Our supercomputer-generated simulations provide the highest-ever resolution of a Milky Way-type galaxy," said co-author Alyson M. Brooks, an associate professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. "The high resolution allows us to simulate smaller neighbor galaxies than ever before: the ultra-faint dwarf galaxies. These tiny galaxies are mostly dark matter and therefore are some of the best probes we have for learning about dark matter, and this is the first time that they have ever been simulated around a Milky Way-like galaxy. The sheer variety of the simulated galaxies is unprecedented, including one that lost all of its dark matter, similar to what's been observed in space."

The Rutgers-led team generated two new simulations of Milky Way-type galaxies and their surroundings. They call them the DC Justice League Simulations, naming them after two women who have served on the U.S. Supreme Court: current Associate Justice Elena Kagan and retired Associate Justice Sandra Day O'Connor.

These are cosmological simulations, meaning they begin soon after the Big Bang and model the evolution of galaxies over the entire age of the universe (almost 14 billion years). Bound by gravity, galaxies consist of stars, gas and dust. The Milky Way is an example: a large barred spiral galaxy, according to NASA.

In recent years, scientists have discovered ultra-faint satellite galaxies of the Milky Way, thanks to digital sky surveys that can reach fainter depths than ever. While the Milky Way has about 100 billion stars and is thousands of light years across, ultra-faint galaxies have a million times fewer stars (under 100,000, and as few as a few hundred) and are much smaller, spanning tens of light years. For the first time, the simulations allow scientists to begin modeling these ultra-faint satellite galaxies around a Milky Way-type galaxy, meaning they provide some of the first predictions for what future sky surveys will discover.

In one simulation, a galaxy lost all its dark matter, and while real galaxies like that have been seen before, this is the first time anyone has simulated such a galaxy. These kinds of results tell scientists what's possible when it comes to forming galaxies, and they are learning new ways that neighbor galaxies can arise, allowing scientists to better understand what telescopes find.

In about a year, the Large Synoptic Survey Telescope, recently renamed the Vera C. Rubin Observatory, will begin a survey targeting the whole sky and scientists expect to find hundreds of ultra-faint galaxies. In recent years, surveys targeting a small patch of the sky have discovered dozens of them.

Just counting these galaxies can tell scientists about the nature of dark matter. "Studying their structure and the motions of their stars can tell us even more," said lead author Elaad Applebaum, a Rutgers doctoral student. "These galaxies are also very old, with some of the most ancient stars, meaning they can tell us about how the first stars formed in the universe."

Scientists at Grinnell College, University of Oklahoma, University of Washington, University of Oslo and the Yale Center for Astronomy & Astrophysics contributed to the study. The research was funded by the National Science Foundation.


ALCC Program Awards Computing Time on ALCF’s Theta Supercomputer to 24 projects – HPCwire

Aug. 6, 2020: The U.S. Department of Energy's (DOE) Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) has awarded 24 projects a total of 5.74 million node hours at the Argonne Leadership Computing Facility (ALCF) to pursue challenging, high-risk, high-payoff simulations.

Each year, the ASCR program, which manages some of the world's most powerful supercomputing facilities, selects ALCC projects in areas that aim to further DOE mission science and broaden the community of researchers capable of using leadership computing resources.

The ALCC program allocates computational resources at ASCR's supercomputing facilities to research scientists in industry, academia, and national laboratories. In addition to the ALCF, ASCR's supercomputing facilities include the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. The ALCF, OLCF, and NERSC are DOE Office of Science User Facilities.

The 24 projects awarded time on the ALCF's Theta supercomputer are listed below. Some projects received additional computing time at OLCF and/or NERSC (see the full list of awards here). The one-year awards began on July 1.

About The Argonne Leadership Computing Facility

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy's (DOE's) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

About the Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

About The U.S. Department of Energys Office of Science

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit

Source: Nils Heinonen, Argonne Leadership Computing Facility


A Quintillion Calculations a Second: DOE Calculating the Benefits of Exascale and Quantum Computers – SciTechDaily

By U.S. Department of Energy, August 6, 2020

To keep qubits used in quantum computers cold enough so scientists can study them, DOE's Lawrence Berkeley National Laboratory uses a sophisticated cooling system. Credit: Image courtesy of Thor Swift, Lawrence Berkeley National Laboratory

A quintillion calculations a second. That's a one with 18 zeros after it. It's the speed at which an exascale supercomputer will process information. The Department of Energy (DOE) is preparing for the first exascale computer to be deployed in 2021. Two more will follow soon after. Yet quantum computers may be able to complete more complex calculations even faster than these up-and-coming exascale computers. But these technologies complement each other much more than they compete.

It's going to be a while before quantum computers are ready to tackle major scientific research questions. While quantum researchers and scientists in other areas are collaborating to design quantum computers to be as effective as possible once they're ready, that's still a long way off. Scientists are figuring out how to build qubits for quantum computers, the very foundation of the technology. They're establishing the most fundamental quantum algorithms that they need to do simple calculations. The hardware and algorithms need to be far enough along for coders to develop operating systems and software to do scientific research. Currently, we're at the same point in quantum computing that scientists in the 1950s were with computers that ran on vacuum tubes. Most of us regularly carry computers in our pockets now, but it took decades to get to this level of accessibility.

In contrast, exascale computers will be ready next year. When they launch, they'll already be five times faster than our fastest computer, Summit, at Oak Ridge National Laboratory's Leadership Computing Facility, a DOE Office of Science user facility. Right away, they'll be able to tackle major challenges in modeling Earth systems, analyzing genes, tracking barriers to fusion, and more. These powerful machines will allow scientists to include more variables in their equations and improve models' accuracy. As long as we can find new ways to improve conventional computers, we'll do it.

Once quantum computers are ready for prime time, researchers will still need conventional computers. They'll each meet different needs.

DOE is designing its exascale computers to be exceptionally good at running scientific simulations as well as machine learning and artificial intelligence programs. These will help us make the next big advances in research. At our user facilities, which are producing increasingly large amounts of data, these computers will be able to analyze that data in real time.

Quantum computers, on the other hand, will be perfect for modeling the interactions of electrons and nuclei that are the constituents of atoms. As these interactions are the foundation for chemistry and materials science, these computers could be incredibly useful. Applications include modeling fundamental chemical reactions, understanding superconductivity, and designing materials from the atom level up. Quantum computers could potentially reduce the time it takes to run these simulations from billions of years to a few minutes. Another intriguing possibility is connecting quantum computers with a quantum internet network. This quantum internet, coupled with the classical internet, could have a profound impact on science, national security, and industry.

Just as the same scientist may use both a particle accelerator and an electron microscope depending on what they need to do, conventional and quantum computing will each have different roles to play. Scientists supported by the DOE are looking forward to refining the tools that both will provide for research in the future.

For more information, check out this infographic:


GE plans to give offshore wind energy a supercomputing boost – The Verge

GE plans to harness the power of one of the world's fastest supercomputers to propel offshore wind power development in the US. IBM's Summit supercomputer at the US Department of Energy's Oak Ridge National Laboratory will allow GE to simulate air currents in a way the company's never been able to before.

Ultimately, the research could influence the design, control, and operations of future wind turbines. It's also intended to advance the growth of wind power off the East Coast of the US by giving researchers a better grasp of the available wind resources in the Atlantic. The simulations Summit will run can fill in some of the gaps in the historical data, according to GE.

Offshore wind has the potential to provide almost twice the amount of electricity as the US's current electricity usage, according to the American Wind Energy Association. But to make turbines that are hardier and more efficient offshore, researchers need more information. That's where Summit comes in.

"It's like being a kid in a candy store where you have access to this kind of a tool," says Todd Alhart, GE's research communications lead. The Summit supercomputer is currently ranked as the second fastest supercomputer in the world after Japan's Fugaku, according to the Top500 supercomputer speed ranking.

GE's research, to be conducted over the next year in collaboration with the DOE's Exascale Computing Project, would be almost impossible to do without Summit. That's because there's usually a trade-off in this kind of research between resolution and scale. Researchers can typically study how air moves across a single rotor blade in high resolution, or they can examine a bigger picture, like a massive wind farm, but with blurrier vision. In this case, exascale computing should allow them to simulate the flow physics of an entire wind farm at a high enough resolution to study individual turbine blades as they rotate.

"That is really amazing, and cannot be achieved otherwise," says Jing Li, GE research aerodynamics lead engineer.

Li and her team will focus on studying coastal low-level jets. These are air currents that don't follow the pattern assumed in traditional wind turbine design, in which wind speed gradually increases with height. Coastal low-level jets are atypical, according to Li, because wind speeds can rise rapidly up to a certain height before suddenly dropping away. These wind patterns are generally less common, but they occur more frequently along the US East Coast, which is why researchers want to better understand how they affect a turbine's performance.

There's been a growing appetite for offshore wind energy on the East Coast of the US. America's first offshore wind farm was built off the coast of Rhode Island in 2016. A slate of East Coast wind farms is poised to come online over the next several years, with the largest expected to be a $1.6 billion project slated to be built off the coast of New Jersey by 2024.


From WarGames to Terms of Service: How the Supreme Court's Review of the Computer Fraud and Abuse Act Will Impact Your Trade Secrets – JD Supra


The Computer Fraud and Abuse Act (CFAA) is the embodiment of Congress's first attempt to draft laws criminalizing computer hacking. It is rumored that the Act was influenced by the 1983 movie WarGames,[1] in which a teenager unintentionally starts a countdown to World War III when he hacks into a military supercomputer.

The law as originally drafted was aimed at hackers who use computers to gain unauthorized access to government computers. But Congress has amended it numerous times over the years, drastically expanding it to cover unauthorized access of any computer "used in or affecting interstate or foreign commerce or communication," as well as a variety of other illicit computer activities such as committing fraud using a computer, trafficking in passwords, and damaging computer systems, such as through a virus.

The CFAA also provides a private right of action allowing compensation and injunctive relief for anyone harmed by a violation of the law. It has proved very useful in civil and criminal cases of trade secret misappropriation where the trade secret information was obtained by accessing a computer "without authorization" or "exceed[ing] authorized access." It is this language that gives the statute so much flexibility in trade secret cases, and it is this language that the Supreme Court has decided to take a closer look at in its next term.

Opponents have long argued that the "without authorization or exceeds authorized access" language is so unreasonably broad that it criminalizes everyday, insignificant online acts such as password-sharing and violations of websites' Terms of Service. Tim Wu, a professor at Columbia Law School, has called it "the worst law in technology."[2] While it is true that CFAA violations have been, at times, over-aggressively charged, the Supreme Court's decision could drastically curtail how the CFAA can be used to curb trade secret misappropriation.

The Computer Fraud and Abuse Act

As computer technology has proliferated and become more powerful over the years, Congress has expanded the CFAA, both in terms of its scope and its penalties, numerous times since its enactment. In 1984, Congress passed the Comprehensive Crime Control Act, which included the first federal computer crime statute, later codified at 18 U.S.C. § 1030, even before the more recognizable form of the modern Internet, i.e., the World Wide Web, was invented.[3] This original bill was a response to "a growing problem in counterfeit credit cards and unauthorized use of account numbers or access codes to banking system accounts." H.R. Rep. No. 98-894, at 4 (1984). Congress recognized that the main issue underlying counterfeit credit cards was "the potential for fraudulent use of ever-expanding and rapidly-changing computer technology." Id. The purpose of the statute was to deter the activities of so-called hackers who were accessing both private and public computer systems. Id. at 10. In fact, the original bill characterized the 1983 science fiction film WarGames[4] as "a realistic representation of the automatic dialing and access capabilities of the personal computer." Id.

Two years later, Congress significantly expanded the computer crime statute, and it became known as the Computer Fraud and Abuse Act. Congress has further amended the statute over the years to expand the scope of proscribed violations and to provide a civil cause of action for private parties to obtain compensatory damages, injunctive relief, and/or other equitable relief. For example, in the most recent expansion of the CFAA, in 2008, Congress (1) broadened the definition of "protected computers" to include those "used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States," which includes servers and other devices connected to the Internet; (2) criminalized threats to steal data on a victim's computer, publicly disclose stolen data, or not repair damage already caused to the computer; (3) added conspiracy as an offense; and (4) allowed for civil and criminal forfeiture of real or personal property used in or derived from CFAA violations.

The CFAA covers a broad range of unlawful computer access and, in relevant part, provides that "[w]hoever . . . intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer" commits a federal crime and may face civil liability. 18 U.S.C. § 1030(a)(2), (c), (g). The phrase "without authorization" is not defined in the statute, but the phrase "exceeds authorized access" is defined as "to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." Id. § 1030(e)(6).

A "computer" can be any "electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions," and includes "any data storage facility or communications facility directly related to or operating in conjunction with such device." Id. § 1030(e)(1). Courts across the country have interpreted "computer" extremely broadly to include cell phones, Internet-connected devices, cell towers, and stations that transmit wireless signals. E.g., United States v. Kramer, 631 F.3d 900, 902-03 (8th Cir. 2011) (basic cellular phone without Internet connection); United States v. Valle, 807 F.3d 508, 513 (2d Cir. 2015) (restricted databases); United States v. Drew, 259 F.R.D. 449, 457-58 (C.D. Cal. 2009) (Internet website); United States v. Mitra, 405 F.3d 492, 495 (7th Cir. 2005) (computer-based radio system); United States v. Nosal, 844 F.3d 1024, 1050 (9th Cir. 2016) (Reinhardt, J., dissenting) ("This means that nearly all desktops, laptops, servers, smart-phones, as well as any iPad, Kindle, Nook, X-box, Blu-Ray player or any other Internet-enabled device, including even some thermostats qualify as protected." (some internal quotations omitted)). A "protected computer" is any computer that is "used in or affect[s] interstate or foreign commerce or communication" of the United States. 18 U.S.C. § 1030(e)(2)(B). Again, courts have construed this term very broadly to include any computer connected to the Internet. E.g., United States v. Nosal, 676 F.3d 854, 859 (9th Cir. 2012) (en banc); United States v. Trotter, 478 F.3d 918, 921 (8th Cir. 2007).

Violations of the CFAA can result in both criminal and civil liability. A criminal conviction under the "exceeds authorized access" provision is typically a misdemeanor punishable by a fine for a first offense, but can be a felony punishable by fines and imprisonment of up to five years in certain situations, such as where the offense was committed for commercial advantage or private financial gain and the value of the information obtained exceeds $5,000. 18 U.S.C. § 1030(c)(2)(A), (B). The CFAA also authorizes civil suits for compensatory damages and injunctive or other equitable relief by parties who show, among other things, that a violation of the statute caused them to "suffer[ ] damage or loss" under certain circumstances. Id. § 1030(g).

Using the CFAA in Trade Secret Cases

A CFAA claim can be a nice complement to a trade secret misappropriation claim if the act of misappropriation included taking information from a computer system. One key advantage that the CFAA adds to a trade secret misappropriation case is that it is not subject to some of the more restrictive requirements of federal and state trade secret laws. To assert a claim under the Defend Trade Secrets Act (DTSA), 18 U.S.C. § 1836, et seq., the claimant must (among other things): (1) specifically identify the trade secret that was misappropriated; (2) prove that the claimant took reasonable measures to keep the information secret; and (3) prove that the information derives independent economic value from not being generally known or readily ascertainable. See 18 U.S.C. § 1839(3).

These requirements can present traps for the unwary, and potential defenses for a defendant. For example, a defendant accused of trade secret misappropriation will often put the plaintiff through its paces to specifically identify the trade secrets that were allegedly misappropriated, because failure to do so to the court's satisfaction can lead to an early dismissal. E.g., S & P Fin. Advisors v. Kreeyaa LLC, No. 16-CV-03103-SK, 2016 WL 11020958, at *3 (N.D. Cal. Oct. 25, 2016) (dismissing for failure to state a claim for violation of the DTSA where plaintiff failed to sufficiently state what information defendants allegedly misappropriated and how that information constitutes a trade secret).

Similarly, whether the information was protected by reasonable measures can become a litigation within the litigation. To establish this requirement, the plaintiff typically must spell out all of its security measures, supply evidence of the same, and provide one or more witnesses to testify to the extent and effectiveness of the security measures. Failure to adequately establish reasonable measures has been the downfall of many trade secret claims. E.g., Gov't Employees Ins. Co. v. Nealey, 262 F. Supp. 3d 153, 167-172 (E.D. Pa. 2017) (dismissing plaintiff's DTSA claim for failure to state a claim when it included much of the same information it claimed to be a trade secret in a publicly filed affidavit).

Lastly, the requirement to establish that the information derives independent economic value from not being generally known or readily ascertainable can also be a significant point of contention. Establishing this prong often requires the use of a damages expert and the costly expert discovery that goes along with that. And as with the other requirements of a DTSA claim, failure to establish it adequately can doom the claim. E.g., ATS Grp., LLC v. Legacy Tank & Indus. Servs. LLC, 407 F. Supp. 3d 1186, 1200 (W.D. Okla. 2019) (finding plaintiff failed to state a claim that the information designated as trade secrets derived independent value from remaining confidential when the complaint only recited language from the DTSA without alleging, for example, that the secrecy of the information provided it with a competitive advantage).

The elements of a CFAA claim in a civil action (generally, intentionally accessing a protected computer without authorization or exceeding authorization, and causing at least $5,000 in losses[5]) are, in comparison, less burdensome to establish and less perilous for the claimant. Access is typically established through computer logs or forensic analysis. The level of authorization the defendant had is usually easily established from company records and/or a manager's testimony, and the requisite damages threshold of $5,000 is so low that it is easily met in the vast majority of cases. Lastly, the element of intent can be the most contentious, but as with any intent requirement, it can be established through circumstantial evidence. E.g., Fidlar Techs. v. LPS Real Estate Data Sols., Inc., 810 F.3d 1075, 1079 (7th Cir. 2016) ("Because direct evidence of intent is often unavailable, intent to defraud [under the CFAA] may be established by circumstantial evidence and by inferences drawn from examining the scheme itself which demonstrate that the scheme was reasonably calculated to deceive persons of ordinary prudence and comprehension.") (citations and internal quotation marks omitted). Often, the mere fact that the defendant bypassed controls and security messages on the computer is sufficient to establish intent. E.g., Tyan, Inc. v. Garcia, No. CV-15-05443-MWF (JPRx), 2017 WL 1658811, at *14, 2017 U.S. Dist. LEXIS 66805, at *40-41 (C.D. Cal. May 2, 2017) (finding defendant had intent to defraud when he accessed files with a username and password he was not authorized to use).

The Controversy Surrounding the CFAA

Over the years, opponents of the CFAA have argued that it is so unreasonably broad that it effectively criminalizes everyday computer behavior:

"Every day, millions of ordinary citizens across the country use computers for work and for personal matters." United States v. Nosal, 676 F.3d 854, 862-63 (9th Cir. 2012) (en banc). Accessing information on those computers is virtually always subject to conditions imposed by employers' policies, websites' terms of service, and other third-party restrictions. If, as some circuits hold, the CFAA effectively incorporates all of these limitations, then any trivial breach of such a condition, from checking sports scores at work to inflating one's height on a dating website, is a federal crime.

Petition for Writ of Certiorari, Van Buren v. United States, No. 19-783, at 2.

The most infamous example of overcharging under the CFAA is the tragic case of Aaron Swartz. Swartz, an open-Internet activist, connected a computer to the Massachusetts Institute of Technology (MIT) network in a wiring closet and downloaded 4.8 million academic journal articles from the subscription database JSTOR, which he planned to release for free to the public. Federal prosecutors charged him with multiple counts of wire fraud and violations of the CFAA, sufficient to carry a maximum penalty of $1 million in fines and 35 years in prison. United States v. Swartz, No. 11-1-260-NMG, Dkt. No. 2 (D. Mass. July 14, 2011); see also id. at Dkt. No. 53. The prosecutors offered Swartz a plea bargain under which he would serve six months in prison, and Swartz countered the plea offer. But the prosecutors rejected the counteroffer and, two days later, Swartz hanged himself in his Brooklyn apartment.[6]

In 2014, partly in response to public pressure from the Swartz case and in an attempt to provide some certainty to its prosecution of CFAA offenses, the Department of Justice issued a memorandum outlining its charging policy for CFAA violations. Under the new policy, the DOJ explained:

When prosecuting an exceeds-authorized-access violation, the attorney for the government must be prepared to prove that the defendant knowingly violated restrictions on his authority to obtain or alter information stored on a computer, and not merely that the defendant subsequently misused information or services that he was authorized to obtain from the computer at the time he obtained it.

Department of Justice's Intake and Charging Policy for Computer Crime Matters ("Charging Policy"), Memorandum from U.S. Att'y Gen. to U.S. Att'ys and Asst. Att'y Gens. for the Crim. and Nat'l Sec. Divs. at 4 (Sept. 11, 2014) (available at ). Perhaps unsurprisingly, opponents of the law were not sufficiently comforted by prosecutorial promises not to overcharge CFAA claims.

The Supreme Court Is Expected to Clarify What Actions Constitute Violations of the CFAA Uniformly Across the Country

Nathan Van Buren was a police sergeant in Cumming, Georgia. As a law enforcement officer, Van Buren was authorized to access the Georgia Crime Information Center (GCIC) database, which contains license plate and vehicle registration information, for law-enforcement purposes. An acquaintance, Andrew Albo, gave Van Buren $6,000 to run a search of the GCIC to determine whether a dancer at a local strip club was an undercover police officer. Van Buren complied and was arrested by the FBI the next day. As it turned out, Albo was cooperating with the FBI, and his request to Van Buren was a ruse invented by the FBI to see if Van Buren would bite.

Following a trial, Van Buren was convicted under the CFAA and sentenced to eighteen months in prison. On appeal, he argued that accessing information that he had authorization to access cannot "exceed[] authorized access" as meant by the statute, even if he did so for an improper or impermissible purpose. The Eleventh Circuit disagreed, siding with the government that United States v. Rodriguez, 628 F.3d 1258 (11th Cir. 2010), was controlling. In Rodriguez, the Eleventh Circuit had held that a person with access to a computer for business reasons "exceed[s] his authorized access" when he "obtain[s] . . . information for a nonbusiness reason." Rodriguez, 628 F.3d at 1263.

In denying Van Buren's appeal, the Eleventh Circuit noted the circuit split that the Supreme Court has now decided to resolve. Like the Eleventh Circuit, the First, Fifth, and Seventh Circuits have all held that a person operates "without authorization" or "exceeds authorized access" when they access information they otherwise are authorized to access, but for an unauthorized purpose. See EF Cultural Travel BV v. Explorica, Inc., 274 F.3d 577, 582-83 (1st Cir. 2001) (defendant exceeded authorized access by collecting proprietary information and know-how to aid a competitor); United States v. John, 597 F.3d 263, 272 (5th Cir. 2010) ("exceed[ing] authorized access" includes exceeding the purposes for which access is authorized); Int'l Airport Ctrs., L.L.C. v. Citrin, 440 F.3d 418, 420-21 (7th Cir. 2006) (CFAA violated when defendant accessed data on his work computer for a purpose that his employer prohibited). Those who favor the broader interpretation argue that an expansive reading of the statute is more consistent with the congressional intent of stopping bad actors from computer-facilitated crime as computers continue to proliferate, especially in light of the consistent amendments that Congress has enacted to broaden the application of the CFAA. See Guest-Tek Interactive Entm't, Inc. v. Pullen, 665 F. Supp. 2d 42, 45 (D. Mass. 2009) (a narrow reading of the CFAA ignores "the consistent amendments that Congress has enacted to broaden its application . . . in the past two decades by the enactment of a private cause of action and a more liberal judicial interpretation of the statutory provisions").

Numerous trial courts have applied these circuits' more expansive interpretation in civil cases against alleged trade secret misappropriators. For example, in Merritt Hawkins & Assocs., LLC v. Gresham, 79 F. Supp. 3d 625 (N.D. Tex. 2015), the court relied on the Fifth Circuit's controlling case, United States v. John, 597 F.3d 263 (5th Cir. 2010), to deny summary judgment to a defendant who was accused of exceeding his authorization when he deleted hundreds of files on the company's computer before terminating his employment. In finding disputed issues of fact, the trial court specifically noted that the Fifth Circuit "agree[d] with the First Circuit that the concept of exceeds authorized access may include exceeding the purposes for which access is authorized." Merritt Hawkins, 79 F. Supp. 3d at 634. Likewise, in Guest-Tek Interactive Entm't, the court noted both interpretations and opted for the broader one in view of guidance from the First Circuit. Guest-Tek Interactive Entm't Inc., 665 F. Supp. 2d at 45-46 (noting that the First Circuit "has favored a broader reading of the CFAA") (citing EF Cultural Travel BV, 274 F.3d at 582-84).

On the flip side, three circuits have held that the CFAA's "without authorization" and "exceeds authorized access" provisions do not impose criminal liability on a person with permission to access information on a computer who accesses that information for an improper purpose. In other words, a person violates the CFAA in these circuits only by accessing information he has no authorization to access, regardless of the reason. Valle, 807 F.3d at 527 (CFAA is limited to situations where the user does not have access for any purpose at all); WEC Carolina Energy Sols. LLC v. Miller, 687 F.3d 199, 202, 207 (4th Cir. 2012) (rejecting the view that the CFAA imposes liability on employees who violate a use policy, and limiting liability to individuals who access computers without authorization or who obtain or alter information beyond the bounds of their authorized access); Nosal, 676 F.3d at 862-63 (holding that the phrase "exceeds authorized access" in the CFAA does not extend to violations of use restrictions). These courts have all relied on statutory construction plus some version of the rule of lenity: that, when a criminal statute is susceptible to both a harsher construction and a less harsh construction, courts should opt for the latter. For example, as Van Buren pointed out in his petition:

every March, tens of millions of American workers participate in office pools for the NCAA men's basketball tournament (March Madness). Such pools typically involve money stakes. When these employees use their company computers to generate their brackets or to check their standing in the pools, they likely violate their employers' computer policies. Again, the answer to the question presented determines whether these employees are guilty of a felony.

Petition for Writ of Certiorari, Van Buren v. United States, No. 19-783, at 12-13; see also Nosal, 676 F.3d at 860-63 (applying use restrictions would turn "millions of ordinary citizens" into criminals). Numerous trial courts in these jurisdictions have followed suit. See, e.g., Shamrock Foods v. Gast, 535 F. Supp. 2d 962, 967 (D. Ariz. 2008) (concluding that the plain language, legislative history, and principles of statutory construction support the restrictive view of authorization); Lockheed Martin Corp. v. Speed, No. 6:05-cv-1580, 2006 U.S. Dist. LEXIS 53108, at *24 (M.D. Fla. 2006) (finding that the narrow construction "follows the statute['s] plain meaning, and coincidently, has the added benefit of comporting with the rule of lenity").

So at bottom, the Supreme Court will decide whether the CFAA, in addition to access restrictions, also encompasses use restrictions.

What Future Impact May the Supreme Court's Decision Have on Trade Secret Cases?

If the Supreme Court adopts the narrower, access-restriction-only reading of the CFAA, then the nature and extent of the alleged misappropriator's authorization to access the trade secrets will determine the applicability of the CFAA. Even with this narrower interpretation, however, employers can still proactively take certain steps to improve their chances of being able to assert CFAA claims in the future.

Misappropriation of trade secrets under federal and state statutes, and breach of employment or nondisclosure agreements, are potential claims an employer can assert against an employee who accepts a job offer with a competitor and downloads trade secret information to take with him before leaving the company. Whether the company also has a claim against this former employee under the CFAA depends on the level of access he had to the employer's computer resources during the course of his employment. If (under the narrower interpretation of the CFAA) the employee downloaded the trade secrets from computer resources he had access to in the course of his ordinary job duties, then the company may not have a CFAA claim, because the employee's actions were neither "without authorization" nor "exceed[ing] authorized access." But if the employee did not have permission to access those computer resources in the course of his normal job duties, then he may be guilty of exceeding his authorized access. See Nosal, 676 F.3d at 858 (accessing a computer "without authorization" refers to a scenario where a user lacks permission to access any information on the computer, whereas "exceeds authorized access" refers to a user who has permission to access some information on the computer but then accesses other information to which her authorization does not extend).

There are certain steps employers can take now that can determine whether they will be able to assert a CFAA claim against an employee if the need arises in the future. First, employees' computer access should be limited to a need-to-know basis. In other words, employees should not be able to access computer resources and information that are not necessary for them to perform their duties. For example, an employee may be provided access to customer and price lists (economic trade secrets), but not to servers where source code and technical information (technical trade secrets) are stored. Even within technical areas, an employee's access privileges should be limited as much as possible to their specific areas of work. In addition, employment agreements, confidentiality agreements (with both employees and third parties), and company policies should make clear that employees (and business partners, where applicable) do not have permission to access resources that are not necessary to the performance of their job responsibilities. This may entail some additional IT overhead in tightening up employees' access privileges, but any steps employers can take proactively to convert potential use restrictions into access restrictions will go a long way toward preserving the viability of a CFAA claim.

Lastly, even if the Supreme Court decides that use restrictions are overreach, a company's employment agreements, nondisclosure agreements, and computer use policies may still save the day. Under one line of thought, which may survive even if the Supreme Court adopts the narrower interpretation, when an employee breaches one of these agreements or policies, or even just violates her duty of loyalty to the company, that breach can instantly and automatically extinguish her agency relationship with the company and, with it, whatever authority she had to access the company's computers and information. See Int'l Airport Ctrs., 440 F.3d at 420-21 (relying on the employee's "breach of his duty of loyalty [which] terminated his agency relationship . . . and with it his authority to access the laptop, because the only basis of his authority had been that relationship"); see also Shurgard Storage, Inc. v. Safeguard Self Storage, Inc., 119 F. Supp. 2d 1121 (W.D. Wash. 2000) ("the authority of the plaintiff's former employees ended when they allegedly became agents of the defendant. Therefore, for the purposes of this 12(b)(6) motion, they lost their authorization and were 'without authorization' when they allegedly obtained and sent the proprietary information to the defendant via e-mail."). Accordingly, it may be possible to bring a CFAA claim where an employee exceeds his authority by, for example, violating a policy that prohibits downloading confidential company files to portable media (essentially, a use restriction), which then automatically forfeits his access rights, resulting in an access-restriction violation.


The Supreme Court may drastically narrow the application of the CFAA when it decides the Van Buren case. But there are proactive measures employers can take now to potentially preserve their ability to use the CFAA in cases of trade secret misappropriation.

[1] WarGames (Metro-Goldwyn-Mayer Studios 1983).

[2] Tim Wu, Fixing the Worst Law in Technology, The New Yorker, Mar. 18, 2013 (available at

[3] Evan Andrews, Who Invented the Internet? (Oct. 28, 2019),

[4] WarGames, supra n. 1.

[5] CFAA claims in civil trade secret misappropriation cases are typically brought under 18 U.S.C. § 1030(a)(2) or (a)(4). The former states, Whoever

(2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains

(A) information contained in a financial record of a financial institution, or of a card issuer as defined in section 1602(n) of title 15, or contained in a file of a consumer reporting agency on a consumer, as such terms are defined in the Fair Credit Reporting Act (15 U.S.C. 1681 et seq.);

(B) information from any department or agency of the United States; or

(C) information from any protected computer;

shall be punished as provided in subsection (c) of this section. The latter states, Whoever

(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value, unless the object of the fraud and the thing obtained consists only of the use of the computer and the value of such use is not more than $5,000 in any 1-year period, shall be punished as provided in subsection (c) of this section.

These are the portions of the CFAA that are usually most applicable to an employee or former employee who steals his employer's trade secrets. The CFAA includes other provisions directed to outside-hacker situations. For example, 18 U.S.C. § 1030(a)(5)-(7) address scenarios such as malicious hacking with intent to cause damage or loss, trafficking in passwords, and ransomware.

[6] Sam Gustin, Aaron Swartz, Tech Prodigy and Internet Activist, Is Dead at 26 (January 13, 2013) (available at



New Audis To Use Supercomputer That Controls Almost Everything – Motor1

As the mad dash for technology and innovation among automakers surges forward, new cars are becoming even more complex. In an effort to simplify how advanced components like the powertrain, chassis, and safety systems work together, the next crop of Audi cars will have much bigger brains.

That may sound like an oversimplification, but that's what Audi is getting at here. Today, the company showed off plans to incorporate a much more sophisticated computer, called the Integrated Vehicle Dynamics computer, which functions as the central facility for the car's dynamic systems. Audi's new central computer system is ten times more powerful than the one found in current models and will be able to control up to 90 different systems, depending on vehicle application.

The new vehicle dynamics computer will find its way into the entire Audi lineup. Everything from the compact A3 to the massive Q8 SUV will get the futuristic hardware, including the all-electric E-Tron. Audi says that this speaks to the dynamic computer's versatility in that it's designed to work with cars of all performance thresholds.

In Audi's electric cars, the computer will monitor and control important systems such as brake regeneration, which contributes up to 30 percent of the battery's potential range. In hot rods like the RS models, trick systems like anti-roll stabilization and active suspension will rely on the vehicle dynamics computer for power. This also marks the first time that the chassis and powertrain controls will be housed within one component, something that Audi says will bring a greater range of comfort and performance to its vehicles.

Audi didn't specify when the new dynamics computers would enter its product line, but an engineer mentioned that the component is ready for series production, which means it's happening soon. This doesn't mean much for how the driver interacts with the car, but it's one of several recent announcements from Audi that make us even more excited for future models.



Top 10 Supercomputers | HowStuffWorks


If someone says "supercomputer," your mind may jump to Deep Blue, and you wouldn't be alone. IBM's silicon chess wizard defeated grandmaster Garry Kasparov in 1997, cementing it as one of the most famous computers of all time (some controversy around the win helped, too). For years, Deep Blue was the public face of supercomputers, but it's hardly the only all-powerful artificial thinker on the planet. In fact, IBM took Deep Blue apart shortly after the historic win! More recently, IBM made supercomputing history with Watson, which defeated "Jeopardy!" champions Ken Jennings and Brad Rutter in a special match.

Brilliant as they were, neither Deep Blue nor Watson would be able to match the computational muscle of the systems on the November 2013 TOP500 list. TOP500 calls itself a list of "the 500 most powerful commercially available computer systems known to us." The supercomputers on this list are a throwback to the early computers of the 1950s -- which took up entire rooms -- except modern computers are using racks upon racks of cutting-edge hardware to produce petaflops of processing power.

Your home computer probably runs on four processor cores. Most of today's supercomputers use hundreds of thousands of cores, and the top entry has more than 3 million.
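For scale, a few lines of Python show where a desktop sits on that spectrum. The Tianhe-2 figure below is hard-coded from its commonly reported TOP500 core count, not queried from anywhere:

```python
import os

# Logical cores on this machine; a typical desktop reports 4-16.
local_cores = os.cpu_count() or 1
print(f"Logical cores here: {local_cores}")

# Tianhe-2's commonly reported core count from the TOP500 listing.
TIANHE2_CORES = 3_120_000
print(f"Tianhe-2 had roughly {TIANHE2_CORES // local_cores:,}x as many cores")
```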

TOP500 currently relies on the Linpack benchmark, which feeds a computer a series of linear equations to measure its processing performance, although an alternative testing method is in the works. The November 2013 list sees China's Tianhe-2 on top of the world. Every six months, TOP500 releases a list, and a few new computers rise into the ranks of the world's fastest. Here are the champions as of early 2014. Read on to see how they're putting their electronic mettle to work.
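The Linpack idea can be sketched in a few lines: time a dense linear solve and divide the standard operation count by the elapsed time. This is a toy single-process version using NumPy, not the real distributed HPL code used for TOP500 submissions:

```python
import time
import numpy as np

def estimate_gflops(n: int = 1000, seed: int = 0) -> float:
    """Time the solution of a dense n x n system Ax = b and convert
    to GFLOP/s using the standard operation count for an LU-based
    solve: (2/3) * n**3 + 2 * n**2."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)  # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

# A petaflop machine sustains ~1,000,000 GFLOP/s; compare your laptop:
print(f"~{estimate_gflops():.1f} GFLOP/s on this machine")
```

Real TOP500 runs use the distributed HPL implementation across the whole machine, with the problem size tuned to fill available memory.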

Read the original:

Top 10 Supercomputers | HowStuffWorks

Japanese supercomputer ranked as world's most powerful system

A Japanese supercomputer built with technology from Arm Ltd, whose chip designs power most of the world's smartphones, has taken the top spot among the world's most powerful systems, displacing one powered by IBM chips.

The Fugaku supercomputer, a system jointly developed by Japanese research institute RIKEN and Fujitsu in Kobe, Japan, took the highest spot on the TOP500 list, a twice-yearly ranking of the world's most powerful computers, its backers said on Monday. The chip technology comes from Arm, which is headquartered in the UK but owned by Japan's SoftBank.

The previous top-ranked system, as of November 2019, was at Oak Ridge National Laboratory in the US, with chips designed by IBM. Chips from Intel and IBM had dominated the top 10 rankings, with the lone exception of a system at the National Supercomputing Center in Wuxi, China, powered by a Chinese-designed chip.

Governments use supercomputers to simulate nuclear blasts for virtual weapons testing. They are also used for modeling climate systems and for biotechnology research. The Fugaku supercomputer will be used in such research as part of Japan's Society 5.0 technology program.

"I very much hope that Fugaku will show itself to be highly effective in real-world applications and will help to realize Society 5.0," Naoki Shinjo, corporate executive officer of Fujitsu, said in a statement.

The Arm-based system in Japan had taken the highest spot on TOP500's list of power-efficient supercomputers in November. Arm said the system also took the top spot on a list designed to closely resemble real-world computing tasks, known as the high-performance conjugate gradient benchmark.

Go here to see the original:

Japanese supercomputer ranked as world's most powerful system

What are supercomputers currently used for? | HowStuffWorks

As we said, supercomputers were originally developed for code cracking, as well as ballistics. They were designed to make an enormous amount of calculations at a time, which was a big improvement over, say, 20 mathematics graduate students in a room, hand-scratching operations.

In some ways, supercomputers are still used for those ends. In 2012, the National Nuclear Security Administration and Purdue University began using a network of supercomputers to simulate nuclear weapons capability. A whopping 100,000 machines are used for the testing [source: Appro].

But it's not just the military that's using supercomputers anymore. Whenever you check the weather app on your phone, the National Oceanic and Atmospheric Administration (NOAA) is using a supercomputer called the Weather and Climate Operational Supercomputing System to forecast weather, predict weather events, and track space and oceanic weather activity as well [source: IBM].

As of September 2012, the fastest computer in the world -- for now, anyway -- is IBM's Sequoia machine, which can sustain 16.32 petaflops, or roughly 16,000 trillion calculations per second. It's used for nuclear weapon security and to make large-scale molecular dynamics calculations [source: Walt].

But supercomputers aren't just somber, intellectual machines. Some of them are used for fun and games, literally. Consider World of Warcraft, the wildly popular online game. If a million people are playing WoW at a time, graphics and speed are of utmost importance. Enter the supercomputers, used to make the endless calculations that help the game go global.

Speaking of games, we can't forget Deep Blue, the supercomputer that beat chess champion Garry Kasparov in a six-game match in 1997. And then there's Watson, the IBM supercomputer that famously beat Ken Jennings in an intense game of Jeopardy. Currently, Watson is being used by a health insurer to predict patient diagnoses and treatments [source: Feldman]. A real jack of all trades, that Watson.

So, yes: We're still benefiting from supercomputers. We're using them when we play war video games and in actual war. They're helping us predict if we need to carry an umbrella to work or if we need to undergo an EKG. And as the calculations become faster, there's little end to the possibility of how we'll use supercomputers in the future.

Go here to read the rest:

What are supercomputers currently used for? | HowStuffWorks

Microsoft announces new supercomputer, lays out vision for …

"As we've learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, 'If we could design our dream system, what would it look like?'" said OpenAI CEO Sam Altman. "And then Microsoft was able to build it."

OpenAI's goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

"We are seeing that larger-scale systems are an important component in training more powerful models," Altman said.

For customers who want to push their AI ambitions but who don't require a dedicated supercomputer, Azure AI provides access to powerful compute with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.

At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
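DeepSpeed's memory savings come largely from techniques such as its ZeRO optimizer-state partitioning, which users turn on through a JSON configuration file. A minimal, illustrative fragment follows; the option names match DeepSpeed's public configuration schema, but the values are invented for the example:

```json
{
  "train_batch_size": 256,
  "gradient_accumulation_steps": 4,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```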

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today's update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.

"We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly," said Microsoft principal program manager Phil Waymouth. "These large models are going to be an enormous accelerant."

In self-supervised learning, AI models can learn from large amounts of unlabeled data. For example, models can learn deep nuances of language by absorbing large volumes of text and predicting missing words and sentences. Art by Craighton Berman.

Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.

Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren't new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.

A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world's largest publicly available language AI model, with 17 billion parameters.

This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.

In what's known as self-supervised learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet: Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around them.
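The Mad Libs analogy can be made concrete with a toy sketch: hide a word and predict it from its neighbours using nothing but co-occurrence counts. This illustrates the idea only; real self-supervised models learn neural representations over billions of documents, and nothing below comes from Microsoft's code:

```python
from collections import Counter

# A tiny "corpus"; real pretraining uses billions of pages.
corpus = (
    "the model reads the text and the model predicts the missing word "
    "the model learns how words relate to each other"
).split()

# Count which word was seen between each (previous, next) word pair.
context_counts: dict[tuple[str, str], Counter] = {}
for prev, word, nxt in zip(corpus, corpus[1:], corpus[2:]):
    context_counts.setdefault((prev, nxt), Counter())[word] += 1

def predict_masked(prev: str, nxt: str) -> str:
    """Return the most frequent word seen between prev and nxt."""
    counts = context_counts.get((prev, nxt))
    return counts.most_common(1)[0][0] if counts else "<unk>"

# "the [MASK] reads" -> the corpus statistics recover "model".
print(predict_masked("the", "reads"))
```

Scaled up by many orders of magnitude, and with counts replaced by learned neural representations, this fill-in-the-blank objective is what lets a model absorb grammar and word relationships without any labeled data.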

As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.

"This has enabled things that were seemingly impossible with smaller models," said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company's AI at Scale initiative.

The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it's possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that's specific to a particular industry or company.

"Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains," he said.

Read the original post:

Microsoft announces new supercomputer, lays out vision for ...

GE taps into US supercomputer to advance offshore wind – reNEWS

GE will be able to access one of the world's fastest supercomputers to help advance offshore wind power.

GE Research aerodynamics engineer Jing Li is leading a team that has been granted access to the Summit supercomputer at Oak Ridge National Laboratory (ORNL) in Tennessee, through the US Department of Energys competitive Advanced Scientific Computing Research Leadership Computing Challenge programme.

The key focus of the supercomputing project will be to study coastal low-level jets, which produce a distinct wind velocity profile of potential importance to the design and operation of future wind turbines.

Using the Summit supercomputer system, the GE team will run simulations to study and inform new ways of controlling and operating offshore turbines to best optimise wind production.

Li said: "The Summit supercomputer will allow our GE team to run computations that would be otherwise impossible."

"This research could dramatically accelerate offshore wind power as the future of clean energy and our path to a more sustainable, safe environment."

As part of the project, the GE team will work closely with research teams at NREL and ORNL to advance the ExaWind platform.

ExaWind, one of the applications of the DOE's Exascale Computing Project, focuses on the development of computer software to simulate different wind farm and atmospheric flow physics.

These simulations provide crucial insights for engineers and scientists to better understand wind dynamics and their impact on wind farms.

Li said: "Scientists at NREL and ORNL are part of a broader team that have built up a tremendous catalogue of new software code and technical expertise with ExaWind, and we believe our project can discover critical new insights that support and validate this larger effort."

The ExaWind goal is to establish a virtual wind plant test bed that aids and accelerates the design and control of wind farms.

The Summit supercomputing system's computing power is equivalent to 70 million iPhone 11s, and it can help test and solve challenges in energy, artificial intelligence, human health and other research areas.

Li said: "We're now able to study wind patterns that span hundreds of metres in height across tens of kilometres of territory, down to the resolution of airflow over individual turbine blades."

"You simply couldn't gather and run experiments on this volume and complexity of data without a supercomputer. These simulations allow us to characterise and understand poorly understood phenomena like coastal low-level jets in ways previously not possible."

Go here to see the original:

GE taps into US supercomputer to advance offshore wind - reNEWS

Summit supercomputer to advance research on wind power for renewable energy – ZDNet

Scientists from GE are looking into the potential of offshore wind power to support renewable energy, using one of the world's fastest supercomputers to help it progress research.

The researchers have been granted access to the IBM Summit supercomputer at Oak Ridge National Laboratory (ORNL) in Tennessee through the US Department of Energy's (DOE) Advanced Scientific Computing Research Leadership Computing Challenge (ALCC) program.

The team plans to use supercomputer-driven simulations to conduct research that project lead Jing Li, a GE Research aerodynamics engineer, said would otherwise be infeasible, and which should lead to improved efficiencies in offshore wind energy production.

"The Summit supercomputer will allow our GE team to run computations that would be otherwise impossible," Li said.

"This research could dramatically accelerate offshore wind power as the future of clean energy and our path to a more sustainable, safe environment."

GE said the main focus of the project will be to study coastal low-level jets, which it said produce a distinct wind velocity profile of potential importance to the design and operation of future wind turbines.

Using Summit, the GE team will run simulations to study and inform new ways of controlling and operating offshore turbines to best optimise wind production.

"We're now able to study wind patterns that span hundreds of meters in height across tens of kilometres of territory down to the resolution of airflow over individual turbine blades," Li added. "You simply couldn't gather and run experiments on this volume and complexity of data without a supercomputer. These simulations allow us to characterise and understand poorly understood phenomena like coastal low-level jets in ways previously not possible."

GE said the researchers will work closely with teams at the National Renewable Energy Laboratory (NREL) and ORNL to advance the ExaWind platform, which focuses on the development of computer software to simulate different wind farm and atmospheric flow physics.

The simulations, GE said, will allow the researchers to better understand wind dynamics and their impact on wind farms.

ExaWind is one of the applications of the DOE's Exascale Computing Project (ECP).

According to the director of the DOE's Exascale Computing Project (ECP), Doug Kothe, the goal is to establish a virtual wind plant test bed that aids and accelerates the design and control of wind farms, and informs researchers' ability to predict the response of the farms to a given atmospheric condition.

"ExaWind's development efforts are building progressively from predictive petascale simulations of a single turbine to a multi-turbine array of turbines in complex terrain," Kothe added.


The rest is here:

Summit supercomputer to advance research on wind power for renewable energy - ZDNet

BSC Researchers Create Spin-Off Platform to Accelerate the Development of New Chemicals – HPCwire

Aug. 6, 2020 -- Two researchers from Barcelona Supercomputing Center (BSC), Mònica de Mier and Stephan Mohr, have created a new spin-off, Nextmol (Bytelab Solutions SL), which develops tools for atomistic simulation and data analysis to accelerate the design of new chemicals.

Using these tools, Nextmol characterizes the behavior of chemical molecules, predicts their performance and identifies the best candidate molecules to meet certain physicochemical properties, by means of the computer and without the need to synthesize the molecule.

"In this way, Nextmol shortens the path of innovation and makes chemical R&D more efficient compared to the traditional approach based exclusively on the laboratory," indicates Mònica de Mier, director of the company. De Mier adds that Nextmol's mission is to democratize computational techniques in the chemical industry, accompanying it in its digital transformation, and thus contribute to its competitiveness.

Nextmol markets these tools through its software-as-a-service platform, which enables users to create the molecules and the system to be studied, build workflows with the sequence of calculations to be carried out, execute the calculation on a supercomputer and analyze the results. "It is a collaborative, easy-to-use and cloud-based web platform, which enables computing resources to be immediately scaled to the volume of calculations required by users. Thanks to the catalog of preconfigured workflows, the platform can be used by non-experts in computational chemistry," says Stephan Mohr, scientific director of Nextmol.

BSC is the main partner of this spin-off, which has been founded by researcher Stephan Mohr and by Mònica de Mier, previously head of business development in the CASE department of BSC. The team currently consists of five people. Nextmol also has the support of the Repsol Foundation through its startup acceleration program, the Entrepreneurs Fund. In addition, it has obtained the Startup Capital grant from ACCIÓ.

Nextmol leads the project BigDFT4CHEM, which won the call for Spin-off Initiatives (SPI) launched by the EXDCI-2 consortium; has been one of the five startup finalists of the Entrepreneur Awards XXI organized by Caixabank and Enisa; has obtained the Seal of Excellence in the EIC Accelerator call; and has been awarded the Tech Transfer Award granted by the European Network on High-performance Embedded Architecture and Compilation (HiPEAC) and a Eurolab4HPC Business Prototyping project.

About BSC

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) is the national supercomputing centre in Spain. The center is specialised in high performance computing (HPC) and manages MareNostrum, one of the most powerful supercomputers in Europe, located in the Torre Girona chapel. BSC is involved in a number of projects to design and develop energy-efficient and high-performance chips, based on open architectures like RISC-V, for use within future exascale supercomputers and other high performance domains. The centre leads a pillar of the European Processor Initiative (EPI), creating a high performance accelerator based on RISC-V.

Source: Barcelona Supercomputing Center

Read the original here:

BSC Researchers Create Spin-Off Platform to Accelerate the Development of New Chemicals - HPCwire

Supercomputer study of mobility in Spain at the peak of COVID-19 using Facebook and Google data – Science Business

Researchers from the Barcelona Supercomputing Center have published a study based on mobility data from Google and Facebook at the peak of the COVID-19 pandemic, to demonstrate how this can be a sound source of information for epidemiological and socioeconomic analyses.

The data were collected between March 1 and June 27, from the phones of volunteers who had agreed to use tracking apps.

The findings show the Spanish population was closely following health guidelines and restrictions imposed by the government throughout the seven weeks of the study. Sunday was the day when mobility was at its lowest, which could point to it being the best day for lifting control measures. Meanwhile, Friday was the day when movement was most different from normal, suggesting that extra support, and reminders of the need to adhere to control measures, are needed as the weekend begins.

The mobility data align with various announcements by the government on the state of the pandemic, the travel restrictions, and on the timing of easing lockdown measures, indicating analysis of tracking data could be used as a decision support tool to assess adherence and guide real time responses in future health crises.

View original post here:

Supercomputer study of mobility in Spain at the peak of COVID-19 using Facebook and Google data - Science Business

Julia and PyCaret Latest Versions, arXiv on Kaggle, UK’s AI Supercomputer And More In This Week’s Top AI News – Analytics India Magazine

Every week, we at Analytics India Magazine aggregate the most important news stories that affect the AI/ML industry. Let's take a look at the top news stories from the past week. The following paragraphs summarise the news, and you can click on the hyperlinks for the full coverage.

This was one of the biggest news items of the week for data scientists and ML enthusiasts. arXiv, the most comprehensive repository of research papers, recently stated that it is offering a free and open pipeline of its dataset, with all the relevant features like article titles, authors, categories, abstracts, full-text PDFs, and more. Now, with the machine-readable dataset of 1.7 million articles, the Kaggle community will benefit tremendously from this rich corpus of information.

The objective of the move is to promote developments in fields such as machine learning and artificial intelligence. arXiv hopes that Kaggle users can further drive the boundaries of this innovation using its knowledge base, and it can be a new outlet for the research community to collaborate on machine learning innovation. arXiv has functioned as the knowledge hub for public and research communities by providing open access to scholarly articles.

The India Meteorological Department (IMD) is aiming to use artificial intelligence in weather forecasting. The use of AI here is particularly focused on issuing nowcasts, which can help in almost real-time (3-6 hour) prediction of drastic weather episodes, Director-General Mrutunjay Mohapatra said last week. In this regard, IMD has invited research firms to evaluate how AI can enhance weather forecasting.

Weather forecasting has typically been done with physical models of the atmosphere, which are sensitive to perturbations and therefore become erroneous over significant periods. Since machine learning methods are more robust against perturbations, researchers have been investigating their application in weather forecasting to produce more precise predictions over substantial periods of time. "Artificial intelligence helps in understanding past weather models, and this can make decision-making faster," Mohapatra said.

PyCaret, the open-source low-code machine learning library in Python, has come up with a new version, PyCaret 2.0. The design and simplicity of PyCaret are inspired by the emerging role of citizen data scientists: users who can perform both simple and moderately sophisticated analytical tasks that would previously have required more expertise.

The latest release aims to reduce the hypothesis to insights cycle time in an ML experiment and enables data scientists to perform end-to-end experiments quickly and efficiently. Some major updates in the new release of PyCaret include features like Logging back-end, Modular Automation, Command Line Interface (CLI), GPU enabled training, and Parallel Processing.

Nokia, the global manufacturer of mobile devices and technology solutions, said it would set up a robotics lab at the Indian Institute of Science (IISc) to drive research on use cases for 5G and emerging technologies. The lab will be hosted by the Nokia Center of Excellence for Networked Robotics and serve as an interdisciplinary laboratory powering socially relevant use cases across areas like disaster and emergency management, farming and manufacturing automation.

Apart from research activity, the lab will also promote engagement among ecosystem partners and startups in generating end-to-end use cases. This will also include Nokia student fellowships, which will be granted to select IISc students who engage in the advancement of innovative use cases.

Julia recently launched a new version, introducing many new features and performance enhancements for users. Some of the new features and updates include struct layout and allocation optimisations, multithreading API stabilisation and improvements, per-module optimisation levels, latency improvements, making the Pkg protocol the default, automated rr-based bug reports and more.

It has also brought about some impressive algorithmic improvements for some popular cases such as generating normally-distributed double-precision floats.

In an important update relating to technology infrastructure, the Ministry of Electronics and Information Technology (MeitY) may soon launch a national policy framework for building data centres across India. In keeping with the demands of India's burgeoning digital sector, the national framework will make it easier for companies to establish the hardware necessary to support rising data workloads and to ensure business continuity.

The data centre policy framework will focus on the usage of renewable power, state-level subsidy in electricity costs for data centres, and easing other regulations for companies. According to a report, the national framework will boost the data centre industry in India and facilitate a single-window clearance for approvals. Read more here.

A new commission has been formed by Oxford University to advise world leaders on effective ways to use Artificial Intelligence (AI) and machine learning in public administration and governance.

The Oxford Commission on AI and Good Governance (OxCAIGG) will bring together academics, technology experts and policymakers to analyse the AI implementation and procurement challenges faced by governments around the world. Led by the Oxford Internet Institute, the Commission will make recommendations on how AI-related tools can be adapted and adopted by policymakers for good governance now and in the near future. The report outlines four significant challenges relating to AI development and application that need to be overcome for AI to be put to work for good governance and leveraged as a force for good in government responses to the COVID-19 pandemic.

The University of Oxford has partnered with Atos to build the UK's AI-focused supercomputer. The AI supercomputer will be built on the Nvidia DGX SuperPOD architecture and comprise 63 nodes. The deal with Atos cost £5 million ($6.5 million) and is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Joint Academic Data Science Endeavour, a consortium of 20 universities and the Turing Institute.

Known as JADE2, the AI supercomputer aims to build on the success of the current JADE facility, a national resource in the United Kingdom which provides advanced GPU computing capabilities to AI and machine learning experts.


Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast called Simulated Reality- featuring tech leaders, AI experts, and innovative startups of India. Reach out at

Continue reading here:

Julia and PyCaret Latest Versions, arXiv on Kaggle, UK's AI Supercomputer And More In This Week's Top AI News - Analytics India Magazine

Audi To Over-Complicate Cars With Supercomputers And Repair Costs Could Skyrocket – Top Speed

In all honesty, just about every automaker out there is making a run toward new technology and innovation. It makes our lives easier and safer, they say. And sometimes it does. The fact that our cars can now automatically control torque distribution between wheels and braking force as needed to prevent loss of control is amazing. But you have to take the good with the bad, and the bad in this case is that replacing electronics when they fail is expensive, especially on newer cars.

But, with everything separated into somewhat individual units, a single failure doesn't necessarily mean your car is undrivable. Audi's new Integrated Vehicle Dynamics Computer, on the other hand, could change all that.

Audi's Integrated Vehicle Dynamics Computer is far more sophisticated than anything we have in cars in 2020, even when you look at the most advanced cars like the Tesla Model S or Porsche Taycan. The IVDC in future Audis will serve as a central facility, or hub, for all the car's dynamic systems, from safety features like automatic braking and stability control to engine management and door lock control.

Audi claims that its new IVDC is ten times more powerful than the computers found in current models and will be able to control up to 90 different systems.

I bet you didn't know that your car had 90 different controllable systems built into it, did you?

In just a short time from now, Audi's new IVDC will land in every car in the brand's lineup, from the compact A3 all the way up to the Q8 SUV, and even its entire offering of EVs.

To give you an example of some of the things the IVDC will control, important systems like torque vectoring and brake regeneration will be on the priority list in electric cars. Performance cars with the RS badge will see it control anti-roll stabilization, active suspension, and engine control.

In short, the IVDC will mark the very first time in automotive history that chassis and powertrain systems are controlled by the same computer.

It's a big step forward, and Audi claims that it will bring a greater range of performance and comfort to its vehicles, but that's only the good side of things.

All of this sounds good in theory, but as a mechanic, I can't help but think about repair costs. Replacing certain control modules on cars today can already be very expensive, so the thought of having everything housed in one unit is concerning. A single failure of the IVDC can render your new car inoperable and, to top it off, the company has you over a barrel once your warranty has passed. Should that IVDC experience any type of failure, you may have no choice but to replace it or be stuck with a car you can't drive, potentially one that you're still making payments on. With this being proprietary and new technology, there won't be an aftermarket offering for some time to come, and since it's a must-have, Audi will either be able to charge you a small fortune for a replacement or push you to trade in and buy a new car.

I like the idea in theory, and maybe it'll work out well, but as an all-new technology, there will be flaws, and until those are ironed out, things could be very dicey. Fortunately, all equipped cars will have some kind of warranty as a bit of a safety shield, but in the end, replacement down the road will still end up being a lot more expensive than replacing one of many stand-alone control units in the event of a random failure.

The rest is here:

Audi To Over-Complicate Cars With Supercomputers And Repair Costs Could Skyrocket - Top Speed