How coronavirus antibody testing works – Livemint

Antibody tests look for the presence of antibodies, which are specific proteins made in response to infections. Antibodies are disease specific. For example, measles antibodies will protect you from getting measles if you are exposed to it again, but they won't protect you from getting mumps if you are exposed to mumps.

"Antibodies are important because they prevent infection and heal patients affected by diseases," said Victor Padilla-Sanchez, a researcher at The Catholic University of America in Washington D.C.

"If we have antibodies, we are immune to disease, as long as they are in your system, you are protected. If you don't have antibodies, then infection proceeds and the pandemic continues," added Sanchez.

This form of foreign-antibody-based protection is called passive immunity -- short-term immunity provided when a person is given antibodies to a disease rather than producing these antibodies through their own immune system.

"We're at the initial steps of this now, and this is where I'm hoping my work might help," Padilla-Sanchez said.

Padilla-Sanchez specializes in viruses. Specifically, he uses computer models to understand the structure of viruses on the molecular level and uses this information to try to figure out how the virus functions.

Severe acute respiratory syndrome (SARS) was the first new infectious disease identified in the 21st century. This respiratory illness originated in the Guangdong province of China in November 2002. The World Health Organization identified this new coronavirus (SARS-CoV) as the agent that caused the outbreak.

Now we're in the middle of yet another new coronavirus (SARS-CoV-2), which emerged in Wuhan, China in 2019. COVID-19, the disease caused by SARS-CoV-2, has become a rapidly spreading pandemic that has reached most countries in the world. As of July 2020, COVID-19 has infected more than 15.5 million people worldwide with more than 630,000 deaths.

To date, there are no approved vaccines or therapeutics to fight the illness.

Since both viruses (SARS-CoV and SARS-CoV-2) use a similar spike protein, the entry key that allows the virus into human cells, Padilla-Sanchez's idea was to take the antibodies found in the first outbreak in 2002 -- 80R and m396 -- and reengineer them to fit the current COVID-19 virus.

A June 2020 study in the online journal, Research Ideas and Outcomes, describes efforts by Padilla-Sanchez to unravel this problem using computer simulation. He discovered that sequence differences prevent 80R and m396 from binding to the SARS-CoV-2 spike protein.

"Understanding why 80R and m396 did not bind to the SARS-CoV-2 spike protein could pave the way to engineering new antibodies that are effective," Padilla-Sanchez said. "Mutated versions of the 80r and m396 antibodies can be produced and administered as a therapeutic to fight the disease and prevent infection."

His docking experiments showed that amino acid substitutions in 80R and m396 should increase binding interactions between the antibodies and SARS-CoV-2, providing new antibodies to neutralize the virus.

"Now, I need to prove it in the lab," he said.

For his research, Padilla-Sanchez relied on supercomputing resources allocated through the Extreme Science and Engineering Discovery Environment (XSEDE). XSEDE is a single virtual system funded by the National Science Foundation used by scientists to interactively share computing resources, data, and expertise.

The XSEDE-allocated Stampede2 and Bridges systems, at the Texas Advanced Computing Center (TACC) and the Pittsburgh Supercomputing Center respectively, supported the docking experiments, macromolecular assemblies, and large-scale analysis and visualization.

"XSEDE resources were essential to this research," Padilla-Sanchez said.

He ran the docking experiments on Stampede2 using the Rosetta software suite, which includes algorithms for computational modeling and analysis of protein structures. The software virtually binds the proteins then provides a score for each binding experiment.
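To make the workflow concrete, here is a minimal sketch of how a docked antibody-spike complex can be scored with PyRosetta, the Python interface to the Rosetta suite. This is an illustration only, not Padilla-Sanchez's actual pipeline; the input file name and chain labels are assumptions.

    # Minimal sketch (not the researcher's actual workflow): score a hypothetical
    # docked antibody/spike complex with PyRosetta and report an interface metric.
    from pyrosetta import init, pose_from_pdb, get_fa_scorefxn
    from pyrosetta.rosetta.protocols.analysis import InterfaceAnalyzerMover

    init("-mute all")  # start Rosetta quietly

    # Placeholder PDB of a docked complex: chain A = antibody, chain B = spike RBD.
    # In practice a docking protocol (e.g., RosettaDock) would be run first.
    pose = pose_from_pdb("docked_antibody_spike.pdb")

    # Total Rosetta energy of the complex (lower scores are generally better)
    scorefxn = get_fa_scorefxn()
    total_energy = scorefxn(pose)

    # Interface-specific metrics across the A/B interface, including binding energy
    ifa = InterfaceAnalyzerMover("A_B")
    ifa.apply(pose)

    print("total score:", total_energy)
    print("interface dG:", ifa.get_interface_dG())

In a screening loop like the one described above, candidate mutations would be applied to the antibody, each mutant re-docked and re-scored, and the best-scoring variants flagged for laboratory testing.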

"If you find a good docking position, then you can recommend that this new, mutated antibody should go to production," said Sanchez.

TACC's Frontera supercomputer, the 8th most powerful supercomputer in the world and the fastest supercomputer on a university campus, also provided vital help to Padilla-Sanchez. He used the Chimera software on Frontera to generate extremely high-resolution visualizations. From there, he transferred the work to Bridges because of its large memory nodes.

"Frontera has great performance when importing a lot of big data. We're usually able to look at just protein interactions, but with Frontera and Bridges, we were able to study full infection processes in the computer," he said. Padilla-Sanchez's findings will be tested in a wet lab. Upon successful completion of that stage, his work can proceed to human trials.

Currently, various labs across the world are already testing vaccines.

"If we don't find a vaccine in the near term we still have passive immunity, which can prevent infection for several months as long as you have the antibodies," Padilla-Sanchez said. "Of course, a vaccine is the best outcome. However, passive immunity may be a fast track in providing relief for the pandemic," said Padilla-Sanchez.


Read the original here:

How coronavirus antibody testing works - Livemint

Researchers Use Supercomputers To Discover New Pathway For Covid-19 Inflammation – Forbes

Covid-19 is challenging to treat. Research shows that there can be six distinct "types" of the disease involving different clusters of symptoms. The coronavirus can infect different organs of the body, leading to a variety of symptoms. While pharmaceutical companies are working on a vaccine, a team of scientists led by Dan Jacobson at the Oak Ridge National Laboratory (ORNL) has been working to understand the systems biology of the virus using data analytics and explainable AI tools on ORNL's Summit supercomputer. Recently, they published a paper on a mechanistic model for Covid-19 that can lead to more targeted therapeutic interventions for patients.

Severely ill Covid-19 patients often end up on a ventilator because their lungs are unable to take in enough oxygen. Analyzing gene expression data, the researchers took a holistic approach to the study using systems biology frameworks. Understanding the body's underlying mechanisms and how they respond to the coronavirus can help explain the severe symptoms of Covid-19. If the team's mechanistic model proves accurate, time and money can be saved by repurposing existing FDA-approved drugs to treat severe cases of Covid-19.

Jacobson says, "We are systems biologists, and so this is how we view the world, and we're trying to understand holistically all the molecular interactions that are happening in cells that lead to phenotypic outcomes, whether those outcomes are diseases or other traits. Our understanding of complex processes focuses on looking at all these omics layers, from genome to gene, protein or metabolite expression from a population as well as the microbiome and how that's all conditional on environment. And, overall, that is what we are doing for Covid-19 as really a holistic systems-based approach."

Analyzing the gene expression of infected individuals against a control group as well as population-scale data, researchers used the Summit and Rhea supercomputers, housed at the Oak Ridge Leadership Computing Facility at ORNL, to discover that the bradykinin system may be responsible for much of the viral pathogenesis. Bradykinin is a peptide that helps to manage blood pressure and can promote inflammation. When more of it is present, it can dilate blood vessels and make them permeable. If produced excessively, it causes blood vessels to leak and thus leads to a fluid buildup in the surrounding tissues.

Jacobson says, "What we've found is that the imbalance in the renin-angiotensin system (RAS) pathway that appeared to be present in Covid-19 patients could be responsible for constantly resensitizing bradykinin receptors. So, this imbalance in the RAS pathways will take the brakes off the bottom of the bradykinin pathway at the receptor level. In addition, the downregulation of the ACE gene in Covid-19 patients, which usually degrades bradykinin, is another key imbalance in the regulation of bradykinin levels. We have also observed that the key negative regulator at the top of the bradykinin pathway is dramatically down-regulated. Thus, you likely have an increase in bradykin production as well, stopping many of the braking mechanisms usually in place, so the bradykinin signal spirals out of control. "

Using Summit to run 2.5 billion correlation calculations, the team found gene expression changes that would likely trigger the production of bradykinin. It decreased the expression of enzymes that can break down bradykinin or change how it is perceived by cell-surface receptors. Such an escalating buildup of bradykinin would cause blood vessels to leak.
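The core computation here is a co-expression screen: pairwise correlations between gene expression profiles across samples. The toy Python sketch below illustrates the idea; the gene symbols, expression matrix, and threshold are made up, and the ORNL team's actual analysis on Summit is far larger (billions of comparisons) and more sophisticated.

    # Toy co-expression sketch (illustration only, not the ORNL pipeline):
    # compute pairwise Spearman correlations across samples and keep strong pairs.
    import numpy as np
    from scipy.stats import spearmanr
    from itertools import combinations

    rng = np.random.default_rng(0)

    # Placeholder expression matrix: rows = genes, columns = patient samples
    genes = ["ACE2", "AGT", "BDKRB1", "BDKRB2", "KNG1"]   # hypothetical gene list
    expr = rng.normal(size=(len(genes), 12))              # 12 hypothetical samples

    # All pairwise correlations; a real analysis covers tens of thousands of genes
    hits = []
    for i, j in combinations(range(len(genes)), 2):
        rho, pval = spearmanr(expr[i], expr[j])
        if abs(rho) > 0.7:                                # arbitrary co-expression cutoff
            hits.append((genes[i], genes[j], round(rho, 2), round(pval, 3)))

    print(hits)

Scaling this from a handful of genes to whole-transcriptome comparisons across patient cohorts is what turns a laptop-sized calculation into a supercomputing job.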

Jacobson says, "It could affect other organs in this way as well. There is a broad range of symptoms being observed across the patient population. For example, if you have a lot of fluid leaking out of the blood vessels in your brain, this could tend to lead to many of the neurological symptoms

A normal blood vessel, shown at top, is compared with a blood vessel affected by excess bradykinin. A hyperactive bradykinin system permits fluid, shown in yellow, to leak out and allows immune cells, shown in purple, to squeeze their way out of blood vessels.

The research team also examined the relationship between vitamin D binding sites and the genes in the RAS-bradykinin pathways. Vitamin D helps to regulate the RAS pathway. Vitamin D deficiency has been associated with severe cases of COVID-19. Clinical, pharmaceutical, and research partners are needed to understand Vitamin D's role in treatment.

Jacobson says, "The vitamin D link was an interesting one that affects the very early steps of the RAS pathway. It is simply one component involved in a complex system, and we're probably going to have to target multiple treatments across the entire system to break the cascade. One single intervention alone is probably not going to solve it. But if we can understand all the different components and target those collectively, I think we have a better shot at it."

Another potential therapeutic development path is to repurpose existing FDA-approved drugs such as Danazol, Stanozolol, Icatibant, Ecallantide, Berinert, Cinryze and Haegarda to reduce the amount of bradykinin signaling and prevent escalation of the bradykinin storm. Partnerships with pharmaceutical companies and clinical research are needed to design and implement the right clinical trials to see how these types of treatments can be applied.

Jacobson says, "In other work, we are also looking at the SARS-CoV-2 virus itself from a systems biology perspective and think that attempts to inhibit the virus itself will also probably require a combinatorial strategy. It's probably unlikely that there will be a single solution but instead, there will need to be a collection of therapies, similar to what's been done with HIV. We will probably need to have a cocktail of different drugs to help contain the virus. So, it's possible that we will need a combinatorial approach to therapies both on the human side and on the viral side."

Dan's team at ORNL has been consciously building explainable-AI tools for applications across many research areas. Coupled with the Summit supercomputer, Dan's team can examine gene expression data at a much larger scale in a fraction of the time it would take on a desktop computer. One of the difficulties of large-scale gene expression research is that often associations must be generated across a large population of people and tissues. It takes significant computing power and the integration of results from other existing research to make sense of the data. They examined 17,000 different samples of people and their organ tissues to understand the normal gene expression patterns involved in uninfected individuals.

IBM's Summit Supercomputer at ORNL

Jacobson says, "There was a Sunday afternoon eureka moment just staring at the data in the context of different pathways. We've been very interested in the RAS pathways because coronaviruses so often target them. When we looked at the Covid-19 expression data in the context of the RAS pathways, these patterns jumped out at me by simply looking at the data in a different way."

Using systems biology, the underlying environmental and biological considerations can be examined with explainable AI and supercomputing. The group works on other projects spanning a broad range of biology, including bioenergy, microbiomes, cardiovascular disease, autism, opioid addiction, and suicide, to name a few. Building tools that apply to a variety of projects not only saves researchers time; it can also add a level of additional transparency into the process to ensure accuracy. The necessary creative aspects of the research process can be taken to the next level with the productive use of various explainable-AI tools.

Dan Jacobson's work at ORNL for the Department of Energy, along with that of his colleagues at ORNL, the Veterans Administration, Yale University, Cincinnati Children's Hospital and the Versiti Blood Research Institute, will likely usher in a new era of using supercomputing and explainable AI to help researchers take a more holistic view of basic scientific research, emphasizing the need to understand the body's mechanisms to find cheaper and better ways to develop clinical treatments.

Original article: Garvin, M.R., Alvarez, C., Miller, J.I., Prates, E.T., Walker, A.M., Amos, B.K., Mast, A.E., Justice, A., Aronow, B. and Jacobson, D., 2020. A mechanistic model and therapeutic interventions for COVID-19 involving a RAS-mediated bradykinin storm. eLife, 9, e59177.

The rest is here:

Researchers Use Supercomputers To Discover New Pathway For Covid-19 Inflammation - Forbes

Supercomputer-Powered Research Uncovers Signs of ‘Bradykinin Storm’ That May Explain COVID-19 Symptoms – HPCwire

Doctors and medical researchers have struggled to pinpoint, let alone explain, the deluge of symptoms induced by COVID-19 infections in patients, and what was once seen as a purely respiratory virus is beginning to be seen as a far more wide-ranging ailment. Now, new research from a team at Oak Ridge National Laboratory (ORNL) is using supercomputing power to illuminate how SARS-CoV-2 instigates various symptoms in hitherto unexplained ways.

The team, led by Dan Jacobson (chief scientist for computational systems biology at ORNL), compared the genes of cells from the lung fluids of nine COVID-19-infected patients to similar cells from 40 control patients. They were hunting for co-expression, that is, correlation between certain genes being active or inactive. To comb through the genetic data, they turned to two of ORNL's HPC systems: Summit and Rhea.

Summit's 4,608 nodes, each powered by two IBM Power9 CPUs and six Nvidia Volta GPUs, deliver 148 Linpack petaflops, placing it second on the most recent Top500 list of the world's most powerful supercomputers. Rhea, meanwhile, is a 521-node cluster equipped with Intel Xeon E5 CPUs and Nvidia K80 GPUs that is well-suited for post-processing of data from more powerful systems.

Using this computing firepower, the researchers completed 2.5 billion correlation calculations over the course of a week. Then, they had what Jacobson describes as a eureka moment. In COVID-19-infected cells, the researchers found increased expression of enzymes that produce bradykinin (a compound that makes blood vessels dilate and become permeable), decreased expression of enzymes that break down bradykinin, and decreased expression of an enzyme that helps stall a catastrophic cascade: a bradykinin storm.

Based on their results, the team believes that a bradykinin storm may explain COVID-19's wide range of symptoms, such as muscle pain, fatigue, headaches and brain fog, better than the feared cytokine storm, a similar (and much better-known) effect where the body releases too many of the cytokine proteins that help regulate the human immune system.

"We believe that when you take the inhibition at the top of this pathway off, you end up with an out-of-control cascade that leads to an opening up of the blood vessels, causing them to leak," Jacobson said in an interview with ORNL's Rachel Harken. "If that happens in the lung, that's not good. Immune cells that are normally contained in the blood vessels flood into the surrounding infected tissue, causing inflammation."

The team is hopeful that if their results are validated by experimental analysis, at least ten different known drugs may prove promising to assist patients suffering from a bradykinin storm. "If we can block this pathogenesis in severe patients, we can keep the human response from going overboard and give their immune system time to fight off the virus so they can recover," Jacobson said.

The researchers also found that the lung fluids of COVID-19 patients had higher expression of genes that increase the production and decrease the breakdown of hyaluronic acid, a substance that can make patients feel like they're trying to breathe through Jell-O. The team is hopeful that drug compounds known to treat this acid buildup will now also be explored.

To read ORNL's Rachel Harken's reporting on this research, click here.

See the original post:

Supercomputer-Powered Research Uncovers Signs of 'Bradykinin Storm' That May Explain COVID-19 Symptoms - HPCwire

Celtic and Rangers title race outcome predicted by betting supercomputer – HeraldScotland

CELTIC will make history with ten in a row as the Scottish Premiership season gets underway, according to betting firm unikrn's supercomputer calculations.

Brainboxes at the bookmaker have used a number of different markets and a prediction algorithm to determine the final table and it's good news for the Hoops' quest to make history.

Rangers find themselves in a familiar spot in second place with Aberdeen booked for another bronze medal to complete the top three.

At the other end of the table, the number crunching doesn't bode well for Hamilton, who are predicted to finish last, while St Mirren's fate will be decided in the play-offs.

The system is based on factoring in a range of the most informative betting markets for the final outcome of the season, including title winner, 'winner without the Old Firm', bottom six and bottom place.

A unikrn spokesperson said: "It might not come as much of a surprise that Celtic are booked for ten in a row and our calculations read well for Hoops fans ahead of the restart. Steven Gerrard's Rangers will be hoping to run them close, but the numbers suggest they're booked for second place again.

"Things are looking bleak for Hamilton, who will fight with St Mirren to avoid finishing last and earn a last-chance place in the play-offs."

Scottish Premiership Supercomputer from unikrn

1. Celtic (1/2) (title odds)

2. Rangers (7/4)

---

3. Aberdeen (2/1) (winner without Celtic and Rangers)

4. Hibernian (7/2)

5. Motherwell (5/1)

6. Kilmarnock (14/1)

---

7. Livingston (2/5) (to finish bottom 6)

8. St. Johnstone (1/2)

9. Dundee United (3/5)

10. Ross County (1/10)

---

11. St Mirren (7/4) (to finish bottom)

12. Hamilton (6/4)

Read more from the original source:

Celtic and Rangers title race outcome predicted by betting supercomputer - HeraldScotland

Nvidia reportedly in advanced talks to buy Arm – ZDNet

Nvidia is in advanced talks to acquire chip designer Arm from Softbank, according to reports -- a move that would be sure to draw regulatory scrutiny. According to the Financial Times and Bloomberg, the potential deal could value Arm at more than $32 billion, the price at which it was acquired by SoftBank four years ago.

Nvidia's market value has already surpassed that of Intel, and purchasing Arm could give the GPU maker a much broader footprint in the semiconductor industry. The deal could elicit objections from companies that license Arm's technology, including Apple, Broadcom or Qualcomm.

While Arm's technology is already dominant in mobile devices, the company has recently expanded its reach in several other areas as well. The company hit a huge milestone in the infrastructure space last month when Japan's Fugaku supercomputer became the first ARM-powered supercomputer to be dubbed the fastest computer in the world. And last year, the company finally gained real traction in the data center market with the debut of Amazon Web Service's Arm-based Graviton2 processor. Additionally, Apple last month announced plans to move the Mac to its own processors based on Arm.

SoftBank is now considering selling Arm to bolster the firm's cash reserves as part of a $41 billion debt reduction program. Earlier this month, Arm announced plans to shed its two IoT Services Group (ISG) businesses, spinning them off into new entities owned and operated by SoftBank. The move would let Arm focus more on its core semiconductor IP business.

Here is the original post:

Nvidia reportedly in advanced talks to buy Arm - ZDNet

NVIDIA Claims To Have Won MLPerf Benchmarking, But Google Says Otherwise – Analytics India Magazine

With the third round of MLPerf benchmarking results out, graphics giant NVIDIA announced that it had broken AI performance records, with its products becoming the fastest commercially available options for AI training. On the other hand, Google has also proclaimed acing the MLPerf tests with the world's fastest training supercomputer.

Although both companies have showcased significant achievements in speeding up the training of ML models, which would indeed be critical for research breakthroughs, there is a nuance that deserves attention.

Similar to the SPEC (Standard Performance Evaluation Corporation) and TPC (Transaction Processing Performance Council) benchmarks, MLPerf is an industry-standard benchmark designed to measure the time to train ML models on a set of specific tasks. The MLPerf consortium comprises some 80 companies and universities from around the world; other prominent vendors competing in the benchmark tests were Intel, with its Xeon processors, and Huawei.


The latest results announced by MLPerf were similar to the previous two rounds, with NVIDIA standing firm at the top of all 16 benchmarks with its commercially available hardware and software products for a variety of ML tasks. However, Google's Tensor Processing Unit surpassed NVIDIA's results on most tasks in the research-projects category.

In a recent tweet, Google AI lead Jeff Dean shared his excitement about setting records in six out of eight benchmarks in the MLPerf tests. According to the results, Google topped the training of DLRM, Transformer, BERT, SSD, ResNet-50 and Mask R-CNN, leveraging its new machine learning supercomputer and TPU chip. In fact, Google's new supercomputer is believed to be 4x larger than its Cloud TPU v3 Pod, which set records in the previous round, making Google the first cloud provider to outperform on-premise systems.

While Google has powered up its Cloud TPUs, providing faster training times for ML models, NVIDIA's A100 GPU also delivered five petaflops of performance, excelling on all eight MLPerf benchmarks. The A100 is the first processor built on NVIDIA's Ampere architecture, which allows the GPU to address different-sized acceleration needs, from small jobs to large multi-node workloads.

Strangely enough, perhaps because of the A100's capabilities, the only other companies to submit results for commercially available servers were Google and Huawei, and only for two categories: image classification and natural language processing. NVIDIA's results therefore remained on top in system design and training models, beating Huawei, Google's TPU and Intel, which has recently switched to Habana's AI chips.

According to the results, it took about 49 seconds for NVIDIA's DGX A100 systems to train BERT, which is better than Google's 57 minutes. Understanding its position, Google submitted its TPU v4 chip in the research category to strengthen its standing, and it delivered astounding performance on many training tasks. That said, the company hasn't yet released the chip on Google Cloud, which again could put it behind the A100, which is already in commercial production.


These pointers highlight how NVIDIA single-handedly dominates the commercial category, with other vendors like Dell EMC, Alibaba and even Google using the A100 to submit their performance results. However, even though Google's TPU will not be available on the market for some time, it showcased impressive performance across many MLPerf tasks. Interestingly, Intel has also joined the fray with a soon-to-be-released CPU, though it is too early to predict its success in applications.


On another note, MLPerf added a new test to reflect the growing application of machine learning in production settings: the Deep Learning Recommendation Model (DLRM). Although NVIDIA performed brilliantly on the newly added benchmark, it might still lag in building recommendation engines because of the massive amount of supercomputing memory required. Google, on the other hand, with its supercomputing abilities, has recently launched a beta version of Recommendation AI, making it much easier for developers to build recommendation engines.

Having said that, even though MLPerf assesses almost all aspects of AI performance, two vital parameters left out of this benchmark are the price of the chips and computers, and their energy consumption, both critical in today's era. Machines with better accuracy and more chips will consume more energy and be heavier on the pocket, which might put Google in the lead once again.

With all that information in hand, it can be established that a trend toward bigger hardware is gaining traction, which will drastically reduce the training time of machine learning models. However, NVIDIA is not the only one dominating the market: Google's supercomputer also showed tremendous results in training ML models.

While NVIDIA and Google will continue their race for the highest AI performance records, as NVIDIA stated in its company blog, the real winners are the customers who will now be able to leverage these advances to transform their businesses faster with AI.



Continued here:

NVIDIA Claims To Have Won MLPerf Benchmarking, But Google Says Otherwise - Analytics India Magazine

Continental is supercharging the development of driver-assistance tech – CNET

There's not much to see here, but the processing power in this room is staggering.

No self-driving cars are available today, but like it or not, they're coming. Automakers and supplier companies around the world are hard at work developing the technology that will enable these autonomous machines. Accelerating its own efforts, German firm Continental has set up a new supercomputer specifically for developing advanced driver-assistance technologies.

The unit in question is built around more than 50 Nvidia DGX systems and has been up and running since the beginning of the year. This supercomputer dramatically increases Continental's number-crunching capabilities. In the same amount of time, researchers can now run more than 14 times as many tests as they could before.


It's difficult to say if this is the most powerful supercomputer in the automotive industry, but it's certainly near the top. "It puts Continental in a pretty good spot," said Phil van den Berge, Nvidia's vice president of automotive. Developing advanced driver-assistance technology is tremendously intensive work, he added, and having a purpose-built architecture like this one is a major help.

"That is now the game-changer," said Christian Schumacher, head of program management systems in Continental's advanced driver-assistance systems business unit. The new supercomputer can enable things the company only dreamt of doing just a decade ago.

Autonomous cars are coming whether you like it or not.

To help develop advanced automotive technologies, Continental operates a large fleet of evaluation vehicles around the world. Combined, they log around 9,300 miles of driving per day, collecting around 100 terabytes of data in the process. "In the past, it was impossible to deal with this [volume of] data," explained Schumacher. But thanks to the new computer, Continental can actually use it. He said tasks that took weeks to run can now be completed in just days -- a huge improvement.

Not only can this supercomputer chew through all the information collected from real-world testing, it can also generate data synthetically. "A lot of the situations out there we cannot predict," said Schumacher, so their new system can basically conjure up new simulations out of thin air. Of course, this sort of testing will never replace physical cars and actual driving, but he noted this capability is still hugely important in the development of driver-assistance tech.

And if they do run short of number-crunching capability, researchers and engineers can still tap into internet-based services for a little extra horsepower. Continental's new supercomputer is located in Frankfurt, Germany, a city with nearby cloud providers and other benefits. Helping keep the local environment just a little cleaner, Schumacher said, "We have certified green energy that is used to power the computer," which is air cooled and, as you might imagine, hungry for electricity.

Dude, you're getting a Dell... or rather, something much more powerful from Nvidia.

On the road to autonomy, "What we are currently targeting is Level 2 [plus]," said Schumacher, which Continental is looking to roll out in the 2022-to-2023 timeframe. The SAE vehicle autonomy scale runs from Level 0, which offers no automation whatsoever, to Level 5, where the vehicle drives itself under all conditions. Level 2 provides things like lane centering and adaptive cruise control, while Level 3 throws traffic-jam assistance and automatic steering into the mix, though the driver must still pay attention and be able to take over if the system demands it.

Autonomous vehicles are not here yet, and it's debatable when they'll arrive, but they're likely just a little bit closer to reality now that Continental has invested in extra processing power.


The rest is here:

Continental is supercharging the development of driver-assistance tech - CNET

COVID-19 Pandemic Can Help More of Us Learn About Climate Change – UT News | The University of Texas at Austin

In a climactic scene in the 1983 movie WarGames, as the supercomputer rushes to find a pattern of missile launches that would win a nuclear war with the Soviet Union, the protagonist has the computer play tic-tac-toe against itself. After learning that neither side ever wins in tic-tac-toe, the computer recognizes the analogy to nuclear war and decides not to launch a strike.

Usually it takes a gifted writer to find analogies that make perfect learning opportunities, but COVID-19 can help us learn our lesson about another potential catastrophe: climate change.

The pandemic has several key properties. Without good social distancing behaviors, the growth of the number of cases is exponential. There is also a delay of about two weeks between the behaviors people engage in and the time it takes for people to get diagnosed with the disease. So, when regions relax their social distancing measures, it takes a few weeks before the cases start to be recorded.

Finally, exponential growth is hard to detect at first because early on when you plot the growth in cases over time, it is hard to tell the difference between exponential growth and straight line growth. Unfortunately, by the time you really can tell the difference between them, the number of cases is growing very quickly.
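A small numerical illustration (not from the op-ed; the growth rates below are arbitrary, chosen only to make the curves comparable) shows how close exponential and linear growth look early on, and how sharply they diverge later.

    # Toy comparison of exponential vs. linear growth in the early phase.
    exponential = [100 * 1.05 ** day for day in range(60)]   # 5% daily growth
    linear = [100 + 6 * day for day in range(60)]            # 6 new cases per day

    for day in (7, 14, 30, 60):
        e, l = exponential[day - 1], linear[day - 1]
        print(f"day {day:2d}: exponential ~{e:6.0f}, linear ~{l:6.0f}, ratio {e / l:.2f}")

    # At two weeks the two curves differ by only a few percent; by day 60 the
    # exponential curve is roughly four times larger and still accelerating.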

This combination of factors contributed to the recent surge in COVID-19 cases in many states. Governors and local officials began relaxing social distancing rules. For about a month, cases in those states rose slowly in ways that could easily have been interpreted as demonstrating that the increased social contact was not contributing significantly to the spread of the disease.

Now we know that it has, to the point where many states that had taken a relaxed attitude toward the pandemic are placing restrictions on business openings and are requiring people to wear masks in public.

Even if people do start engaging in more social distancing now, it will be a few weeks before the growth in the number of cases starts to flatten out, because our current behavior affects the cases we are able to detect in about two weeks. And knowing what we do now, it probably would have been best if we had acted more quickly to get people to wear masks in public and to keep their distance from other people.

This situation is an excellent analogy to climate change. Increasing greenhouse gas emissions are leading to an exponential increase in temperature on Earth. The effects of climate change are delayed, though, so the greenhouse gases released now have their impact on climate in the future. For years, temperatures have been rising slowly, in ways that were hard to distinguish from simple linear growth, but now they are growing more quickly. Yet, people have not been willing to change their individual behavior much, and governments have been reluctant to pursue environmentally friendly policies.

The critical lesson is that we do not have to wait for more evidence that we are in an exponential growth situation before acting to protect the climate. Because of the lag between action and effect, we need to take steps now to reduce our carbon footprint.

In addition, like the pandemic, it is the collective action of the world community that is required. That means we need to put social pressure on other countries to reduce emissions even as we take steps ourselves. Reengaging with the Paris Agreement would be a step in that direction. Just as many governors have reversed course on relaxing social distancing guidelines in the face of new infections, the United States must admit that we have not taken climate change seriously enough while there is still time to prevent catastrophe.

Ultimately, the pandemic has taught us that changing behavior only after the growth in cases is clearly exponential has made it difficult to get the number of new cases under control and has threatened to overrun the capacity of hospitals in many regions.

Will we be able to apply this logic to climate change before it is too late? Perhaps we can take time during the pandemic to start flattening the climate change curve.

Art Markman is a professor of psychology and marketing and executive director of the IC2 Institute at The University of Texas at Austin. He is the author of Smart Thinking, which explores the role of analogies in effective thought.

A version of this op-ed appeared in the San Antonio Express News.

See original here:

COVID-19 Pandemic Can Help More of Us Learn About Climate Change - UT News | The University of Texas at Austin

From rocks to icebergs, the natural world tends to break into cubes – Science Magazine

A scanning electron micrograph of dolomite, a mineral with a striking rhombohedral structure

By Adam Mann, Jul. 27, 2020, 3:25 PM

Perhaps the Cubists were right. Researchers have found that when everything from icebergs to rocks breaks apart, their pieces tend to resemble cubes. The finding suggests a universal rule of fragmentation at scales ranging from the microscopic to the planetary.

"It's a very beautiful combination of pure mathematics, materials science, and geology," says Sujit Datta, a chemical and biological engineer at Princeton University who was not involved in the work.

The finding builds on the previous work of mathematician Gábor Domokos of Budapest University of Technology and Economics, who in 2006 helped prove the existence of the gömböc, a gemstonelike shape that has only one stable balance point. Set a gömböc down on a table, and it will always come to rest in the exact same position, unlike, say, a cylinder, which can rest on its end or its side. In subsequent work, Domokos and his colleagues found that entities such as pebbles washing downriver and sand grains blowing in the wind tend to erode toward gömböc-ish shapes without ever achieving that ideal. "The gömböc is part of nature, but only as a dream," Domokos says.

He and his team then turned to the other side of this process: how rocks themselves are born. They started their study by fragmenting an abstract cube in a computer simulation, slicing it with 50 two-dimensional planes inserted at random angles. The planes cut the cube into 600,000 fragments, which were, on average, cubic themselves, meaning that, on average, the fragments had six sides that were quadrangles, although any individual fragment need not be a cube. The result led the researchers to suspect that cubes might be a common feature of fragmentation.

The researchers tried to confirm this hunch using real-world measurements. They headed to an outcrop of the mineral dolomite on the mountain Hármashatárhegy in Budapest, Hungary, and counted the number of vertices in cracks in the stone face. Most of these cracks formed squarish shapes, which is one of the faces of a cube, regardless of whether they had been weathered naturally or had been created by humans dynamiting the mountain.
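A simplified two-dimensional analogue of the idea can be sketched in a few lines of Python with the shapely library: slice a square with random lines and check that the average fragment has about four vertices, the 2D counterpart of the "average cube" result. This is an illustrative assumption, not the authors' actual code or model.

    # 2D analogue of random fragmentation (illustration only): cut a unit square
    # with random lines and report the average number of vertices per fragment.
    import random
    import math
    from shapely.geometry import Polygon, LineString
    from shapely.ops import split

    random.seed(1)
    fragments = [Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])]

    for _ in range(50):                       # 50 random cuts, echoing the 50 planes in 3D
        x, y = random.random(), random.random()
        theta = random.uniform(0, math.pi)
        dx, dy = 10 * math.cos(theta), 10 * math.sin(theta)
        cut = LineString([(x - dx, y - dy), (x + dx, y + dy)])
        new_fragments = []
        for piece in fragments:
            parts = split(piece, cut)         # leaves the piece unchanged if the line misses it
            new_fragments.extend(g for g in parts.geoms if g.geom_type == "Polygon")
        fragments = new_fragments

    vertex_counts = [len(f.exterior.coords) - 1 for f in fragments]
    print(len(fragments), "fragments, average vertices per fragment:",
          round(sum(vertex_counts) / len(vertex_counts), 2))

The average lands close to four, mirroring the squarish cracks counted in the dolomite outcrop.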

Finally, the team created more powerful supercomputer simulations modeling the breakup of 3D materials under idealized conditions, like a rock being pulled equally in all directions. Such cases formed polyhedral pieces that were, in an average sense, cubes, the researchers report this week in the Proceedings of the National Academy of Sciences.

Skeptics might point out that many things in the natural world don't fragment into cubes. Minerals such as mica, for instance, come off in flakes, whereas basaltic formations including the Giant's Causeway in Northern Ireland break into hexagonal columns.

That's because real materials are not like the idealized forms found in the team's simulations, says Douglas Jerolmack, a geophysicist at the University of Pennsylvania and co-author of the paper. They usually contain interior structures or properties that favor noncubic breakages. For example, mica flakes because it is weaker in one direction than in the perpendicular directions. But in a statistical, averaged sense, rocks are born as something that is "a vague shadow of a cube," Jerolmack says. The findings, he adds, could help hydrologists predict fluid flow through cracks in the ground for oil extraction, or help geologists calculate the sizes of hazardous rocks breaking off cliff faces.

Some find the study a bit difficult to parse, however. "You need to have this abstract theoretical view of earth surface processes to really dig into what this can mean," says Anne Voigtländer, a geologist at the GFZ German Research Centre for Geosciences. "It's sometimes hard for geologists to understand the value of it, or to see where it applies."

Jerolmack agrees that, in some sense, the result is more philosophical than scientific. He notes that his team took inspiration from the Greek philosopher Plato, who related each of the four classical elements (earth, air, fire, and water) to a regular polyhedron, coincidentally linking earth with the cube. But Plato is more remembered for his allegory of the cave, in which he speculated about certain idealized and eternal forms, of which only garbled versions existed in the real world. "With the naked eye you see distorted images, the fragments," Domokos says. "But in order to see the ideal, you have to use your mind."

Excerpt from:

From rocks to icebergs, the natural world tends to break into cubes - Science Magazine

New Data on Genetic Expression In Severe COVID-19, Pre-Existing Immune Response – Bio-IT World

July 31, 2020 | Research continues to uncover the underlying biology of SARS-CoV-2 and reveal some surprises. A German team found that 35% of their healthy controls had pre-existing SARS-CoV-2 cross-reactive T cells, and several groups are narrowing down the gene expression signatures that might explain why COVID-19 is so severe in some patients.

Literature Updates

In a preprint made available by Nature (peer reviewed and accepted for publication, but not copy edited or typeset), a German team from Charité - Universitätsmedizin Berlin and the Max Planck Institute for Molecular Genetics detected SARS-CoV-2 S-reactive CD4+ T cells in 83% of 18 patients with COVID-19 but also in 35% of the 68 healthy controls. "The role of pre-existing SARS-CoV-2 cross-reactive T cells for clinical outcomes remains to be determined in larger cohorts," the authors write. "However, the presence of S-cross-reactive T cells in a sizable fraction of the general population may affect the dynamics of the current pandemic, and has important implications for the design and analysis of upcoming COVID-19 vaccine trials." DOI: 10.1038/s41586-020-2598-9

A German team has characterized the papain-like protease PLpro that is implicated in evading host anti-viral immune responses. They have just published a biochemical, structural and functional characterization of the SARS-CoV-2 PLpro (SCoV2-PLpro) in Nature and outlined differences from SARS-CoV PLpro (SCoV-PLpro) in controlling host interferon (IFN) and NF-κB pathways. While SCoV2-PLpro and SCoV-PLpro share 83% sequence identity, they exhibit different host substrate preferences. In particular, SCoV2-PLpro preferentially cleaves the ubiquitin-like protein ISG15, whereas SCoV-PLpro predominantly targets ubiquitin chains. The results highlight a dual therapeutic strategy in which targeting of SCoV2-PLpro can suppress SARS-CoV-2 infection and promote anti-viral immunity. DOI: 10.1038/s41586-020-2601-5

Analyses of lung fluid cells from 19 COVID-19 patients and 40 controls conducted on Oak Ridge National Laboratory's Summit supercomputer point to gene expression patterns that may explain the runaway symptoms produced by the body's response to SARS-CoV-2. The computational analyses suggest that genes related to one of the body's systems responsible for lowering blood pressure, the bradykinin system, appear to be excessively "turned on" in the lung fluid cells of those with the virus. The results were published in eLife. Based on their analyses, the team posits that bradykinin, the compound that dilates blood vessels and makes them permeable, is overproduced in the body of COVID-19 patients; related systems either contribute to overproduction or cannot slow the process. Excessive bradykinin leads to leaky blood vessels, allowing fluid to build up in the body's soft tissues. DOI: 10.7554/eLife.59177

In a newly published study in Cell Discovery, researchers from Nanjing University and two other groups from the Wuhan Institute of Virology and the Second Hospital of Nanjing present a novel finding that the absorbed miRNA MIR2911 in honeysuckle decoction (HD) can directly target SARS-CoV-2 genes and inhibit viral replication. The authors posit that drinking HD may accelerate the negative conversion of COVID-19 patients.

By reconstructing the evolutionary history of SARS-CoV-2, an international research team of Chinese, European and U.S. scientists has discovered that the lineage that gave rise to the virus has been circulating in bats for decades and likely includes other viruses with the ability to infect humans. They published their findings in Nature Microbiology. The team used three different bioinformatic approaches to identify and remove the recombinant regions within the SARS-CoV-2 genome. Next, they reconstructed phylogenetic histories for the non-recombinant regions and compared them to each other to see which specific viruses have been involved in recombination events in the past. They were able to reconstruct the evolutionary relationships between SARS-CoV-2 and its closest known bat and pangolin viruses. DOI: 10.1038/s41564-020-0771-4

In a paper published in Nature Microbiology, an international team from Germany, Switzerland, and the US showed that lymphocyte antigen 6 complex, locus E (LY6E) potently restricts infection by multiple CoVs, including SARS-CoV, SARS-CoV-2 and MERS-CoV. Mechanistic studies revealed that LY6E inhibits CoV entry into cells by interfering with spike protein-mediated membrane fusion. "These findings advance our understanding of immune-mediated control of CoV in vitro and in vivo, knowledge that could help inform strategies to combat infection by emerging CoVs," the authors write. DOI: 10.1038/s41564-020-0769-y

Researchers at Yale tracked the progress of 113 patients admitted to Yale New Haven Hospital for COVID-19 and analyzed the varying immune system responses they exhibited during their hospital stay. They found an association between early, elevated cytokines and worse disease outcomes. "Following an early increase in cytokines, COVID-19 patients with moderate disease displayed a progressive reduction in type-1 (antiviral) and type-3 (antifungal) responses," the authors wrote. "In contrast, patients with severe disease maintained these elevated responses throughout the course of disease" and saw an increase in multiple type-2 (anti-helminth) effectors including IL-5, IL-13, IgE and eosinophils. They published their findings in Nature. DOI: 10.1038/s41586-020-2588-y

Using a cohort of 782 COVID-19-positive patients and 7,025 COVID-19-negative patients, a team of researchers in Israel identified that low plasma vitamin D levels appear to be an independent risk factor for COVID-19 infection and hospitalization. The research was published in The FEBS Journal. The mean plasma vitamin D level was significantly lower among those who tested positive than among those who tested negative for COVID-19. DOI: 10.1111/febs.15495

Dutch researchers have identified a genetic link to severe COVID-19 disease among healthy, young men. In two separate families with severely sick young men, the researchers found mutations in the gene encoding Toll-like receptor 7. "Rare putative loss-of-function variants of X-chromosomal TLR7 were identified that were associated with impaired type I and II IFN responses," the authors write in JAMA. "These preliminary findings provide insights into the pathogenesis of COVID-19." DOI: 10.1001/jama.2020.13719

Loss of smell is the main neurological symptom of COVID-19, but the underlying mechanism has been unclear. New research published in Science Advances suggests that olfactory sensory neurons are not vulnerable to SARS-CoV-2 infection because they do not express ACE2, a key protein that the virus uses to enter human cells. Instead, loss of smell stems from infection of nonneuronal supporting cells in the nose and forebrain. DOI: 10.1126/sciadv.abc5801

A global research team has identified 21 existing drugs that stop the replication of SARS-CoV-2, the virus that causes COVID-19, at safe human doses. They found the drugs by analyzing 12,000 clinical-stage or FDA-approved small molecules for their ability to block the replication of SARS-CoV-2. Their findings will be published in Nature. DOI: 10.1038/s41586-020-2577-1

Researchers at King Abdullah University of Science and Technology (KAUST) used comparative pangenomic analysis of all sequenced reference Betacoronaviruses to determine that the envelope protein E is shared between SARS and SARS-CoV-2. They suggest the E protein as an alternative therapeutic target to be considered in further studies to reduce complications of SARS-CoV-2 infection in COVID-19. The spike (S) protein of SARS-CoV-2 has been the prime target of vaccine work thus far, and while the S proteins from SARS and SARS-CoV-2 are similar, structural differences in the receptor binding domain (RBD) preclude the use of SARS-specific neutralizing antibodies to inhibit SARS-CoV-2. The KAUST team looked elsewhere for similarities, using comparative pangenomic analysis complemented with functional and structural analyses, and found that among all core gene clusters present in these viruses, the envelope protein E shows a variant cluster shared by SARS and SARS-CoV-2 with two completely conserved key functional features, namely an ion channel and a PDZ-binding motif (PBM). The work was published in Frontiers in Cellular and Infection Microbiology. DOI: 10.3389/fcimb.2020.00405

In a study published in JAMA Cardiology, German researchers looked at 100 patients aged 45-53 who had recovered from COVID-19 (67 at home; 33 had been hospitalized) and had no pre-existing heart conditions. Cardiac MRI after recovery showed 78% with some cardiac involvement; 60% of those showed ongoing myocardial inflammation. "Our findings reveal that significant cardiac involvement occurs independently of the severity of original presentation and persists beyond the period of acute presentation, with no significant trend toward reduction of imaging or serological findings during the recovery period," the authors write. "Our findings may provide an indication of potentially considerable burden of inflammatory disease in large and growing parts of the population and urgently require confirmation in a larger cohort." DOI: 10.1001/jamacardio.2020.3557

In a separate JAMA Cardiology publication, also from German researchers, a study of 39 autopsy cases of patients with COVID-19 found that cardiac infection with SARS-CoV-2 was frequent but not associated with a myocarditis-like influx of inflammatory cells into the myocardium. DOI: 10.1001/jamacardio.2020.3551

Researchers at the University of Texas at Austin have engineered the spike protein of the SARS-CoV-2 virus, a critical component of potential COVID-19 vaccines, to be more environmentally stable and to generate larger yields in the lab. They characterized 100 structure-guided spike designs and identified 26 individual substitutions that increased protein yields and stability. Testing combinations of beneficial substitutions resulted in the identification of HexaPro, a variant with six beneficial proline substitutions exhibiting ~10-fold higher expression than its parental construct and the ability to withstand heat stress, storage at room temperature, and three freeze-thaw cycles. They published their findings in Science. DOI: 10.1126/science.abd0826

Read more from the original source:

New Data on Genetic Expression In Severe COVID-19, Pre-Existing Immune Response - Bio-IT World

Superman’s Glasses Are Secretly Used For Mind Control – Screen Rant

Superman has an almost endless number of super powers but did you know he can also use his glasses to hypnotize others? Believe it!

Ah, Superman! Able to defy gravity, bend steel in his bare hands, shrug off bullets, and hypnotize people? Yes, strange as it may seem, Superman's multitude of powers once included super-hypnotism. Even stranger? He was able to enhance his power with his Clark Kent glasses!

Back in the Silver Age, Superman enjoyed a number of offbeat powers, from super-plasticity (which allowed him to reshape his face) and super-ventriloquism (which allowed him to mimic any voice and throw his voice anywhere) to a super-kiss! However, while many of these powers had a limited shelf life, super-hypnosis kept showing up in many of Clark Kent's adventures and eventually became the official reason why no one recognized that Clark Kent was Superman!


Superman could originally use his power of super-hypnosis simply by looking at a person and concentrating. In certain stories, he also employed a watch, which he swung back and forth in front of his subjects like a stage hypnotist. He even managed to use his power of super-hypnosis on himself on certain occasions, although like many hypnotists, he claimed that his subjects needed to want to be hypnotized in order for this superpower to work.

Other times, however, this didn't appear to be true. During one of Mr. Mxyzptlk's schemes, Superman wound up being caught in a warped world where the male and female genders were switched, causing him to contend with a Superwoman and a Wonder Warrior. At one point, he was captured and his superpowers were muted by a helmet full of Kryptonite gas. Despite this, Superman was able to use some reflected light off his helmet to shine a beam into Wonder Warrior's eyes and hypnotize the male Amazon into a deep sleep.

Interestingly then, it appeared that Superman's super-hypnotism was more of a learned ability than a Kryptonian superpower. However, in other Superman comic books, he learned he had unknowingly been using Kryptonian technology for years to super-hypnotize everyone. In Superman #330, Clark finally starts questioning his use of glasses as a disguise and concludes that it's the dumbest disguise he'd ever seen. Unable to explain how his Clark Kent disguise has fooled people all these years, he starts to wonder whether an ace reporter like Lois Lane has simply been humoring him.

Later, however, Superman discovers that the secret to his disguise actually lies in special properties of his glasses. Since the lenses in Clark Kent's glasses are made of the plexiglass window from the rocket ship that brought him to Earth, they amplify his super-hypnotic abilities whenever he wears them, causing people to see Clark Kent as a frailer and weaker man than Superman.

Superman found he could even use his Kryptonian lenses to brainwash his foes, as he did when he tackled the Parasite, an enemy who had leeched away his superpowers, and hypnotized him into giving those powers back. Since the Parasite clearly didn't want to be hypnotized, Clark Kent's glasses apparently gave Superman the power to mind-control anyone he wanted. (Fortunately, he was too super-ethical to use this power immorally, as far as anyone knows.)

Eventually, the mind-controlling glasses and the super-hypnosis power were dropped as Superman entered the modern age. Later writers simply explained that people didn't suspect Clark was Superman because Superman never let on that he had a dual identity. Even Lex Luthor refused to believe Superman would lower himself to pretend to be an ordinary mortal, even after a supercomputer deduced that Clark Kent was Superman. Nevertheless, at one time, Clark Kent's glasses actually did give Superman the power of mind control and allowed the dumbest disguise in the world to fool people for years.


Michael Jung is a mild-mannered freelance writer-for-hire, actor, and professional storyteller with a keen interest in pop culture, education, nonprofit organizations, and unusual side hustles. His work has been featured in Screen Rant, ASU Now, Sell Books Fast, Study.com, and Free Arts among others. A graduate of Arizona State University with a PhD in 20th Century American Literature, Michael has written novels, short stories, stage plays, screenplays, and how-to manuals.

Michael's background in storytelling draws him to find the most fascinating aspects of any topic and transform them into a narrative that informs and entertains the reader. Thanks to a life spent immersed in comic books and movies, Michael is always ready to infuse his articles with offbeat bits of trivia for an extra layer of fun. In his spare time, you can find him entertaining kids as Spider-Man or Darth Vader at birthday parties or scaring the heck out of them at haunted houses.

Visit Michael Jung's website for information on how to hire him, follow him on Twitter Michael50834213, or contact him directly: michael(at)michaeljungwriter(dot)com.

See the original post:

Superman's Glasses Are Secretly Used For Mind Control - Screen Rant

The Israeli company that has come as close as possible to the sun – Haaretz.com

In mid-June, when Education Minister Yoav Gallant was threatening teachers with restraining orders (if they refused to work during the summer) and then-Health Ministry director general Moshe Bar Siman Tov warned against a second wave of the coronavirus in Israel, a spacecraft the size of a minivan was on a bold journey.

The European Space Agency's Solar Orbiter was photographing the sun from a distance of 77 million kilometers (about 48 million miles), approximately half the distance between the sun and Earth. The images, the closest ever taken of the sun, revealed a previously unknown phenomenon: The sun's surface turns out to be covered with miniature solar flares, which the scientists quickly dubbed "campfires." But no less impressive is the fact that the computer that operated the spacecraft's camera was manufactured in the industrial zone, aka "startup village," of Yokneam, a town in Lower Galilee.

The company behind the computer is Ramon.Space, the great Israeli hope in the New Space revolution, that is, the privatization of the space sector. In the past few years, the semiconductor chips produced by the Israeli company have reached halfway to the sun, in the ESA's Solar Orbiter; gone to the moon, in the first Israeli spacecraft (the SpaceIL Beresheet lander, 2019); headed toward Mars, as part of the ExoMars project of the European and Russian space agencies; and participated in the Hayabusa2 asteroid sample-return mission of the Japanese space agency. Next year, Ramon.Space chips will be launched to the Jupiter moons Ganymede, Callisto and Europa as part of the search for signs of life beneath the moons' ice cover.

At the same time, Ramon.Space is also seeking to gain the upper hand in the burgeoning market of Earth-orbiting satellites. "Wherever you look in the sky," says the company's co-founder, Ran Ginosar, "we are there."

How many satellites equipped with your computers are currently in space?

Ginosar: Approximately 200. We don't know about most of them. If it's not a scientific mission, like Solar Orbiter, we're not told. We see it in our sales. The mechanism [in the Defense Ministry] that oversees Israel's security exports protects me [from knowing about military applications of the technology]. Even in completely legitimate places like France, I was told, "If you don't need to know, it's better not to know." What I don't need to know from the legal or commercial point of view, I don't know.

Two hundred is a substantial number, given that there are 5,000 active satellites in space altogether.

This is only the beginning. We want to be the new standard in the field.

Prof. Ginosar, 68, comes from the computer science field, not space research. After graduating from the Technion (Israel Institute of Technology) in Haifa with degrees in electrical engineering and computer science, he completed his Ph.D. at Princeton in 1982 and subsequently returned to the Technion, this time to join the faculty.

There's no shortage of Israeli startups in the realm of Earth-bound computers. When did you become interested in outer space?

I build computer chips. That's my field. And the truth is that I was involved in all kinds of startups over the years. But the Technion was bitten by the space bug. A few students there wanted to launch a microsatellite, and I built the chips for them. The TechSat, the students' satellite, was launched in 2000 and was in operation for 12 years, an all-time record for a microsatellite. After that success, I was approached by the Defense Ministry's Satellite Administration. They asked me to drop the startups and build chips for observation satellites.

Espionage.

Observation. Checking out what's going on with your rivals is called espionage. But there is also early warning and deterrence.

Isn't that a career mistake, working for the Defense Ministry instead of the private market?

Yes. But they said it was a Zionist need, so I agreed, just as I served in Golani [an infantry brigade]. It wasn't exactly a startup. I was asked, alongside my work at the Technion, to do what I knew how to do anyway: to take regular computers and turn them into space computers.

Strategic restrictions

Isn't it cheaper to buy such things from the Americans? They know a thing or two about space.

The Americans impose export restrictions on everything that is in some way security-related. And those are strategic restrictions. In other words, you don't get it for free. You need to give something in return. You need to behave nicely and ask permission. You can use American products only for purposes that the Americans allow.

We don't want to be told what to do. It's important to have blue-and-white technology of this sort, for Israel to be independent in space. So I gathered a few students and colleagues of mine and we started to look into the subject. We saw what was done in other places and we realized that we could manufacture a better chip.

You already had the recipe.

Yes, but there is an Israeli way, because there is no money. If you're poor, you have to go about it very carefully. We can't build a special chip for every type of satellite. What's needed is one chip that will be good for all the missions, a universal chip. We built a first chip and we checked [its functioning in the presence of intense radiation] at the Nahal Sorek nuclear research center. During the initial testing it came out fine. The Defense Ministry told me, "Well done, now you'll make us real computers." Well, the Technion is not the place for that; the Technion doesn't sell products. We needed a company. So we founded Ramon.Space in 2003.

At first you were known as Ramon Chips.

Because at first we were paranoid. The "Ramon" is clear: It was right after [Israeli astronaut] Ilan Ramon was killed. And the "Chips" because we manufacture chips, but we didn't want people to know what we were really doing, namely, making chips for space. Afterward it turned out that the best way to protect yourself is for everyone to know exactly what you're doing. You don't want the Americans saying that you concealed information from them. That's why we switched to Ramon.Space [in 2019].

Every year we still fly to one of the big conferences in America and say, "Here, please, this is what we do, there's none of your technology here, so we are not subject to your export laws." Very quickly we understood that the Israelis are not the only ones who want independence. The Europeans also don't want to be subject to the Americans' strategic umbrella. That's why they prefer to buy from us.

Don't the Germans have their own computer manufacturers?

In the field of chips that are durable in space, there are the American companies, and besides that there are three or four more players in the world. The European projects are government-sponsored, heavy and ponderous. They just tried to copy from the Americans. We thought that as long as we're building a space chip anyway, we might as well build the best chip in the world.

There are 5,000 people in Israel who know how to build chips better than all the Europeans combined. We have a magnificent chip industry, in part thanks to the Intel plant that was built here in 1974, and also because many of the Israelis who were working in Silicon Valley returned to Israel. So we established a private chip company, and we did it with the best human capital there is. We built a super-duper chip here, one that is a whole computer. It's still considered the best and most durable chip of its kind. Thanks to it, we've gotten to places I'd never imagined.

Hayabusa2, for example.

For example. It's true that the development of the first chip was intended for security applications, but not everyone sends up satellites to spy on their adversaries. There are also observation satellites that check whether crop irrigation is effective, or where sandstorms that start in the Sahara end up. And there are satellites that look outward, into space. There are also orbiters and probes and landers that are launched into remote space. And they all need chips. And before I knew it, I'm told that I'm on Hayabusa2.

For readers who don't follow space exploration news, the Japanese space agency's Hayabusa2 (the word means "peregrine falcon" in Japanese) is one of the most ambitious space projects of recent years. In 2018, four years after its launch, the spacecraft rendezvoused with the asteroid Ryugu at a distance of 280 million kilometers (174 million miles) from Earth, or about twice the distance between Earth and the sun. This particular asteroid was chosen for two reasons: It is liable to collide with our planet one day and wipe out humanity; and it contains metals such as cobalt and nickel worth $82 billion (in the estimation of the website asterank.com), which human beings might want to mine one day.

On October 3, 2018, Hayabusa2 deployed a mobile surface scout, built by the German Aerospace Center and called MASCOT, to the surface of the asteroid. MASCOT photographed the ancient gravel and measured the refracted light spectrum, the radiation and the magnetic properties of the asteroid, all of it operated by a computer made by Ramon.Space.

At the same time, the Hayabusa2 mother ship bombarded the asteroid with a projectile and collected the dust that rose from its surface. Hayabusa2 left Ryugu last November, and this December it will parachute to Earth a capsule containing the precious stardust. MASCOT and its Israeli-made processor will remain idle on the asteroid's primeval surface.

Your computer will remain on Ryugu forever. As long as the solar system is here, and as long as the sun doesn't turn into a red giant [star], your chip is out there.

Unless someone volunteers to go there and bring it back. They'd be welcome. I have one on the moon, too, in Beresheet [the Israeli lander that crashed on the lunar surface in April 2019].

How do you feel about your place in space?

Wonderful. It's the farthest any Israeli product has reached; no other Israeli product has gone farther. And next year we're going to outdo even that, with JUICE, the European Space Agency's JUpiter ICy Moons Explorer. It will be flying with our computers to three moons of Jupiter in order to search for microorganisms beneath the surface. Listen, these scientific applications aren't where the business lies; they buy a chip here, a chip there. The real money is in communications and observation satellites. But nothing is more exciting than pure science. Do you know what it's like to come to a school and tell the children that an Israeli space computer is on an asteroid? They're thrilled, I'm thrilled, it's an extraordinary feeling.

Proton troubles

What is the difference between a space computer and a regular computer? Why not simply send a Lenovo laptop to the sun, or an Intel processor to an asteroid? Because computers in space, like people in space, are exposed to a lethal, two-headed monster: solar radiation and cosmic radiation.

Ginosar: Solar radiation, or solar wind, is a stream of charged particles, which is serious trouble. Not because it comes from the sun, but because it gets stuck in two belts in Earth's magnetic field, the Van Allen belts. The inner belt is packed with protons and the outer belt is packed with electrons. Both of them short-circuit the system [when it passes through them]. Do you remember how Beresheet stopped and did a reset? That was because it was hit by a proton from the Van Allen belt.

So what do you do [if that happens]?

Pray. We pass through quickly and carefully, and turn off the electricity where it's not needed. A short can't happen if we turn off the electricity. But that's the lesser problem. Those are two belts that we have to get through. What's truly lethal is cosmic radiation from interstellar space. It's not just particles, it's heavy ions, the heaviest. Uranium. Gold. And they move at tremendous speed, close to the speed of light. Where do they come from? From the most violent cosmic events: supernovas [the explosions of massive stars], collisions with neutron stars, mergers of black holes.

Earth's magnetic field protects us only partially from cosmic radiation. When a particle like that strikes an animal, it tears the DNA, so you have a mutation. A mutation can lead to evolution, but also to cancer. We are all exposed to these particles all the time, no matter where we hide. When an ion strikes an electronic product, it causes a power surge. The damage can be temporary, meaning a mistake in calculation, or permanent: the component can be burned out.

How often is a spacecraft likely to be hit by a particle like that in space?

It could be measured in days or seconds. It's a matter of luck, but in the end it reaches you. That's why in many satellites resources are wasted on duplicate systems. You can make systems redundant at the level of the individual transistor, at the level of the chip, at the level of the whole computer and at the level of the whole spacecraft. Some will say, "I will not send one satellite, I will send three and one will survive." But with our chip you don't need to send three.

What makes your chip durable?

We built the silicon cells in the chip so that they would be immune to radiation damage, and instead of physical redundancy we added mechanisms of calculation redundancy: algorithms that bypass the errors caused by exposure to radiation. Of course, the larger and more sophisticated the computer, the more acute the problem becomes. In the past, all that was needed was a simple controller for the camera in an observation satellite or for the antenna in a communications satellite. But a late-generation communications satellite serves 10,000 iPhones, so it needs 10,000 times as much computing power as a single iPhone has. The demand today is for supercomputers that will also be durable in space.
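Ramon.Space has not published its fault-tolerance algorithms, but the idea behind "calculation redundancy" can be illustrated with a classic technique: run the same computation several times and let a majority vote mask a radiation-induced upset. The Python sketch below is a minimal, hypothetical illustration of that idea, not the company's actual implementation; the function names and the simulated upset rate are invented for the example.

    import random

    def majority_vote(results):
        # Return the value that at least two of the three runs agree on;
        # disagreement across all runs means the upset could not be masked.
        for candidate in results:
            if results.count(candidate) >= 2:
                return candidate
        raise RuntimeError("No majority: uncorrectable upset detected")

    def radiation_prone_add(a, b, upset_probability=0.05):
        # Toy stand-in for hardware that occasionally returns a corrupted result.
        result = a + b
        if random.random() < upset_probability:
            result ^= 1 << random.randint(0, 15)  # flip one random bit
        return result

    def redundant_add(a, b):
        # Triple modular redundancy in time: compute three times, vote once.
        runs = [radiation_prone_add(a, b) for _ in range(3)]
        return majority_vote(runs)

    if __name__ == "__main__":
        print(redundant_add(1200, 34))  # almost always prints 1234

Real flight software combines this kind of voting with error-correcting memory and watchdog resets; the principle, trading extra computation for physical spares, is the same as what Ginosar describes.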

To date, Ramon.Space has been financed primarily by the Defense Ministry, and by the Office of the Chief Scientist and the Israel Space Agency, which are both parts of the Science and Technology Ministry.

"Israel generally invests in startups through the Innovation Authority," explains Avi Blasberger, the ISA's director general. "The one difference is that investment in space companies goes through the ISA. We are trying to promote this industry, and in the end the state benefits from the transactions involving these companies."

At present, Ramon.Space, which has a staff of only 20, is trying to lift off with a new product: not just a space processor to operate simple systems such as steering and cameras, but a 64-core digital signal processor that can process information independently and make decisions in real time, namely by using artificial intelligence or machine learning. By comparison, a new computer on Earth typically comes with a 2-core or 4-core processor. The aim: to charge ahead with the new multicore processor, win contracts worth hundreds of millions of dollars for observation and communications satellites, and gain control of the future and futuristic markets of New Space.

To that end, the Ramonauts launched a campaign to raise capital, in Israel and abroad, targeting private funds. In late 2019, Grove Ventures, whose managing partner is Israeli tech entrepreneur and investor Dov Moran, decided to invest in Ramon.Space, one of the VC fund's few investments in an Israeli space startup.

"We decided to invest in a company that has proved itself," Moran said, "which has developed and manufactured many chips that have taken part in dozens of space missions, and is proud that all of them are continuing to operate."

Moran also brought Ramon.Space its new CEO, Avi Shabtai. "Dov understands that space is the next big thing in tech," Shabtai said.

'Dramatic changes'

People have been promising that space was the next big thing ever since I went to after-school science enrichment programs. For years we've been hearing about space tourism and asteroid mining and orbiting colonies, but we're all still here. What has changed?

Shabtai: Space tourism and asteroid mining are for the distant future. We are talking about building private space infrastructures to network Earth. Even governments are starting to use civilian infrastructures to utilize these services. That is an amazing change. Who would have believed that NASA would agree to launch its own astronauts in the spacecraft of a private company [the launch of a Crew Dragon spacecraft on a rocket of Elon Musk's SpaceX company]?

The new industry could create dramatic changes on Earth, which are hard even to predict. I will give you an example. We at Ramon.Space announced that we want to provide computing infrastructures in order to take cloud computing into space. And the fact is that just a few days ago, Amazon Web Services also appointed someone to set up a team for cloud computing in space.

What is cloud computing in space?

It refers to information networks based in satellites. A satellite talks to another satellite and they process data, whether it's data collected in space or on Earth. Humanity is creating information on an unprecedented scale. Until now all the information was downloaded to Earth and processed here. That takes time and demands resources. We want to do all the processing in space. And for that, a powerful space processor is needed, a supercomputer.

What is the advantage of this kind of flying server farm over a server farm in Finland?

To begin with, every point on Earth can be covered, even if it doesn't have an internet infrastructure. And I'm not just talking about some village in Africa. Drive an hour from Silicon Valley and you'll get to communities where the people make a very good living, but they don't have internet infrastructure. They can't talk on Zoom the way you and I are doing now, and they can't avail themselves of Netflix's streaming services. And besides that, there is a great deal of information that is collected in space that does not necessarily have to be returned to Earth.

Take commercial observation satellites. Someone is prepared to pay for an image of a specific place on Earth, but suddenly clouds appear. Today the client doesn't know that the picture is covered with clouds; he waits for hours until he can download the image, and only then does he see that there are clouds. But imagine a different situation. The photo is taken and sent to a data center located in space that processes it immediately and says: find a different angle, send a different satellite, or just wait an hour until the sky clears up.
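Shabtai is describing an on-orbit triage step rather than a specific product, so the sketch below is purely illustrative: a hypothetical Python routine that estimates cloud cover from a grayscale image tile and decides whether to downlink the image, retask another satellite, or wait. The brightness test, the thresholds and the function names are all assumptions, not Ramon.Space code.

    import numpy as np

    CLOUD_BRIGHTNESS = 200      # assumed: clouds show up as very bright pixels
    MAX_CLOUD_FRACTION = 0.2    # assumed: downlink only if under 20% of the tile is cloud

    def cloud_fraction(tile):
        # tile: 2-D array of 8-bit grayscale pixel values from the imager
        return float(np.mean(tile >= CLOUD_BRIGHTNESS))

    def triage(tile):
        # Decide on board what to do with the image instead of downlinking blindly.
        fraction = cloud_fraction(tile)
        if fraction <= MAX_CLOUD_FRACTION:
            return "downlink"          # the customer gets a usable picture
        if fraction <= 0.8:
            return "retask"            # ask another satellite for a better angle
        return "wait"                  # fully overcast: try again next pass

    if __name__ == "__main__":
        clear_tile = np.full((512, 512), 90, dtype=np.uint8)
        cloudy_tile = np.full((512, 512), 230, dtype=np.uint8)
        print(triage(clear_tile))   # downlink
        print(triage(cloudy_tile))  # wait

A production system would use a trained classifier rather than a brightness threshold, but the payoff is the same one Shabtai describes: the decision is made in orbit, before any bandwidth is spent.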

The director of NASA estimates that the space economy is already generating a turnover of $383 billion a year, more than the entire Israeli economy, and the U.S. treasury secretary estimates that this will grow to trillions by the end of the decade. But most people on Earth are not part of this game: It's unlikely that our readers have ever bought a photograph from an observation satellite.

Not true. Look how GPS changed our lives. A taxi driver doesn't care if the image he gets comes from a cluster of satellites; he wants to navigate with two clicks without needing a map. The New Space revolution ensures that we will have the ability to receive additional, sophisticated services, without necessarily knowing that they come from space. The space economy makes use of space, but at the end of the day it's $400 billion paid in Earth money. Who knows what apps will be developed when we have an infrastructure of cloud computing from space, artificial intelligence from space or internet from space?

Don't your high-tech friends raise an eyebrow every time you say "from space"?

They did at first. But the embarrassment can be overcome with a few success stories. There are so many startups that are not credible, that are selling dreams. I can sit with a friend and tell him that our technology is in a spacecraft that reached the sun, in another one that is orbiting Mars and in a third one that landed on an asteroid. How many people can say that?

Follow this link:

The Israeli company that has come as close as possible to the sun - Haaretz.com

Celtic and Rangers title race outcome predicted by betting supercomputer – Glasgow Times

CELTIC will make history with ten in a row as the Scottish Premiership season gets underway, according to betting firm unikrn's supercomputer calculations.

Brainboxes at the bookmaker have used a number of different markets and a prediction algorithm to determine the final table and it's good news for the Hoops' quest to make history.

Rangers find themselves in a familiar spot in second place with Aberdeen booked for another bronze medal to complete the top three.

At the bottom of the table, the number crunching doesn't bode well for Hamilton, who are predicted to finish last, while St Mirren's fate will be decided in the play-offs.

The system is based on factoring in a range of the betting markets that are most informative about the final outcome of the season, including title winner, 'without the Old Firm', bottom 6 and bottom place.
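unikrn has not published its algorithm, but the basic building block of any odds-based model is converting quoted fractional odds into implied probabilities before ranking or blending them across markets. The short Python sketch below shows that conversion, using the title odds printed later in this article; how unikrn actually combines the markets is not disclosed, so the blending step is left out.

    def implied_probability(fractional_odds):
        # Fractional odds "a/b" mean a profit of a for a stake of b,
        # so the bookmaker's implied probability is b / (a + b).
        a, b = (float(x) for x in fractional_odds.split("/"))
        return b / (a + b)

    title_odds = {"Celtic": "1/2", "Rangers": "7/4"}

    if __name__ == "__main__":
        for team, odds in title_odds.items():
            print(f"{team}: {implied_probability(odds):.1%} implied title chance")
        # Celtic: 66.7%, Rangers: 36.4%; the sum exceeds 100% because of the
        # bookmaker's margin, which a real model would normalize away.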

A unikrn spokesperson said: "It might not come as much of a surprise that Celtic are booked for ten in a row and our calculations read well for Hoops fans ahead of the restart. Steven Gerrard's Rangers will be hoping to run them close, but the numbers suggest they're booked for second place again.

"Things are looking bleak for Hamilton, who will fight with St Mirren to avoid finishing last and earn a last-chance place in the play-offs."

Scottish Premiership Supercomputer from unikrn

1. Celtic (1/2) (title odds)

2. Rangers (7/4)

---

3. Aberdeen (2/1) (winner without Celtic and Rangers)

4. Hibernian (7/2)

5. Motherwell (5/1)

6. Kilmarnock (14/1)

---

7. Livingston (2/5) (to finish bottom 6)

8. St. Johnstone (1/2)

9. Dundee United (3/5)

10. Ross County (1/10)

---

11. St Mirren (7/4) (to finish bottom)

12. Hamilton (6/4)

Read this article:

Celtic and Rangers title race outcome predicted by betting supercomputer - Glasgow Times

PEARC20 Plenary Introduces Five Upcoming NSF-Funded HPC Systems – HPCwire

Five new HPC systems (three National Science Foundation-funded Capacity systems and two Innovative Prototype/Testbed systems) will be coming online through the end of 2021. John Towns, principal investigator (PI) for XSEDE, introduced panelists who described their upcoming systems at the PEARC20 virtual conference on July 29, 2020.

The systems are part of NSF's Advanced Computing Systems & Services: Adapting to the Rapid Evolution of Science and Engineering Research solicitation. The Capacity systems, which will support a range of computation and data analytics in science and engineering, are expected to be available for allocation via XSEDE's process for projects starting Oct. 1, 2021. The Innovative platforms, which will deploy specialized hardware tailored for artificial intelligence, will be available for early user access in late 2021, followed by a production period as the platforms mature.

The Practice and Experience in Advanced Research Computing (PEARC) Conference Series is a community-driven effort built on the successes of the past, with the aim to grow and be more inclusive by involving additional local, regional, national, and international cyberinfrastructure and research computing partners spanning academia, government and industry. Sponsored by the ACM, the world's largest educational and scientific computing society, PEARC20 is now taking place online through July 31.

This year's theme, "Catch the Wave," embodies the spirit of the community's drive to stay on pace and in front of all the new waves in technology, analytics, and a globally connected and diverse workforce. Scientific discovery and innovation require a robust, innovative and resilient cyberinfrastructure to support the critical research required to address world challenges in climate change, population, health, energy and environment.

Anvil: Composable, Interactive, User-Focused

Anvil, the first of the three NSF Category I Capacity Systems, was introduced by principal investigator Carol Song, senior research scientist and director of Scientific Solutions with Research Computing at Purdue University. Song stressed the capabilities of the $9.9-million system in providing composability and interactivity to meet the increasing demand for computational resources, enable new computational paradigms, expand HPC to non-traditional research domains, and train the next generation of researchers and HPC workforce.

"It's not just the CPU nodes or the GPU nodes," Song said. "It's the entire ecosystem that focuses on getting more users onto the significant resources."

Built by Purdue in partnership with Dell, DDN, and Nvidia, Anvil will feature:

The system, which will have a peak performance of 5.3 petaflops, will become operational by Sept. 30, 2021, with early-user access the previous summer. It will be 90% allocated through XSEDE's XRAC allocations system, with the remainder as discretionary allocation by Purdue.

Delta: The Mark of Change

Bill Gropp, director of the National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, introduced the Category I Delta system. With more than 800 late-model Nvidia GPUs, the $10-million resource will be the largest GPU system by FLOPS in NSF's portfolio at launch.

Named after the Greek letter, Delta was chosen to indicate change, said Gropp, PI of the new resource. "There's a lot of change in the hardware and software and the way we make use of the systems," he said. Delta is intended to help drive a broader adoption of GPU technology past the end of Dennard scaling.

Delta will feature:

Delta, like Anvil, will be 90% allocated through XSEDE and will start operations on Oct. 1, 2021.

Jetstream2: An Approaching Front in Cloud HPC

Jetstream2, the final new NSF Category I system, was introduced by PI David Hancock, director for advanced cyberinfrastructure at Indiana University. Building on the success of the Jetstream system, the new $10-million supercomputer will serve a similar role in interactive, configurable computing for research and education, thanks in part to agreements with Amazon, Google, and Microsoft to support cloud compatibility.

"The configuration process for Jetstream2 is in its final phases and is still ongoing," Hancock said. But the new system will feature:

The system, which will combine cyberinfrastructure from Indiana University, Arizona State University, Cornell University, the Texas Advanced Computing Center, and the University of Hawaii, is planned to begin early operations in August 2021 and production by October 2021. Additional partners include the University of Arizona, Johns Hopkins University [Galaxy team], and UCAR [Unidata team]. The system vendor partner for the project will be Dell, Inc. Jetstream2 will be XSEDE-allocated.

Neocortex: The Next Leap Forward in Deep Learning

Paola Buitrago, director of Artificial Intelligence and Deep Learning at the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh, presented on the center's new NSF Category II system, Neocortex. Named for the brain's center for higher functions, the new machine will serve as an experimental testbed of new technology to accelerate deep learning by orders of magnitude, similar to the sea change introduced by GPU technology in 2012.

"It's innovative and it's meant to be exploratory," PI Buitrago said. "In particular, we have one goal: we would like to scale this technology. We aim to engage a wide audience and foster adoption of innovative technologies in deep learning."

The $5-million system will pair Cerebras's CS-1 and Hewlett Packard Enterprise (HPE) Superdome Flex technology to provide 800,000 AI-optimized cores with a uniquely quick interconnect. Neocortex will feature:

Neocortex will enter its early user program in the fall of 2020.

Voyager: Specialized Processors, Optimized Software for AI

Voyager, another $5-million NSF Category II system, was introduced by PI Amit Majumdar of the San Diego Supercomputer Center. Beginning with focused select projects in October 2021, the supercomputer will stress specialized processors for training and inference linked with a high-performance interconnect, x86 compute nodes, and a rich storage hierarchy.

"We are most interested to see this as an experimental machine and see its impact and engagement of the user community," Majumdar said. "So we will reach out to AI researchers from a wide variety of science, engineering and social sciences [fields], and there will be deep engagement with users."

Supermicro Inc. and SDSC will jointly deploy Voyager, featuring:

Specific early user applications intended for Voyager will include the use of machine learning to improve trigger, event reconstruction, and signal-to-background in high-energy physics; achieving quantum-modeling-level accuracy in molecular simulations in chemistry, biophysics, and material science; and satellite image analysis.

Voyager's three-year testbed phase, focused on select, deep user engagement, will be followed by a minimum of two years of XSEDE allocation.

View post:

PEARC20 Plenary Introduces Five Upcoming NSF-Funded HPC Systems - HPCwire

NIH Awards $6M to UConn Health Biological Computer Modeling Teams – HPCwire

July 28, 2020. Two UConn School of Medicine biological computer modeling groups at UConn Health have won a five-year award worth more than $6 million to continue and enhance their longstanding software resource, committed to supporting cellular biology research throughout the international scientific community.

The National Institute of General Medical Sciences at the National Institutes of Health awarded the funding to Virtual Cell (VCell) and COPASI based on their 20-year record of serving the research community as vital computational resources. The award assures the continued maintenance of both software tools and allows the teams to work together to support their tens of thousands of users.

COPASI is a computer program that shows how a system changes over time, and which factors might affect those changes. Originally designed for biochemistry, it is now used by researchers from many fields, from ecology to cell biology. COPASI has even been used for epidemiology: biochemist Pedro Mendes, one of the original designers of COPASI, is currently using the program to help UConn Health predict how many COVID-19 patients to expect in future weeks.

"The components are molecules, but they could be people. COPASI allows you to define their dynamics, and how they interact," Mendes says.
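COPASI itself is a standalone application, so the snippet below is not COPASI code; it is a minimal Python sketch of the kind of time-course simulation the article describes, using an SIR epidemic model (here the "components" are people rather than molecules) integrated with SciPy. All parameter values are made up for illustration.

    from scipy.integrate import solve_ivp

    def sir(t, y, beta=0.3, gamma=0.1):
        # Susceptible-Infected-Recovered dynamics: the same kind of
        # rate-equation model COPASI solves for biochemical species.
        s, i, r = y
        n = s + i + r
        ds = -beta * s * i / n
        di = beta * s * i / n - gamma * i
        dr = gamma * i
        return [ds, di, dr]

    if __name__ == "__main__":
        # 999 susceptible people, 1 infected, 0 recovered; simulate 160 days.
        solution = solve_ivp(sir, (0, 160), [999, 1, 0], t_eval=range(0, 161, 20))
        for day, infected in zip(solution.t, solution.y[1]):
            print(f"day {day:5.0f}: {infected:7.1f} currently infected")

COPASI adds what a hand-rolled script like this lacks: a model exchange format, parameter estimation, sensitivity analysis and a point-and-click interface for researchers who do not program.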

VCell is a virtual environment that allows cell biologists to explore the spatial dimension of biochemistry in cells. It matters where a chemical reaction takes place, and how the products of that reaction might travel to a remote target; for example, a toxic molecule might be easily disarmed if it encounters a certain area of a particular cell in your kidney, but if it doesn't get there, it could stay toxic. VCell lets scientists incorporate that kind of detail into a model, simulating biochemical reactions coupled to diffusion and transport in the complex geometries of cells and tissues.
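Again, this is not VCell's own code, which handles full 3-D cell geometries; the toy Python sketch below only illustrates what "reactions coupled to diffusion" means, using a 1-D line of compartments in which a substance diffuses while being degraded at a constant rate. The grid size, rate constants and time step are arbitrary assumptions.

    import numpy as np

    D = 0.1       # assumed diffusion coefficient (grid units squared per step)
    K_DEG = 0.05  # assumed first-order degradation rate per step

    def step(conc):
        # Explicit finite-difference update: diffusion between neighboring
        # compartments plus first-order degradation in each compartment.
        left = np.roll(conc, 1)
        right = np.roll(conc, -1)
        left[0], right[-1] = conc[0], conc[-1]   # closed (no-flux) boundaries
        diffusion = D * (left - 2 * conc + right)
        return conc + diffusion - K_DEG * conc

    if __name__ == "__main__":
        conc = np.zeros(50)
        conc[0] = 100.0                  # substance released at one end of the "cell"
        for _ in range(200):
            conc = step(conc)
        print(f"far end after 200 steps: {conc[-1]:.3f}")  # little arrives before degradation

The toxic-molecule example in the text is exactly this kind of race between transport and reaction, which is why a purely time-based model is not enough.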

VCell also keeps a library of biological models in an openly accessible database, so researchers don't have to reinvent the wheel every time they want to model a specific biological process. And it has a dedicated supercomputer, housed at UConn Health, that researchers can access remotely to run their simulations if their own machines don't have enough computing power.

COPASI and VCell are both powerful tools on their own, but together they can do extraordinarily sophisticated things. For example, brain cells can be very long, with many fingerlike dendrites that connect with other brain cells. Researchers might use COPASI to develop a model of how such a brain cell's chemistry changes over time, and then use the COPASI model inside a VCell spatial model of a brain cell to see how the chemistry changes in different areas of the cell.

Researchers from all over the world use COPASI and VCell. Because of this NIH funding, the programs, and UConn Health's supercomputing facilities, are available to anyone who wants to use them and has an internet connection. Maintaining and improving such sophisticated computer models requires a whole team of cell biologists, physicists, programmers, and support staff. The grant will go a long way toward supporting this group and the physical infrastructure that makes biological modeling possible for researchers around the world.

"It's very unusual for a single institution to have this confluence of expertise in a single area," says Les Loew, a professor of cell biology and Director of the Berlin Center for Cell Analysis and Modeling, who heads the VCell team.

Mendes adds: "Both Les and I are pioneers in this field. We began working on simulation in the late 1980s, early 1990s. It's been a lifelong investment. So working together and getting this grant is really satisfying."

Source: Kim Krieger, University of Connecticut

See the original post here:

NIH Awards $6M to UConn Health Biological Computer Modeling Teams - HPCwire

Continental Debuts the Fastest Supercomputer in the Automotive Industry and It’s Built for AI – EnterpriseAI

The automotive industry finds strong use cases for supercomputers, which can help industrial designers do everything from optimizing engines and aerodynamics to running virtual crash tests. Now, German automotive manufacturer Continental is announcing a new leap forward with the debut of the fastest known supercomputer in the automotive industry.

The new supercomputer (as-yet unnamed) is built from more than 50 networked Nvidia DGX nodes, connected by Nvidia Mellanox InfiniBand and based on the Nvidia DGX SuperPOD reference architecture. The DGX nodes (each of which costs, Continental says, about as much as a luxury sports car) are purpose-built for AI, crucial functionality for Continental, which is aiming to strengthen its deep learning chops in order to run smarter simulations and develop future technologies for applications like self-driving cars. Continental expects, for example, that the investment will enable its engineers to run 14 times more autonomous driving experiments.

"The high-end computer will be used for innovative software disciplines such as deep learning and AI-driven simulations," explained Christian Schumacher, head of program management systems in Continental's Advanced Driver Assistance Systems business unit. "With the computing power we have now gained, we can develop the modern systems we need for assisted, automated and autonomous vehicles in a much quicker, more effective and more cost-efficient way. We use these to simulate real-life, physical test drives and need fewer journeys on the actual road as a result. This significantly reduces the time required for programming, including the training of artificial neural networks."

While the system is based in a datacenter in Frankfurt, that location was chosen specifically to enhance its accessibility to cloud providers and, by extension, to Continental's engineers around the world. This tracks with Continental's general business trajectory, with nearly 40 percent of its 51,000 engineers specializing in software and IT, among them nearly a thousand experts in AI (a number expected to double by 2022).

"Software is the oxygen of the industry," said Elmar Degenhart, chairman of Continental's executive board. "It lays the foundation for entirely new services. Value creation with software is recording double-digit percentage growth each year."

The power of the system is a point of pride for Continental, which is comparing it favorably to systems on the Top500 list of the world's most powerful publicly ranked supercomputers (the source of its fastest-in-the-industry claim). Continental also touts the efficiency of the system, with the host datacenter using green energy to power the supercomputer and the system's GPU-driven design proving comparatively efficient.

Header image: Continental's new supercomputer.


Go here to read the rest:

Continental Debuts the Fastest Supercomputer in the Automotive Industry and It's Built for AI - EnterpriseAI

WATCH: Supercomputer generates 3D videos that show how Earth may have lost half of its atmosphere to create th – Business Insider India

Planets aren't created overnight. It takes billions of years of evolution for rocks and gas to come together into what can be called a planetary body. If, in the middle of that formation, something smashes into the planet before it's whole, the collision can have wide-ranging consequences, like atmospheric loss.

The 3D videos created by scientists at Durham University and the University of Glasgow propose two different scenarios: one where the impact is head-on, and another with a grazing impact.


Grazing impacts, like the one that's supposed to be the Moon's origin story, lead to much less atmospheric loss than a head-on collision, according to the study published in the Astrophysical Journal.


"In spite of the remarkably diverse consequences that can come from different impact angles and speeds, we've found a simple way to predict how much atmosphere would be lost, said the lead author of the study, Jacob Kegerries. Advertisement


Go here to read the rest:

WATCH: Supercomputer generates 3D videos that show how Earth may have lost half of its atmosphere to create th - Business Insider India

Supercomputer Market to witness an impressive growth during the forecast period 2020 – 2026 – CueReport

The global Supercomputer market size study report, which factors in the effect of COVID-19, is an in-depth evaluation of present industry conditions and of the overall size of the Supercomputer industry, estimated from 2020 to 2026. The research report also provides a detailed overview of leading industry initiatives, potential market share and business-oriented planning. The study discusses factors favoring current industry conditions, the growth levels of the Supercomputer industry, demand, and the differentiated business approaches used by manufacturers in the Supercomputer industry, along with their distinct tactics and future prospects.

The research report on the Supercomputer market provides a comparative study of historical data against the changing market scenario to reveal the future roadmap of the industry. It offers detailed insights pertaining to the growth markers, challenges and opportunities residing in this industry vertical. A magnified view of the regional landscape and competitive terrain of this business sphere is also encompassed in the document. In addition, the report reevaluates market behavior considering the impact of COVID-19 on the business landscape.

Request Sample Copy of this Report @ https://www.cuereport.com/request-sample/24182

Supercomputer market rundown:


An overview of regional terrain:

Competitive outlook of the Supercomputer market:

Other important takeaways from the Supercomputer market report:

The report answers important questions that companies may have when operating in the global Supercomputer market. Some of the questions are given below:

What will be the size of the global Supercomputer market in 2026?

What is the current CAGR of the global Supercomputer market?

Which product is expected to show the highest market growth?

Which are the top players currently operating in the global Supercomputer market?

Which application is projected to gain a lion's share of the global Supercomputer market?

Will there be any changes in market competition during the forecast period?

Which region is foretold to create the most number of opportunities in the global Supercomputer market?

Request Customization on This Report @ https://www.cuereport.com/request-for-customization/24182

Follow this link:

Supercomputer Market to witness an impressive growth during the forecast period 2020 - 2026 - CueReport

Repeated intelligence failures: Time to worry – The Sunday Guardian


The mother of all pandemics notwithstanding, it is fairly obvious that China put its plans to take on India into place quite some time ago. What is also becoming obvious with every passing day is that on the Indian side, despite a plethora of intelligence agencies, the entire establishment has been caught not just napping but so badly compromised by its failure that it has no choice but to cover up further, creating more and more smoke in the hope that its little empires do not sink. For those in the know, who have been warning that the rot is extremely deep, all they can do is despair at the state of affairs as the pigeons come home to roost.

Keep an eye on the developments in what is known as the Central Sector, where initial, though yet to be officially confirmed, reports suggest that the Chinese have increased their activity in the area opposite Chitkul in Kinnaur district of Himachal Pradesh. As the crow flies, this is not very far from Nelang, the border post north of Harsil in Uttarakhand. Even though the frontier in these areas is demarcated and the international boundaries are well defined, Xi Jinping seems intent on testing the Indians along the entire 3,500 km border. The scale of operations today is much larger, but the pattern being followed seems to be exactly the same as what the Chinese had done in the pre-1962 build-up.

The Kargil War in 1999 was labelled an intelligence failure, and reams were subsequently written on how so-and-so warned this one, and that one warned those, but those who mattered failed to join the dots until, one fine morning, Pakistani sangars and bunkers, built with Indian cement bought from Indian companies in Indian markets, were ready and their occupants were ready to cock a snook at the Indian Army. A couple of months after it was realised that the heights around Drass, Kargil and Batalik had indeed been occupied, and with more than 500 officers and men killed on the Indian side, the surviving intruders, mainly from the Northern Light Infantry, were forced to withdraw across the LOC. India rejoiced. It had won the limited war. We buried the Pakistani dead, returned their eight prisoners and appointed a committee to see what had gone wrong. The two words "intelligence failure" kept cropping up with regular frequency; there were some more debates and a few editorials lamenting the fact that the committee's recommendations were not being implemented, and then it was life as usual.

Post-Kargil there were strategic changes on the Indian side: an area that was earlier held by 121 Independent Brigade now became the responsibility of XIV Corps. The then Home Minister, who was also the Deputy Prime Minister, L.K. Advani, headed a Cabinet Group of Ministers that investigated intelligence lapses during the Kargil War, and on its recommendation a comprehensive reform of intelligence agencies was undertaken. Accordingly, the Defence Intelligence Agency (DIA) was created and formally became operational in March 2002. The DIA was henceforth to coordinate with all three intelligence wings of the Army, Air Force and Navy, and in one of those periodic nods to jointmanship in the armed forces, the director general's post was to be held in rotation between the three armed services. However, since its inception, owing to other reasons, it has only had DGs from the Army.

DIA, which directly came under the Ministry of Defence, was to coordinate further with the Intelligence Bureau (IB), the Research and Analysis Wing (RAW), National Technical Research Organisation (NTRO), Directorate of Revenue Intelligence (DRI) and the National Investigation Agency (NIA). Small matter that in addition to these organisations, others involved in the business of gathering both internal and external intelligence include the Central Bureau of Investigation (CBI), which, apart from functioning as an investigating agency, also gathers intelligence and acts as a liaison with Interpol; the Aviation Research Centre (ARC) under whom come aerial surveillance and reconnaissance flights (PHOTINT), imagery intelligence (IMINT), and signals intelligence (SIGINT) operations; the Shimla-based All India Radio Monitoring Service (ATRMS); the Central Economic Intelligence Bureau (CEIB); and many, many more. If they were to be listed, it would make India not only sound like an extreme police state, it would seem even a mouse could not find a mate without a file being opened on it.

In this complex labyrinth, if we were to further get into who reports to whom and which group is responsible for what, it would perhaps require a supercomputer to decipher the maze, and even then you would only have part of the story. This mammoth network (incidentally, state governments have their own complex bodies), though undoubtedly understaffed and overworked, invariably fails to pick up tell-tale signs and, like the police in Bollywood movies of yore, is always the last to arrive on the scene. On the western front, the sea-borne Mumbai attack was a classic case, and now, across the high Himalayas, with all the eyes supposedly poring over satellite images, maps and photos, the entire Chinese build-up in Ladakh was missed, or perhaps more accurately, not interpreted correctly. In this Alice in Wonderland scenario, what a pity there is no Queen of Hearts to declare "off with their heads!"

Far from it: the magical maze ensures there is actually very little responsibility, and as we move up the narrow funnel to the top, it becomes even more critical for those in power to cover up their blunders. Border management sits with the Home Ministry: the Border Security Force (BSF) is responsible for the Pakistan and Bangladesh borders; the Indo-Tibetan Border Police (ITBP) looks after China; the Sashastra Seema Bal (SSB), with 73 battalions, looks after Nepal and Bhutan; and the Assam Rifles is deployed in the Northeast, where it keeps an eye on the Myanmar border as well without actually guarding it per se. Technically, all come under the operational command of the Army when and if required, but it is common knowledge that all is not well in this marriage either. Fortunately, the harebrained proposal to merge the Assam Rifles, perhaps one of the best paramilitary organisations in the world, with the ITBP has been shelved for the time being. Given the way turf wars play out, it will be revived sooner or later yet again.

Maybe there are valid and straightforward answers to these questions if they are asked, but surely, apart from the movement of three Chinese divisions for the purported high-altitude exercises, someone, somewhere would have noticed the additional stocking up required to sustain these troops for a longer period of time. A back-of-the-envelope calculation would suggest upward of 3 lakh (300,000) tons of material just to create the infrastructure. And let us face it: unlike our boys in the paramilitary and even in the Army, who are often moved and expected to fight with what they have, the Chinese, be it in their accommodation, vehicles, winter clothing and so on, are not exactly following our standards when it comes to defining the happiness quotient.

Unfortunately, in covering up for this big failure, and combined with the need to always appear on top of the other side, transparency went out of the window, opening the doors to something else the Chinese have perfected: the weaponization of dissent. The cacophony of defence experts and defence analysts who took over the print media and the airwaves to demolish whatever little credibility the government had was nothing new. In the pre-1962 build-up, though thankfully television was not there, the Chinese had worked the media in a manner where a sizeable population of India was festooning the complex path of international diplomacy with land mines. Nehru's comment that we shall "throw the Chinese out", made at the airport as he left for Sri Lanka just before the conflict, was then used by the PRC as a virtual declaration of war.

We can sigh, roll our eyes and say, as we repeatedly do, that these are the pitfalls of democracy, but we are playing with fire. The fact of the matter is that in 2012, in what one can only describe as a bizarre decision, it was decided that the Armed Forces would henceforth be entrusted only with human intelligence (HUMINT), and all technical intelligence (TECHINT) would be the responsibility of other agencies. The one agency set up as an ad hoc unit after it was realised that there was no covert capability to strike back at Pakistan after the Mumbai attack, the much-maligned Technical Services Division (TSD), was amazingly declared a rogue organisation and disbanded by the very people it was serving.

The TSD was exposed in the media in an orchestrated manner by vested interests at the very top of the Army, but its demise also suited many others who, despite operating with humongous budgets, were falling short of the results being put on the table by this small band of officers and men. Forget about RAW and IB, who on their official web pages very rightly say their budgets are classified; the DIA and NTRO are packed with officers (quite a few of them re-employed) who have done some imagery course and for whom these tenures are a Delhi posting, a nine-to-five job during which their own post-retirement life takes precedence over everything else. I am not echoing some disgruntled voices; one hears this lament repeatedly from those who are in the know. If that is letting out a national classified secret, well, so be it.

CHINESE WILL STAY THROUGH THE WINTER

It should be pretty obvious by now that whatever the outcome of the disengagement talks, the Chinese are going to stay in Eastern Ladakh through the winter, which will throw up its own challenges. The gradual expansion of probes will continue, be it in Himachal, Garhwal, Kumaon, Nepal, Sikkim, Bhutan or Arunachal. The reiteration of their claim on Eastern Bhutan, and the chance that they will follow exactly the same pattern of aggression as in 1962, make the entire border from the Karakoram Pass in the west to Kibithoo in the east a burning hot potato (which is ironic, given the freezing temperatures across this entire zone).

How much time India has before something gives on the border a la Galwan, no one can tell, but there are immediate areas of concern that need to be addressed by the one man who today calls the shots, hopefully even if it concerns those in his immediate decision-making circle. Repeated intelligence failures cannot be swept under the carpet, and accountability has to be demanded.

On the ground today we have three different Army commanders dealing with the Chinese, plus three Air Force commands and various paramilitary headquarters, each with their own pulls and pressures. In addition, two other countries are also involved in the standoff. It is imperative that the flow of information be seamless and that all differences be sorted out. Enough studies and papers have been written on integrated command systems, and though the fault-lines have been created over the years, every resource must now be brought to bear in an optimal manner, by tackling these issues, to counter the growing threat from the Chinese dragon.

When he was the Home Minister, P. Chidambaram set up the Multi Agency Centre (MAC), wherein representatives of all intelligence agencies met on a daily basis to share information, but this was more or less entirely terrorism-centric. NATGRID had in fact been created to allow information to be shared on a real-time basis, but then again, in a strange quirk of inverted logic, in the latter half of 2012 it was decided to take the Army out of this loop. With the growing multi-dimensional threat emerging not only from China but from Pakistan as well, these anomalies have to be corrected. We have to remember that once milk spills out of the bottle, there is no way to put it back again.

Shiv Kunal Verma is the author of the highly acclaimed 1962: The War That Wasn't and The Long Road to Siachen: The Question Why.

Visit link:

Repeated intelligence failures: Time to worry - The Sunday Guardian

What is supercomputer? – Definition from WhatIs.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
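The distinction between the two approaches is architectural (shared memory versus many independent nodes passing messages), but the basic programming idea, splitting one workload across many processors, can be shown in a few lines of Python. The sketch below uses a process pool on a single multi-core machine, so it is only a stand-in for what real SMP or MPP systems do at vastly larger scale.

    from multiprocessing import Pool

    def partial_sum(bounds):
        # Each worker sums its own slice of the range independently.
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        chunk = n // workers
        slices = [(i * chunk, (i + 1) * chunk) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, slices))
        print(total == n * (n - 1) // 2)  # True: parallel result matches the closed-form sum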

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China. A few statistics on TaihuLight:

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of $60 million today. The CDC could handle three million floating point operations per second (flops).
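Those two figures from this article, 3 million flops for the CDC 6600 and 93.01 petaflops for the Sunway TaihuLight, give a feel for how far peak performance has come; a two-line calculation makes the ratio explicit.

    cdc_6600_flops = 3e6          # three million flops (1964)
    taihulight_flops = 93.01e15   # 93.01 petaflops (2016)
    print(f"TaihuLight is roughly {taihulight_flops / cdc_6600_flops:.1e} times faster")
    # prints approximately 3.1e+10, i.e. about 31 billion times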

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any of the other supercomputers at that time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Year   Supercomputer        Peak speed (Rmax)                      Location
2016   Sunway TaihuLight    93.01 PFLOPS                           Wuxi, China
2013   NUDT Tianhe-2        33.86 PFLOPS                           Guangzhou, China
2012   Cray Titan           17.59 PFLOPS                           Oak Ridge, U.S.
2012   IBM Sequoia          17.17 PFLOPS                           Livermore, U.S.
2011   Fujitsu K computer   10.51 PFLOPS                           Kobe, Japan
2010   Tianhe-1A            2.566 PFLOPS                           Tianjin, China
2009   Cray Jaguar          1.759 PFLOPS                           Oak Ridge, U.S.
2008   IBM Roadrunner       1.026 PFLOPS (later 1.105 PFLOPS)      Los Alamos, U.S.

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnect the processors with Fast Ethernet. Applications must be written to manage the parallel processing.
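On a Beowulf-style cluster, that application-level parallelism is typically expressed with message passing. The fragment below is a minimal sketch using the mpi4py bindings for MPI (an assumed tooling choice; the article does not name a library): each node sums its own slice of a range and rank 0 collects the total. It would be launched across the cluster with something like "mpiexec -n 4 python partial_sums.py", where the script name is just an example.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's ID within the cluster job
    size = comm.Get_size()      # total number of processes launched

    # Each rank sums a disjoint slice of 0..9,999,999.
    n = 10_000_000
    chunk = n // size
    start = rank * chunk
    stop = n if rank == size - 1 else start + chunk
    local = sum(range(start, stop))

    # Combine the partial sums on rank 0 over the cluster network.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(total)  # 49999995000000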

Link:

What is supercomputer? - Definition from WhatIs.com