Why You Can’t Call in an Air Strike with an iPhone – War on the Rocks

Christian Brose, The Kill Chain: Defending America in the Future of High-Tech Warfare (Hachette Books, 2020)

Between 1996 and 2011, the U.S. military spent $6 billion to develop and field a new tactical radio. Even in the context of U.S. military spending, $6 billion is a big chunk of change. By comparison, the Air Force spent approximately $3 billion to develop and procure the pathbreaking MQ-1 Predator. The Predator ushered in a new era of drone warfare, whereas the tactical radio project was cancelled before it could produce a single radio.

Harris Communications was one of the companies that hoped to win the contract for this program. The leaders at Harris foresaw the enormous technological and program management challenges that awaited the winner. Harris didn't win the contract, but that didn't stop the company from taking advantage of the opportunity. It invested $200 million of its own research and development dollars to develop a radio system with less ambitious performance goals, one that would be unencumbered by unwieldy Pentagon acquisition regulations. Harris succeeded in 2008 with the PRC-117G radio, which supports a modest tactical voice and data network that has since become the workhorse standard for the Army and Marine Corps.

The Harris radio has many of the hallmarks that proponents of greater commercial technology in the U.S. military believe in: namely, that nimble commercial firms can be more effective if they are less constrained by the Pentagon's cumbersome acquisition bureaucracy. Although Harris is not a Silicon Valley start-up, the success of its radio when compared to the more ambitious radio-that-never-was illustrates the problems of the acquisition system and highlights the attractiveness of letting technologists have more freedom to work.

The discourse about emerging technology in the Department of Defense today is centered around the military potential of commercially developed information technology. If anything, the role of commercial technology is clearer today than it was in 1996. Pivotal, a software company, worked with the Air Force's Kessel Run program to transform tanker refueling schedules with easy-to-use software. Artificial intelligence-driven drones have the potential to overwhelm defenses, while quantum sensors can detect even stealthy submarines beneath the waves by their minute gravitational signatures on the wavetops. The military potential for these and other commercially developed technologies is substantial.

For many technologists eager to help the U.S. military, though, the conversation is often tinged with a sense of frustration that the military does not adopt commercial technology as readily as they believe it should. This frustration often focuses on the role of the Department of Defense's acquisition bureaucracy. Congress, for instance, has asked pointed questions to the Army about why it was reluctant to adopt the commercially developed Palantir intelligence analysis software system. And former Google CEO Eric Schmidt once proclaimed to the head of U.S. Special Operations Command that "if I got under your tent for a day, I could solve most of your problems." Tesla and SpaceX CEO Elon Musk confidently stated earlier this year that a manned F-35 would be no match against a semi-autonomous drone in air-to-air combat. There is a strong sense among interested technologists that breakthroughs in the commercial sector will be critical to warfare in the future and that overly restrictive Pentagon processes and a stodgy culture are impediments to that future.

The frustrations of commercial technologists should concern the Department of Defense. The under secretary of defense for research and engineering's modernization priorities include artificial intelligence, biotechnology, and other technologies where the commercial sector is leading development efforts. It is clear that commercial technology companies will be an important part of an expanded defense industrial base, giving weight to technologists' concerns. Some firms may find the defense sector to be an economically challenging market; a reputation for frustrating red tape may make it even less attractive. Most concerning, though, is that blaming Pentagon bureaucracy is an easy conclusion to draw, and one that offers little hope that the situation will improve. Such a conclusion obscures deeper exploration into the reasons why commercial technology is not more readily adopted by the military.

Is The System the Only Obstacle?

There is no shortage of criticism of the defense acquisition bureaucracy, but is that the only reason why troops aren't calling in air strikes from iPhones and using artificial intelligence to control drone swarms? Two other reasons might also be considered: first, adapting commercial technology for military purposes is harder than it seems; and second, the military might not be fully convinced that available commercial technologies are what it wants.

Christian Brose's new book, The Kill Chain: Defending America in the Future of High-Tech Warfare, is an insightful analysis of the bureaucratic obstacles to the adoption of commercial technologies by the U.S. military. His years of experience on Capitol Hill imbue his book with a sense of context that moves the conversation past the frustrations that technologists have expressed. He deftly describes the bureaucratic and political power structures and incentives that keep the U.S. military from more readily integrating commercial technology. It is a powerful contribution to the conversation about technology and defense.

Brose's critiques are more nuanced than those of many frustrated technologists. However, he still confines his arguments to issues about the political incentive structure and acquisition bureaucracy. To keep advancing the conversation, we should consider these two possible obstacles along with Brose's critique of the bureaucracy.

The Bureaucracy Is Imposing Obstacles

In his book, Brose argues that the Pentagon's organization, process, and incentives are preventing commercial technology from taking root in the military. He argues that commercial information technologies such as artificial intelligence will define the future of conflict and that the United States is underinvesting both financially and organizationally in those technologies. Meanwhile, Brose argues, America's adversaries have watched, learned, and stolen a march on new technologies, including artificial intelligence, quantum computing, biotechnology, and space systems.

Brose offers a well-thought-out diagnosis of why this underinvestment exists, even though the United States correctly envisioned the role that commercial information technologies would play as far back as the early 1990s. America's hubris about its supremacy made it slow to act, he argues, as did a two-decade counter-insurgency and counter-terrorism odyssey that distracted the United States from making progress. Brose further argues that the Pentagon is incentivized to value stakeholder consensus over decisiveness, with a budgeting process that favors incumbent programs over new ones and an acquisition system that favors process compliance over effective outcomes. The result, he believes, is a defense establishment that is unable to change course until it is too late.

Brose's observations and arguments about the organizational hurdles to greater commercial technology adoption by the U.S. military force us to reflect on the values for which the acquisition bureaucracy strives. For instance, his analysis of the acquisition system's prodigious regulatory burdens, which exist to ensure fair competition and save money, forces readers to question the purpose of all the red tape: Is saving pennies worth the trouble when the future of U.S. national security is at stake? Brose believes that, when it comes to confronting emerging great powers with chips on their shoulders and serious military technology ambitions, the United States has done what it did during the Cold War when it "pick[ed] winners": the people who could succeed where others could not, and the industrialists who could quickly build amazing technology that worked. Other concerns, such as fairness and efficiency, were of secondary importance.

However, there is something to be said for fairness and efficiency. Done right, fair competition yields a diversity of approaches that is more likely to prepare the United States to endure the shocks and surprises of clever and adaptive adversaries. Even ballistic missile pioneer Bernard Schriever, one of Brose's picked winners, hedged his bets by pursuing multiple approaches that yielded the Atlas and Titan missiles. Cost-effectiveness is also underrated. America's national resources are finite, and the ongoing COVID-19 pandemic is only one example that should encourage reflection on budget priorities. Brose correctly diagnoses the ills of the defense acquisition bureaucracy, but its goals are still worthwhile. Brose is right that mindless adherence to acquisition rules without considering the wider context wastes time and effort. One might be better served by continuing the hard work of reforming the bureaucracy, not sidestepping it.

Defense Technology Is Harder Than It Looks

Another reason why emerging commercial technologies may not be more readily adopted by the Pentagon is that adapting such technology for military use may be harder than it seems. This is a distinct possibility. Maaike Verbruggen argues that military expectations for artificial intelligence should be tempered. Artificial intelligence is not yet capable of performing subjective tasks where judgment is required; for instance, it still struggles to accurately flag disinformation. Recent strides in autonomous vehicles are encouraging, but technical challenges remain, and making them cost-effective enough for widespread military use will be a significant hurdle. Building a single, robust tactical network to link platforms also remains a much more difficult challenge than it seems. Commercial technologies being adapted for military use might be less technically risky since they are perfected in commercial settings. But, while military performance requirements are often more demanding than commercial ones, the fundamental challenge of being pitted against an actively plotting adversary remains. Brose does not seem to address these issues either.

Technology May Not Even Be the Answer

Finally, we must consider the possibility that the role envisioned for commercial technology within the U.S. military may not be desirable in the first place. Brose offers a very specific vision of how artificial intelligence, quantum computing, and networked systems should be wielded by the United States. He paints a detailed picture of sensors that locate adversaries with impunity, a battlefield cluttered with disposable unmanned systems, and networks that will accelerate the tempo of operations to new highs.

This optimistic vision is enthralling, but should it be the goal for which the U.S. military strives? A battlefield network that seamlessly links together sensors and shooters will accelerate the operational tempo when it works. How will an adaptive adversary seek to disrupt that network and turn its advantage into a liability? How will commanders leverage such connectivity? What role should artificial intelligence play? Will technology enhance initiative and decision-making, further enable micromanagement, or something else?

Brose tangentially examines these issues but only as they concern artificial intelligence and the ethics of armed conflict. He offers a refreshingly nuanced vision of an artificial intelligence that would enhance the abilities of human decision-makers and refrain from making the decisions itself. He forthrightly acknowledges the technological challenges of achieving that ideal. He considers the role of trust and artificial intelligence in military decision-making.

But Brose never really questions the role of commercial technology and its effect on war in the first place. He admits that the fog of war will never truly lift but still walks readers through a vision of networked warfare where he believes that it does. Some within the defense community urge greater caution about the enthralling vision of networked warfare. Laura Schousboe, B. A. Friedman, and Olivia Garard have argued that the ultimate role of emerging technologies is still unclear. The interaction of humans, both friendly and enemy, and systems should be deliberately considered. Commercial technology is likely to play a significant role in future conflict, but the Pentagon should guard against too much optimism.

No Plan Survives First Contact With the Enemy

Brose has made an important contribution to the debate about commercial technology and the military. He sees the throughline between technologies, their military and political uses, and the domestic organizational and political landscapes. He understands that warfare is an inherently chaotic human endeavor that can defy the expectations of optimistic technologists. Kill Chain pulls it all together in an admirable way, and I hope Brose uses his deep knowledge of defense technology issues to explore the obstacles outlined here as well as other ones.

However, technologists and those who share their views should be cautious about how the future of armed conflict will play out. The vision of future war that Brose and others imagine is compelling, but the United States won't truly know how it will play out until a crisis arrives. The same is true for U.S. adversaries. Emerging commercial technologies will play a role, but the military may wish to consider additional steps to make their adoption more effective in the face of such uncertainty.

For instance, the military might consider reforming the requirements process to address the issues of desirability and implementation. Reforming requirements might help the Pentagon fully leverage the flexibility offered by the updated acquisition regulation. This sort of reform can bring clarity to the most useful intersections between emerging technologies and the military, which can also keep cost, schedule, and performance expectations in line with reality.

The Department of Defense can also prepare for inevitable surprises. Richard Danzig observed that predictions about the future of war are consistently wrong. It is better to be circumspect about the nature of future conflicts and prepare for predictive failures. The continued attention to rapid acquisition processes is an encouraging sign. Past experiences with quick responses to unforeseen adversary capabilities also offer lessons to learn.

The radio that Harris Communications built was neither perfect nor the best radio that people could imagine at the time. However, it provided capabilities that were sorely lacking. Its designers accomplished this by combining an understanding of what was technologically possible with a clear grasp of the performance requirements that were most important to users. As the Pentagon and commercial technologists continue to explore the potential of commercial technologies for the military and work towards greater adoption, they may wish to focus not only on lowering bureaucratic barriers but also on managing expectations about what technologies will be most beneficial and how they will be used.

Jonathan Wong is an associate policy researcher at the non-profit, non-partisan RAND Corporation and a non-resident fellow at Marine Corps University's Krulak Center for Innovation and Creativity. He can be found on Twitter @jonpwong.

Image: U.S. Air Force (Photo by Staff Sgt. Izabella Workman)


Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic – PRNewswire

STOCKHOLM, June 29, 2020 /PRNewswire/ -- RaySearch Laboratories AB (publ) has announced that by using a machine learning algorithm in the treatment planning system RayStation, Mälar Hospital in Eskilstuna, Sweden, has made significant time savings in dose planning for radiation therapy. The algorithm in question is a deep learning method for contouring the patients' organs. The decision to implement this advanced technology was made to save time, thereby alleviating the prevailing shortage of doctors specialized in radiation therapy at the hospital - which was also exacerbated by the COVID-19 situation.

When creating a plan for radiation treatment of cancer, it is critical to carefully define the tumor volume. In order to avoid unwanted side-effects, it is also necessary to identify different organs in the tumor's environment, so-called organs at risk. This process is called contouring and is usually performed using manual or semi-automatic tools.

The deep learning contouring feature in RayStation uses machine learning models that have been trained and evaluated on previous clinical cases to create contours of the patient's organs automatically and quickly. Healthcare staff can review and, if necessary, adjust the contours. The final result is reached much faster than with other methods.
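The press release doesn't detail the implementation, but the workflow it describes, a model trained on previously contoured cases proposing organ contours that staff then review and adjust, can be sketched roughly as follows. The organ labels, the stand-in model, and the review step below are illustrative assumptions, not RayStation's actual API.

```python
# Hedged sketch of an auto-contouring workflow like the one described above.
# The organ labels and the stand-in "model" are illustrative assumptions;
# this is not RayStation's API.
import numpy as np

ORGANS = ["heart", "left_lung", "right_lung", "spinal_cord"]  # assumed labels

def predict_organ_masks(ct_volume):
    """Stand-in for a trained deep learning segmentation model.

    A real system would run a trained 3D network here; we simply threshold
    intensities so the sketch runs end to end.
    """
    return {organ: ct_volume > i * 200 for i, organ in enumerate(ORGANS, start=1)}

def auto_contour(ct_volume):
    masks = predict_organ_masks(ct_volume)
    # Contours would normally be extracted slice by slice; here we just report
    # voxel counts so staff can sanity-check and adjust the proposals.
    return {organ: int(mask.sum()) for organ, mask in masks.items()}

if __name__ == "__main__":
    scan = np.random.randint(0, 1000, size=(64, 128, 128))  # fake CT volume
    for organ, voxels in auto_contour(scan).items():
        print(f"{organ}: {voxels} voxels proposed for review")
```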

Andreas Johansson, physicist at Region Sörmland, which runs Mälar Hospital, says: "We used deep learning to contour the first patient on May 26 and the treatment was performed on June 9. From taking 45-60 minutes per patient, the contouring now only takes 10-15 minutes, which means a huge time saving."

Johan Löf, founder and CEO, RaySearch, says: "Mälar Hospital was very quick to implement RayStation in 2015 and now it has shown again how quickly new technology can be adopted and brought into clinical use. The fact that this helps to resolve a situation where hospital resources are unusually strained is of course also very positive."


Fake data is great data when it comes to machine learning – Stacey on IoT

It's been a few years since I last wrote about the idea of using synthetic data to train machine learning models. After having three recent discussions on the topic, I figured it's time to revisit the technology, especially as it seems to be gaining ground in mainstream adoption.

Back in 2018, at Microsoft Build, I saw a demonstration of a drone flying over a pipeline as it inspected it for leaks or other damage. Notably, the drone's visual inspection model was trained using both actual data and simulated data. Use of the synthetic data helped teach the machine learning model about outliers and novel conditions it wasn't able to encounter using traditional training. It also allowed Microsoft researchers to train the model more quickly and without the need to embark on as many expensive, data-gathering flights as it would have had to otherwise.

The technology is finally starting to gain ground. In April, a startup called Anyverse raised €3 million ($3.37 million) for its synthetic sensor data, while another startup, AI.Reverie, published a paper about how it used simulated data to train a model to identify planes on airport runways.

After writing that initial story, I heard very little about synthetic data until my conversation earlier this month with Dan Jeavons, chief data scientist at Shell. When I asked him about Shell's machine learning projects, using simulated data was one that he was incredibly excited about because it helps build models that can detect problems that occur only rarely.

"I think it's a really interesting way to get info on the edge cases that we're trying to solve," he said. "Even though we have a lot of data, the big problem that we have is that, actually, we often only had a very few examples of what we're looking for."

In the oil business, corrosion in factories and pipelines is a big challenge, and one that can lead to catastrophic failures. That's why companies are careful about not letting anything corrode to the point where it poses a risk. But that also means the machine learning models can't be trained on real-world examples of corrosion. So Shell uses synthetic data to help.

As Jeavons explained, Shell is also using synthetic data to try and solve the problem of people smoking at gas stations. Shell doesn't have a lot of examples because the cameras don't always catch the smokers; in other cases, they're too far away or aren't facing the camera. So the company is working hard on combining simulated synthetic data with real data to build computer vision models.

"Almost always the things we're interested in are the edge cases rather than the general norm," said Jeavons. "And it's quite easy to detect the edge [deviating] from the standard pattern, but it's quite hard to detect the specific thing that you want."

In the meantime, startup AI.Reverie endeavored to learn more about the accuracy of synthetic data. The paper it published, RarePlanes: Synthetic Data Takes Flight, lays out how its researchers combined satellite imagery of planes parked at airports that was annotated and validated by humans with synthetic data created by machine.

When using just synthetic data, the model was only about 55 percent accurate, whereas when it only used real-world data that number jumped to 73 percent. But by making real-world data 10 percent of the training sample and using synthetic data for the rest, the model's accuracy came in at 69 percent.
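The paper's exact training setup isn't reproduced here, but the blending step it describes, keeping real imagery to about 10 percent of the training sample and filling the rest with synthetic examples, might look something like this sketch. The loader names, sizes, and seed are assumptions for illustration.

```python
# Hedged sketch of blending a small amount of real data with synthetic data,
# in the spirit of the mix described above. Names, sizes, and the sampling
# strategy are illustrative assumptions.
import random

def build_training_set(real_samples, synthetic_samples, real_fraction=0.10,
                       total_size=10_000, seed=42):
    """Draw `real_fraction` of the training set from real data, rest synthetic."""
    rng = random.Random(seed)
    n_real = int(total_size * real_fraction)
    mixed = (rng.choices(real_samples, k=n_real) +
             rng.choices(synthetic_samples, k=total_size - n_real))
    rng.shuffle(mixed)
    return mixed

# Usage with placeholder records standing in for annotated images:
real = [f"real_{i}" for i in range(500)]
synthetic = [f"synthetic_{i}" for i in range(20_000)]
train = build_training_set(real, synthetic)
print(sum(s.startswith("real_") for s in train), "real samples out of", len(train))
```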

Paul Walborsky, the CEO of AI.Reverie (and the former CEO at GigaOM; in other words, my former boss), says that synthetic data is going to be a big business. Companies using such data need to account for ways that their fake data can skew the model, but if they can do that, they can achieve robust models faster and at a lower cost than if they relied on real-world data.

So even though IoT sensors are throwing off petabytes of data, it would be impossible to annotate all of it and use it for training models. And as Jeavons points out, those petabytes of data may not have the situation you actually want the computer to look for. In other words, expect the wave of synthetic and simulated data to keep on coming.

"We're convinced that, actually, this is going to be the future in terms of making things work well," said Jeavons, "both in the cloud and at the edge for some of these complex use cases."


How Does AIOps Integrate AI and Machine Learning into IT Operations? – Analytics Insight

Data is everywhere, growing in variety and velocity across both structured and unstructured formats. Leveraging this chaotic data, generated at ever-increasing speeds, is often a mammoth task. Even powerful AI and machine learning capabilities lose their accuracy if they don't have the right data to support them. The rise in data complexity makes it challenging for IT operations to get the best from artificial intelligence and ML algorithms for digital transformation.

The secret lies in acknowledging this data and using its explosion as an opportunity to drive intelligence, automation, effectiveness and productivity with artificial intelligence for IT operations (AIOps). In simple words, AIOps refers to the automation of IT operations using artificial intelligence (AI), freeing enterprise IT operations to turn operational data inputs into the ultimate data automation goals.

AIOps of any enterprise stands firmly on four pillars, collectively referred to as the key dimensions of IT operations monitoring:

Data Selection & Filtering

Modern IT environments create noisy IT data; collating this data and filtering it for Excel, AI and ML models is a tedious task. Taking massive amounts of redundant data and selecting the data elements of interest often means filtering out up to 99% of the data.

Discovering Data Patterns

Unearthing data patterns means collating the filtered data to establish meaningful relationships between the selected data groups for further analysis.

Data Collaboration

Data analysis fosters collaboration among interdisciplinary teams across global enterprises, besides preserving valuable data intelligence that can accelerate future synergies within the enterprise.

Solution Automation

This dimension relates to automating data responses and remediation, in a bid to deliver more precise solutions at a quicker turnaround time (TAT).

A responsible AIOps platform combines AI, machine learning and big data with a mature understanding of IT operations. It assimilates real-time and historical data from any source to feed cutting-edge AI and ML capabilities. This makes it possible for enterprises to get a hold of problems before they even happen by leveraging clustering, anomaly detection, prediction, statistical thresholding, predictive analytics, forecasting, and more.
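As a rough illustration of the statistical thresholding mentioned above (not any particular vendor's implementation), an AIOps pipeline might flag a metric reading that drifts several standard deviations away from a rolling baseline. The window size and threshold below are assumed values.

```python
# Hedged sketch of statistical thresholding for anomaly detection: flag metric
# readings that drift more than a few standard deviations from a rolling
# baseline. Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=30, n_sigmas=3.0):
    baseline = deque(maxlen=window)
    anomalies = []
    for t, value in enumerate(readings):
        if len(baseline) >= window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and abs(value - mu) > n_sigmas * sigma:
                anomalies.append((t, value))
        baseline.append(value)
    return anomalies

# Usage with a synthetic CPU-load series containing one injected incident:
series = [50 + (i % 5) for i in range(100)]
series[80] = 95  # simulated spike
print(detect_anomalies(series))  # -> [(80, 95)]
```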

IT environments have broken out of their silos and currently exceed the scale of manual, human operations. Traditional approaches to managing IT become redundant in the dynamic environments governed by technology.

1. The data pipelines that ITOps needs to retain are increasing exponentially, encompassing a larger number of events and alerts. With the introduction of APIs, digital or machine users, mobile applications, and IoT devices, modern enterprises receive higher service ticket volumes, a trend that is becoming too complex for manual reporting and analysis.

2. As organizations walk the digital transformation path, seamless ITOps becomes indispensable. The accessibility of technology has changed user expectations across industries and verticals. This calls for an immediate reaction to IT events, especially when an issue impacts user experience.

3. The introduction of edge computing and cloud infrastructure empowers line-of-business (LOB) functions to build and host their own IT solutions and applications in the cloud, to be accessed anytime, anywhere. This calls for an increase in budgetary allocation and for more computing power (that can be leveraged) to be added from outside core IT.

AIOps bridges the gap between service management, performance management, and automation within the IT ecosystem to accomplish the continuous goal of improving IT operations. AIOps creates a game plan that delivers within the new, accelerated IT environments, identifying patterns in monitoring, service desk, capacity addition and data automation across hybrid on-premises and multi-cloud environments.


Kamalika Some is an NCFM level 1 certified professional with previous professional stints at Axis Bank and ICICI Bank. An MBA (Finance) and PGP Analytics by education, Kamalika is passionate about writing on analytics driving technological change.


What a machine learning tool that turns Obama white can (and cant) tell us about AI bias – The Verge

It's a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It's not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: "This image speaks volumes about the dangers of bias in AI."

But what's causing these outputs and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the "zoom and enhance" tropes you see in TV and film, but, unlike in Hollywood, real software can't just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you're probably familiar with its work. It's the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they're often used to generate fake social media profiles.

What PULSE does is use StyleGAN to imagine the high-res version of pixelated inputs. It does this not by enhancing the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each depixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It's also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It's not that the algorithm is finding new detail in the image as in the "zoom and enhance" trope; it's instead inventing new faces that revert to the input data.
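In simplified terms, PULSE searches the generator's input space for a face whose downscaled version matches the low-res photo. The sketch below illustrates that idea with a stand-in generator and a naive random search; it is not the released PULSE code, and the latent size, loss, and generator are assumptions.

```python
# Simplified sketch of the PULSE idea described above: search generator inputs
# for a face whose downscaled version matches the low-res photo. The "generator"
# here is a stand-in, not StyleGAN, and the search is plain random sampling.
import numpy as np

def generator(latent):
    """Stand-in for StyleGAN: maps a latent vector to a fake 32x32 'face'."""
    rng = np.random.default_rng(abs(hash(latent.tobytes())) % (2**32))
    return rng.random((32, 32))

def downscale(img, factor=4):
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pulse_like_search(low_res, tries=500):
    best, best_loss = None, float("inf")
    for _ in range(tries):
        z = np.random.randn(512)                  # candidate latent vector
        candidate = generator(z)                  # imagined high-res face
        loss = np.mean((downscale(candidate) - low_res) ** 2)
        if loss < best_loss:                      # keep whichever candidate best
            best, best_loss = candidate, loss     # matches the input when pixelated
    return best

low_res_input = np.random.random((8, 8))          # pretend pixelated photo
print(pulse_like_search(low_res_input).shape)     # (32, 32) "upscaled" face
```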

This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That's when the racial disparities started to leap out.

PULSE's creators say the trend is clear: when using the algorithm to scale up pixelated images, the algorithm more often generates faces with Caucasian features.

"It does appear that PULSE is producing white faces much more frequently than faces of people of color," wrote the algorithm's creators on Github. "This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of."

In other words, because of the data StyleGAN was trained on, when it's trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it's one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it's white men who dominate AI research.
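A hedged sketch of how such skew is usually surfaced: compute accuracy per demographic group rather than as one aggregate number, so the gap becomes visible. The group labels and predictions below are fabricated purely for illustration.

```python
# Hedged sketch: disaggregating accuracy by demographic group, which is how
# the performance gaps described above are usually measured. All data below
# is made up for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

sample = (
    [("group_a", "match", "match")] * 90
    + [("group_a", "match", "no_match")] * 10
    + [("group_b", "match", "match")] * 70
    + [("group_b", "no_match", "match")] * 30
)
print(accuracy_by_group(sample))  # reveals the per-group gap an aggregate hides
```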

But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they're so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren't sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below:

"These faces were generated using the same concept and the same StyleGAN model but different search methods to Pulse," says Klingemann, who says we can't really judge an algorithm from just a few samples. "There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally correct," he told The Verge.

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it's not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased, something that the researchers didn't notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. "Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems," says Raji. "People of color are not outliers. We're not 'edge cases' authors can just forget."

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook's chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that ML systems are biased when data is biased, and adding that this sort of bias is a far more serious problem in a deployed product than in an academic paper. The implication being: let's not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun's framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using "correct" data does not deal with the larger injustices.

Others noted that even from the point of view of a purely technical fix, "fair" datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, "fair" datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)

Raji tells The Verge she was also surprised by LeCun's suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

"Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize," says Raji. "I literally cannot understand how someone in that position doesn't acknowledge the role that research has in setting up norms for engineering deployments."

When contacted by The Verge about these comments, LeCun noted that he'd helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. "I absolutely never, ever said or even hinted at the fact that research does not play a role in setting up norms," he told The Verge.

Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn't that it exposes a single flaw in a single algorithm; it's that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It's a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: "In case it needed to be said explicitly - This isn't a call for diversity in datasets or improved accuracy in performance - it's a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place."

Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.


Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills – CNBC

Google Executive Chairman Eric Schmidt

Win McNamee | Getty Images

Schmidt Futures, the philanthropic foundation set up by billionaires Eric and Wendy Schmidt, is funding a new program at the University of Cambridge that's designed to equip young researchers with machine learning and artificial intelligence skills that have the potential to accelerate their research.

The initiative, known as the Accelerate Program for Scientific Discovery, will initially be aimed at researchers in science, technology, engineering, mathematics and medicine. However, it will eventually be available for those studying arts, humanities and social science.

Some 32 PhD students will receive machine-learning training through the program in the first year, the university said, adding that the number will rise to 160 over five years. The aim is to build a network of machine-learning experts across the university.

"Machine learning and AI are increasingly part of our day-to-day lives, but they aren't being used as effectively as they could be, due in part to major gaps of understanding between different research disciplines," Professor Neil Lawrence, a former Amazon director who will lead the program, said in a statement.

"This program will help us to close these gaps by training physicists, biologists, chemists and other scientists in the latest machine learning techniques, giving them the skills they need."

The scheme will be run by four new early-career specialists, who are in the process of being recruited.

The Schmidt Futures donation will be used partly to pay the salaries of this team, which will work with the university's Department of Computer Science and Technology and external companies.

Guest lectures will be provided by research scientists at DeepMind, the London-headquartered AI research lab that was acquired by Google.

The size of the donation from Schmidt Futures has not been disclosed.

"We are delighted to support this far-reaching program at Cambridge," said Stuart Feldman, chief scientist at Schmidt Futures, in a statement. "We expect it to accelerate the use of new techniques across the broad range of research as well as enhance the AI knowledge of a large number of early-stage researchers at this superb university."


Deliver More Effective Threat Intelligence with Federated Machine Learning – SC Magazine

Cybercriminals never stop innovating. Their increased use of automated and scripted attacks that increase speed and scale makes them more sophisticated and dangerous than ever. And because of the volume, velocity and sophistication of today's global threat landscape, enterprises must respond in real-time and at machine speeds to effectively counter these aggressive attacks. Machine learning and artificial intelligence can help deliver better, more effective threat intelligence.

As we move through 2020, AI has started increasing its capacity to detect attack patterns using a combination of threat intelligence feeds delivered by a variety of external sources, ranging from vendors to industry consortiums, and distributed sensors and learning nodes that gather information about the threats and probes targeting the edges of the networks.

This new form of distributed AI relies on something called federated machine learning. Instead of relying on a single, centralized AI system to process data and initiate a response to threats (like in centralized AI), these regional machine learning nodes will respond to threats autonomously using existing threat intelligence. Just as white blood cells automatically react to an infection, and clotting systems respond to a cut without requiring the brain to initiate those responses, these interconnected systems can see, correlate, track, and prepare for threats as they move through cyberspace by sharing information across the network, enabling local nodes to respond with increasing accuracy and efficiency to events by leveraging continually updated response models.

It's all part of an iterative cycle, where in addition to the passive data collected by local learning nodes, the data gleaned from active responses, including how malware or attackers fight back, will also get shared across the network of local peers. This will let the entire system further refine its ability to identify additional unique characteristics of attack patterns and strategies, and formulate increasingly effective threat responses.
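The article stays at the conceptual level, but one common way to realize this kind of federated machine learning is federated averaging: each node trains on its own local telemetry, and only model parameters, never raw data, are shared and combined. The toy linear model and node data below are assumptions for illustration, not any vendor's implementation.

```python
# Hedged sketch of federated averaging, one common way to realize federated
# machine learning: local nodes share model weights, never raw telemetry,
# and a coordinator averages them. Toy numbers only.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """One round of local training on a node's own data
    (toy least-squares gradient step on (features, labels))."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(node_weights, node_sizes):
    """Combine local models, weighting each node by how much data it saw."""
    total = sum(node_sizes)
    return sum(w * (n / total) for w, n in zip(node_weights, node_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
nodes = [(rng.random((50, 3)), rng.random(50)) for _ in range(4)]  # 4 sensors

for _ in range(5):                           # a few federation rounds
    local_ws = [local_update(global_w, data) for data in nodes]
    global_w = federated_average(local_ws, [len(d[1]) for d in nodes])

print("shared global model:", global_w)
```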

There are many encouraging implications for cybersecurity. Security pros will use this system of distributed nodes connected to a central AI brain to detect even the most subtle deviations in normal network traffic. Examples of this are already emerging in research and development labs, particularly in health care, where researchers are using federated learning to train algorithms without centralizing sensitive data and running afoul of HIPAA. When added to production networks, this technology will make it increasingly difficult for cybercriminals to hide.

Building from there, AI can share its locally collected data with other AI systems via an M2M interface, whether from peers in an industry, within a specific geography, or with law enforcement developing a more global perspective.

In addition to pulling from external feeds or analyzing internal traffic and data, federated machine learning will feed on the deluge of relevant information coming from new edge computing devices and environments being collected by local learning nodes.

For this to work, these local nodes will need to operate in a continuous learning mode and evolve from a hub-and-spoke model back to the central AI to a more interconnected system. Rather than operating as information islands, a federated learning system would let these data sets interconnect so that learning models could adapt to event trends and changing environments from the moment a threat gets detected.

That way, rather than waiting for information to make the round trip to the central AI once an attack sensor has been tripped, other local learning nodes and embedded security devices are immediately alerted. These regional elements could then create and coordinate an ad-hoc swarm of local, interactive components to autonomously respond to the threat in real-time, even in mid-attack by anticipating the next move of the attacker or malware, while waiting for refined intelligence from a supervised authoritative master AI node.

Finally, the systems would share these events with the master AI node and also local learner nodes so that an event at one location improves the intelligence of the entire system. This would let the system customize the intelligence to the unique configurations and solutions in place at a particular place in the network. This would help local nodes collect and process data more efficiently, and also enhance their first-tier response to local cyber events.

The security industry clearly needs more efficient ways to analyze threat intelligence. When combined with automation to assist with autonomous decision-making, the intelligence gathered with federated machine learning will help organizations more effectively fight the increasingly aggressive and damaging nature of today's cybercrime. Throughout 2020 and beyond, AI in its various forms will continue to move forward, helping to level the playing field and making it more possible to fend off the growing deluge of attacks.

Derek Manky, chief, Global Threat Alliances, FortiGuard Labs


Key Trends Framing the State of AI and ML – insideBIGDATA

In this special guest feature, Rachel Roumeliotis, Vice President of Content Strategy at O'Reilly Media, provides a deep dive into what topics and terms are on the rise in the data science industry, and also touches on important technology trends and shifts in learning these technologies. Rachel leads an editorial team that covers a wide variety of programming topics, ranging from data and AI, to open source in the enterprise, to emerging programming languages. She has been working in technical publishing for 14+ years, acquiring content in many areas, including software development, UX, computer security and AI.

There's no doubt that artificial intelligence continues to be swiftly adopted by companies worldwide. In just the last few years, most companies that were evaluating or experimenting with AI are now using it in production deployments. When organizations adopt analytic technologies like AI and machine learning (ML), it naturally prompts them to start asking questions that challenge them to think differently about what they know about their business across departments, from manufacturing, production and logistics, to sales, customer service and IT. An organization's use of AI and ML tools and techniques and the various contexts in which it uses them will change as they gain new knowledge.

O'Reilly's learning platform is a treasure trove of information about the trends, topics, and issues tech and business leaders need to know to do their jobs and keep their businesses running. We recently analyzed the platform's usage data to take a closer look at the most popular and most-searched topics in AI and ML. Below are some of the key findings that show where the state of AI and ML is, and where it is headed.

Unrelenting Growth in AI and ML

First and foremost, our analysis found that interest in AI continues to grow. When comparing 2018 to 2019, engagement in AI increased by 58%, far outpacing growth in the much larger machine learning topic, which increased only 5% in 2019. When aggregating all AI and ML topics, this accounts for nearly 5% of all usage activity on the platform. While this is just slightly less than high-level, well-established topics like data engineering (8% of usage activity) and data science (5% of usage activity), interest in these topics grew 50% faster than data science. Data engineering actually decreased about 8% over the same time due to declines in engagement with data management topics.

We also discovered early signs that organizations are experimenting with advanced tools and methods. Of our findings, engagement in unsupervised learning content is probably one of the most interesting. In unsupervised learning, an AI algorithm is trained to look for previously undetected patterns in a data set with no pre-existing labels or classification and with minimal human supervision or guidance. Usage of unsupervised learning topics grew by 53% in 2018 and by 172% in 2019.
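For readers less familiar with the term, a minimal example of unsupervised learning is clustering unlabeled points: the algorithm discovers the groups on its own. The sketch below is a toy k-means implementation with made-up data, purely for illustration.

```python
# Minimal illustration of unsupervised learning as described above: k-means
# clustering finds groups in unlabeled data with no pre-existing labels.
# Data and cluster count are arbitrary toy choices.
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the centers.
        dists = np.linalg.norm(points[:, None] - centers, axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

# Two well-separated blobs of unlabeled 2D points:
data = np.vstack([np.random.randn(50, 2) + 5, np.random.randn(50, 2) - 5])
labels, centers = kmeans(data)
print("discovered cluster centers:", centers.round(1))
```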

But what's driving this growth? While the names of its methods (clustering and association) and its applications (neural networks) are familiar, unsupervised learning isn't as well understood as its supervised learning counterpart, which serves as the default strategy for ML for most people and most use cases. This surge in unsupervised learning activity is likely driven by a lack of familiarity with its uses, benefits, and requirements by more sophisticated users who are faced with use cases not easily addressed with supervised methods.

Deep Learning Spurs Interest in Other Advanced Techniques

While deep learning cooled slightly in 2019, it still accounted for 22% of all AI and ML usage. We also suspect that its success has helped spur the resurrection of a number of other disused or neglected ideas. The biggest example of this is reinforcement learning. This topic experienced exponential growth, growing over 1,500% since 2017.

Even with engagement rates dropping by 10% in 2019, deep learning itself is one of the most popular ML methods among companies that are evaluating AI, with many companies choosing the technique to support production use cases. It might be that engagement with deep learning topics has plateaued because most people are already actively engaging with the technology, meaning growth could slow down.

Natural language processing is another topic that has shown consistent growth. While its growth rate isn't huge (it grew by 15% in 2018 and 9% in 2019), natural language processing accounts for about 12% of all AI and ML usage on our platform. This is around 6x the share of unsupervised learning and 5x the share of reinforcement learning usage, despite the significant growth these two topics have experienced over the last two years.

Not all AI/ML methods are treated equally, however. For example, interest in chatbots seems to be waning, with engagement decreasing by 17% in 2018 and by 34% in 2019. This is likely because chatbots were one of the first applications of AI, and the decline is probably a reflection of the relative maturity of the application.

The growing engagement in unsupervised learning and reinforcement learning demonstrates that organizations are experimenting with advanced analytics tools and methods. These tools and techniques open up new use cases for businesses to experiment and benefit from, including decision support, interactive games, and real-time retail recommendation engines. We can only imagine that organizations will continue to use AI and ML to solve problems, increase productivity, accelerate processes, and deliver new products and services.


Why AI bias can’t be solved with more AI – BusinessCloud

Alejandro Saucedo says he could spend hours talking about solutions to bias in machine learning algorithms.

In fact, he has already spent countless hours on the topic via talks at events and in his day-to-day work.

It's an area he is uniquely qualified to tackle. He is engineering director of machine learning at London-based Seldon Technologies, and chief scientist at The Institute for Ethical AI and Machine Learning.

His key thesis is that the bias which creeps into AI, a problem far from hypothetical, cannot be solved with more tech but with the reintroduction of human expertise.

In recent years, countless stories have detailed how AI decisioning has resulted in women being less likely to qualify for loans, minorities being unfairly profiled by police, and facial recognition technology performing more accurately when analysing white, male faces.

"You are affecting people's lives," he tells BusinessCloud, in reference to the magnitude of these automated decisions in the security and defence space, and even in the judicial process.

Saucedo explains that machine learning processes are, by definition, designed to be discriminatory, but not like this.

"The purpose of machine learning is to discriminate toward a right answer, he said.

"Humans are not born racist, and similarly machine learning algorithms are not by default going to be racist. Theyare a reflection ofthedata ingested."

If algorithms adopt human bias from our biased data, removing bias therefore suggests the technology has great potential.

But the discussion often stops at this theoretical level or acts as a cue for engineers to fine-tune the software in the hopes of a more equitable outcome.

It's not that simple, Saucedo suggests.

"An ethical question of that magnitude shouldn't fall onto the shoulders of a single data scientist. They will not have the full picture in order to make a call that could have impact on individuals across generations," he says.

Instead the approach with the most promise takes one step further back from the problem.

Going beyond the algorithm, as he puts it, involves bringing in human experts, increasing regulation, and a much lighter touch when introducing the technology at all.

"Instead of just dumping an entire encyclopaedia of an industry into a neural network to learn from scratch, you can bring in domain experts to understand how these machines learn," he explains.

This approach allows those making the technology to better explain why an algorithm makes the choices it does something which is almost impossible with the black box of a neural network working on its own.

For instance, a lawyer could help with the building of a legal AI, to guide and review the machine learning's output for nuances, even small things like words which are capitalised.

In this way, he says, the resulting machine learning becomes easier to understand.

This approach means automating a percentage of the process, and requiring a human for the remainder, or what he calls 'human augmentation' or 'human manual remediation'.
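Saucedo doesn't prescribe an implementation, but a common way to realize this pattern is to act automatically only on high-confidence predictions and queue everything else for a domain expert. The confidence threshold and model outputs below are illustrative assumptions.

```python
# Hedged sketch of the 'human augmentation' pattern described above: automate
# only high-confidence predictions and queue the rest for human review.
# The threshold and the fake model scores are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # tune per use case and per level of impact

def triage(predictions):
    """predictions: list of (item_id, label, confidence)."""
    automated, needs_review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            automated.append((item_id, label))
        else:
            needs_review.append((item_id, label, confidence))
    return automated, needs_review

model_output = [("doc-1", "approve", 0.97), ("doc-2", "reject", 0.62),
                ("doc-3", "approve", 0.91), ("doc-4", "reject", 0.40)]
auto, review = triage(model_output)
print("automated:", auto)
print("sent to domain expert:", review)
```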

This could slow down the development of potentially lucrative technology battling to win the AI arms race, but it was a choice he said would ultimately be good for business and people.

"You either take the slow and painful route which works, or you take the quick fix which doesn't, he says.

Saucedo is only calling for red tape which is proportionate to its potential impact. In short, a potential 'legal sentencing prediction system' needs more governance than a prototype being tested on a single user.

He says anyone building machine learning algorithms with societal impact should be asking how they can build a process which still requires review from human domain expertise.

"If there is no way to introduce a human in to review, the question is: should you even be automating that process? If you should, you need to make sure that you have the ethics structure and some form of ethics board to approve those use cases."

And while his premise is that bias is not a single engineer's problem, he said that this does not make them now exempt.

"It is important as engineers, individuals and as people providing that data to be aware of the implications. Not only because of the bad usecases, butbeing aware that most of the incorrect applications of machine learning algorithms are not done through malice but lack of best practice."

This self-regulation might be tough for fast-paced AI firms hoping to make sales, but conscious awareness on the part of everyone building these systems is a professional responsibility, he says.

And even self-regulation is only the first step. Good ethics alone does not guarantee a lack of blind spots.

That's why Saucedo also suggests external regulation, and this doesn't have to slow down innovation.

"When you introduce regulations that are embedded with what is needed, things are done the right way. And when they're done the right way, they're more efficient and there is more room for innovation."

For businesses looking to incorporate machine learning, rather than building it, he points to The Institute for Ethical AI & Machine Learning's AI-RFX Procurement Framework.

The idea is to abstract the initial high-level principles created at The Institute, such as the human augmentation mentioned earlier, and trust and privacy by design. It breaks these principles down into a security questionnaire.

"We've taken all of these principles, and we realised that understanding and agreeing on exact best-practice is very hard. What is universally agreed is what bad practice is."

This, along with access to the right stakeholders to evaluate the data and content, is enough to sort mature AI businesses from those "selling snake oil".

The institute is also contributing to some of the official industry standards that are being created for organisations like the police and the ISO, he explains.

And the work is far from done: if a basic framework and regulation can be created with enough success to be adopted internationally, even differing Western and Eastern ethics need to be accounted for.

"In the West you have good and bad, and in theEastit is more about balance," he says.

There are also the differing concepts of the self versus the community. The considerations quickly become philosophical and messy, a sign that they are a little bit more human.

"If we want to reach international standards and regulation, we need to be able to align on those foundational components, to know where everyone is coming from," he says.


How Does Satellite Tracking of Wildlife Contribute to Environmental Causes?

Issues like global warming, forest fires, oil spills in oceans, and a bundle of other environmental hazards contributing to the endangerment of wildlife habitats and wildlife itself are not new to any of us. The damage done to our planet, whether through the Australian bushfires, the Amazon fires or any other similar unfortunate incident, is undeniable, and the affected regions are hard to restore.

However, technology has enabled us to lessen the consequences of such massive environmental degradation. Many inventions aim to protect and restore the natural environment of our planet and to ensure the safety of wildlife. For instance, just as a solar-powered security camera saves some electricity, satellite tracking of wildlife plays its part in conserving the environment.

How does the tracking system work?

To understand the benefits of satellite tracking of wildlife, we must first know how the technology works. Satellite tracking was introduced in the late '90s to study animal populations, their health, their migrations, and so on. The information collected is then used to learn about biodiversity, land, and climate change. A tracking device, or tag, is attached to the animal under study and sends information to a satellite via radio transmission.

The tag can be attached to an animal's neck or limbs, particularly the ankles, via a collar. If an animal's neck is not suitable for this purpose, as with pigs, the device is often harnessed so that the subject cannot remove it. Tracking devices can also be attached directly to animals with glue or tape. Data is often collected at pre-set intervals, known as 'duty cycles', to save the device's battery and prolong its lifespan. Some devices are programmed to drop off after a certain period; otherwise, they are retrieved manually by recapturing the animal.
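To make the 'duty cycle' idea concrete, here is a small, illustrative Python simulation of a hypothetical tag that wakes at a fixed interval, records a position fix, and sleeps in between to conserve battery. The interval, battery cost and coordinates are invented for illustration and do not describe any real tracking hardware.

```python
import random

DUTY_CYCLE_HOURS = 6        # assumed interval between fixes
BATTERY_COST_PER_FIX = 0.5  # assumed battery drain (%) per recorded and transmitted fix


def simulate_tag(days: int, battery_pct: float = 100.0):
    """Record one (hour, lat, lon) fix per duty cycle until the battery runs out."""
    fixes = []
    for hour in range(0, days * 24, DUTY_CYCLE_HOURS):
        if battery_pct <= 0:
            break
        # Stand-in for a GPS reading relayed to the satellite.
        lat = 54.0 + random.uniform(-0.1, 0.1)
        lon = -2.0 + random.uniform(-0.1, 0.1)
        fixes.append((hour, round(lat, 4), round(lon, 4)))
        battery_pct -= BATTERY_COST_PER_FIX
    return fixes, battery_pct


fixes, remaining = simulate_tag(days=30)
print(f"{len(fixes)} fixes collected, {remaining:.1f}% battery remaining")
```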

What Benefits Does Satellite Tracking Offer for Environmental Causes?


The data collected through satellite tracking systems aid environmentalists, biologists, and researchers in setting up new strategies for the well-being of wildlife and the planet. Whether we want to know about marine life or about habitats and species on the ground, this technology helps us in all domains.

Migration and Breeding:

The movements and migration of animals can be monitored closely. A change in habitat, if out of pattern, can indicate a threat to their natural habitat in time, so that measures to conserve it can be taken beforehand. Another purpose is to watch what McCauley calls 'landscape risks': the safety of human populations residing near wildlife habitats can be ensured, keeping in view increasing human development. Likewise, breeding patterns and the rate at which a specific species is increasing or decreasing can help us take precautionary measures, either to control the population or to conserve endangered species, while keeping the taste of the wild intact.

Prevention of Diseases:

Animals transmit diseases through zoonotic pathogens, to which humans are prone to fall prey. The recent coronavirus outbreak in China, which is suspected to have been transmitted by an animal or a bird, seized the entire country. Another example of such a massive threat is the bubonic plague, which is said to have killed 30 to 60 percent of the European population. Satellite tracking of wildlife can help prevent such outbreaks by allowing preventive measures to be taken in time.

Monitoring Climate Change and Its Effects:

Climate change threatens animals and humans equally. The way animals react to climate change helps us know what to expect in the future. Changes in migration cycles have led scientists to study the condition of the ozone layer and which parts of the Earth are more prone to damage from its thinning. Likewise, the pace at which icebergs are melting, the resulting rise in sea levels, and how this endangers wildlife in the Arctic region can all be discussed with evidential support.

Data collected through satellite tracking ultimately help us persuade and warn everyone about climate change and how genuinely destructive it can be if precautionary measures are not taken. Awareness of environmental degradation can therefore lead us to a better, environment-friendly future, ensuring the safety of our future generations.

 

This Is the First Universal Language for Quantum Computers – Popular Mechanics


A quantum computing startup called Quantum Machines has released a new programming language called QUA. The language runs on the startup's proprietary Quantum Orchestration Platform.

Quantum Machines says its goal is to complete the stack that includes quantum computing at the very bottom-most level. Yes, those physical interactions between quantum bits (qubits) are what set quantum computers apart from traditional hardware, but you still need the rest of the hardware that will turn physical interactions into something that will run software.

And, of course, you need the software, too. That's where QUA comes in.

"The transition from having just specific circuits (physical circuits for specific algorithms) to the stage at which the system is programmable is the dramatic point," CEO Itamar Sivan told TechCrunch. "Basically, you have a software abstraction layer and then you get to the era of software and everything accelerated."

The language Quantum Machines describes in its materials isn't what you think of when you imagine programming, unless you're a machine language coder. What's machine language? That's the lowest possible level of code, where the instructions aren't in natural or human language and are instead in tiny bits of direct instruction for the hardware itself.

Coder Ben Eater made a great video that walks you through a sample program written in C, which is a higher and more abstract language, and how that information translates all the way down into machine code. (Essentially, everything gets much messier and much less readable to the human eye.)
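The same idea can be glimpsed without C: the short Python sketch below (not from the video) uses the standard library's dis module to print the lower-level bytecode instructions that a one-line, human-readable function is translated into.

```python
import dis


def add_and_scale(a, b):
    # A high-level, readable statement...
    return (a + b) * 2


# ...and the lower-level instructions the interpreter actually executes.
dis.dis(add_and_scale)
```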


Machine code acts as a reminder that, on a fundamental level, everything inside your computer is passing nano-Morse code back and forth to do everything you see on the screen as well as all the behind-the-scenes routines and coordination. Since quantum computers have a brand new paradigm for the idea of hardware itself, there's an opening for a new machine code.

Quantum Machines seems to want to build the entire quantum system, from hardware to all the software to control and highlight it. And if that sounds overly proprietary or like some unfair version of how to develop new technology, we have some bad news for you about the home PC wars of the 1980s or the market share Microsoft Windows still holds among operating systems.

By offering a package deal with something for everyone when quantum computing isn't even a twinkle in the eye of the average consumer, Quantum Machines could be making inroads that will keep it ahead for decades. A universal language, indeed.

"QUA is what we believe is the first candidate to become what we define as the quantum computing software abstraction layer," Sivan told TechCrunch. In 20 years, we might look back on QUA the way today's users view DOS.



Originally posted here:
This Is the First Universal Language for Quantum Computers - Popular Mechanics

Is teleportation possible? Yes, in the quantum world – University of Rochester

Quantum teleportation is an important step in improving quantum computing.

"Beam me up" is one of the most famous catchphrases from the Star Trek series. It is the command issued when a character wishes to teleport from a remote location back to the Starship Enterprise.

While human teleportation exists only in science fiction, teleportation is possible in the subatomic world of quantum mechanics, albeit not in the way typically depicted on TV. In the quantum world, teleportation involves the transportation of information, rather than the transportation of matter.

Last year scientists confirmed that information could be passed between photons on computer chips even when the photons were not physically linked.

Now, according to new research from the University of Rochester and Purdue University, teleportation may also be possible between electrons.

In a paper published in Nature Communications and one to appear in Physical Review X, the researchers, including John Nichol, an assistant professor of physics at Rochester, and Andrew Jordan, a professor of physics at Rochester, explore new ways of creating quantum-mechanical interactions between distant electrons. The research is an important step in improving quantum computing, which, in turn, has the potential to revolutionize technology, medicine, and science by providing faster and more efficient processors and sensors.

Quantum teleportation is a demonstration of what Albert Einstein famously called "spooky action at a distance," also known as quantum entanglement. In entanglement, one of the basic concepts of quantum physics, the properties of one particle affect the properties of another, even when the particles are separated by a large distance. Quantum teleportation involves two distant, entangled particles in which a third particle instantly teleports its state to the two entangled particles.

Quantum teleportation is an important means for transmitting information in quantum computing. While a typical computer consists of billions of transistors that process bits, quantum computers encode information in quantum bits, or qubits. A bit has a single binary value, which can be either 0 or 1, but qubits can be both 0 and 1 at the same time. The ability of individual qubits to simultaneously occupy multiple states underlies the great potential power of quantum computers.
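That "0 and 1 at the same time" statement has a simple numerical reading. The following illustrative NumPy sketch (a classical simulation, not anything from the Rochester experiment) represents a qubit as two complex amplitudes and shows how the squared magnitudes give the probabilities of measuring 0 or 1.

```python
import numpy as np

# An equal superposition of |0> and |1>: the qubit is "both" until measured.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(state) ** 2   # Born rule: |amplitude|^2
print(probabilities)                 # [0.5 0.5]

# Simulate repeated measurements: roughly half 0s and half 1s.
samples = np.random.choice([0, 1], size=10, p=probabilities)
print(samples)
```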

Scientists have recently demonstrated quantum teleportation by using electromagnetic photons to create remotely entangled pairs of qubits.

Qubits made from individual electrons, however, are also promising for transmitting information in semiconductors.

"Individual electrons are promising qubits because they interact very easily with each other, and individual electron qubits in semiconductors are also scalable," Nichol says. "Reliably creating long-distance interactions between electrons is essential for quantum computing."

Creating entangled pairs of electron qubits that span long distances, which is required for teleportation, has proved challenging, though: while photons naturally propagate over long distances, electrons usually are confined to one place.

In order to demonstrate quantum teleportation using electrons, the researchers harnessed a recently developed technique based on the principles of Heisenberg exchange coupling. An individual electron is like a bar magnet with a north pole and a south pole that can point either up or down. The direction of the pole (whether the north pole is pointing up or down, for instance) is known as the electron's magnetic moment or quantum spin state. If certain kinds of particles have the same magnetic moment, they cannot be in the same place at the same time. That is, two electrons in the same quantum state cannot sit on top of each other. If they did, their states would swap back and forth in time.
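That swapping behavior can be illustrated numerically. The sketch below is a toy simulation, not the authors' experiment: it writes the Heisenberg exchange interaction between two spins as a 4x4 matrix and shows an "up, down" state evolving into "down, up".

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

J = 1.0  # exchange coupling strength (arbitrary units)
# Heisenberg exchange between two spin-1/2 particles: H = (J/4) * sigma1 . sigma2
H = (J / 4) * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

# Start in |up, down> = |01> (basis order |00>, |01>, |10>, |11>).
psi0 = np.array([0, 1, 0, 0], dtype=complex)

for t in [0.0, np.pi / (2 * J), np.pi / J]:
    psi_t = expm(-1j * H * t) @ psi0
    probs = np.abs(psi_t) ** 2
    print(f"t={t:.2f}  P(|01>)={probs[1]:.2f}  P(|10>)={probs[2]:.2f}")
# The populations oscillate: at t = pi/J the two spin states have fully swapped.
```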

The researchers used the technique to distribute entangled pairs of electrons and teleport their spin states.

"We provide evidence for entanglement swapping, in which we create entanglement between two electrons even though the particles never interact, and quantum gate teleportation, a potentially useful technique for quantum computing using teleportation," Nichol says. "Our work shows that this can be done even without photons."

The results pave the way for future research on quantum teleportation involving spin states of all matter, not just photons, and provide more evidence for the surprisingly useful capabilities of individual electrons in qubit semiconductors.

Read more here:
Is teleportation possible? Yes, in the quantum world - University of Rochester

Quantum Computing for Enterprise Market is thriving worldwide with Top Key Players like D-Wave Systems Inc. (Canada), QX Branch (US), International…

Quantum computers are based on the principle of superposition, which allows them to achieve the high computational power necessary for advanced applications such as cryptography, drug discovery, and machine learning. Presently, cryptography relies on algorithms such as AES-256 (a symmetric cipher) and the public-key algorithms RSA and ECDSA. These algorithms are secure for existing computing needs but are expected to become vulnerable as quantum computing advances. Quantum computing application developers have started testing encryption algorithms with quantum keys, which is expected to offer secure encryption for the protection of data against the computational power of future systems.

The global Quantum Computing for Enterprise market is expected to expand at a CAGR of +24% over the forecast period 2020-2026.

The report, titled Global Quantum Computing for Enterprise Market, defines and briefs readers about its products, applications, and specifications. The research lists key companies operating in the global market and also highlights the key changing trends adopted by the companies to maintain their dominance. By using SWOT analysis and Porter's five forces analysis tools, the strengths, weaknesses, opportunities, and threats of key companies are all mentioned in the report. All leading players in this global market are profiled with details such as product types, business overview, sales, manufacturing base, competitors, applications, and specifications.

Top Key Vendors in Market:

D-Wave Systems Inc. (Canada), QX Branch (US), International Business Machines Corporation (US), Cambridge Quantum Computing Limited (UK), 1QB Information Technologies (Canada), QC Ware, Corp. (US), StationQ Microsoft (US), Rigetti Computing (US), Google Inc. (US), River Lane Research (US)

Get Sample Copy of this Report @

https://www.a2zmarketresearch.com/sample?reportId=704

The Quantum Computing for Enterprise market report comprises an in-depth assessment of this sector. This statistical report also provides a detailed study of the demand and supply chain in the global sector. The competitive landscape has been elaborated by describing various aspects of the leading industries such as shares, profit margin, and competition at the domestic and global level.

Different global regions such as North America, Latin America, Asia-Pacific, Europe, and India have been analyzed on the basis of manufacturing base, productivity, and profit margin. This Quantum Computing for Enterprise market research report has been compiled on the basis of different practice-oriented case studies from various industry experts and policymakers. It uses numerous graphical presentation techniques such as tables, charts, graphs, pictures and flowcharts for easy and better understanding by readers.

Different internal and external factors that are responsible for driving or restraining the progress of companies in the Quantum Computing for Enterprise market have been elaborated. Different methodologies for discovering global opportunities and growing the customer base rapidly have also been included.

Get Upto 20% Discount on this Report @

https://www.a2zmarketresearch.com/discount?reportId=704

Table of Content:

Global Quantum Computing for Enterprise Market Research Report 2020-2026

Chapter 1: Industry Overview

Chapter 2: Quantum Computing for Enterprise Market International and China Market Analysis

Chapter 3: Environment Analysis of Quantum Computing for Enterprise.

Chapter 4: Analysis of Revenue by Classifications

Chapter 5: Analysis of Revenue by Regions and Applications

Chapter 6: Analysis of Quantum Computing for Enterprise Market Revenue Market Status.

Chapter 7: Analysis of Quantum Computing for Enterprise Industry Key Manufacturers

Chapter 8: Sales Price and Gross Margin Analysis

Chapter 9: Marketing Trader or Distributor Analysis of Quantum Computing for Enterprise.

Chapter 10: Development Trend of Quantum Computing for Enterprise Market 2020-2026.

Chapter 11: Industry Chain Suppliers of Quantum Computing for Enterprise with Contact Information.

Chapter 12: New Project Investment Feasibility Analysis of Market.

Chapter 13: Conclusion of the Quantum Computing for Enterprise Market Industry 2024 Market Research Report.

Buy This Report @

https://www.a2zmarketresearch.com/buy?reportId=704

About a2zmarketresearch:

The A2Z Market Research library provides syndication reports from market researchers around the world. Ready-to-buy syndication Market research studies will help you find the most relevant business intelligence.

Our research analysts provide business insights and market research reports for large and small businesses.

The company helps clients build business policies and grow in that market area. A2Z Market Research is not only interested in industry reports dealing with telecommunications, healthcare, pharmaceuticals, financial services, energy, technology, real estate, logistics, F & B, media, etc. but also your company data, country profiles, trends, information and analysis on the sector of your interest.

Contact Us:

1887 WHITNEY MESA DR HENDERSON, NV 89014

+1 775 237 4147

[emailprotected]

Read more:
Quantum Computing for Enterprise Market is thriving worldwide with Top Key Players like D-Wave Systems Inc. (Canada), QX Branch (US), International...

Physicist Chen Wang Receives DOE Early Career Award – UMass News and Media Relations

The U.S. Department of Energy (DOE) announced this week that it has named 76 scientists from across the country, including assistant professor of physics Chen Wang, to receive significant funding for research with its Early Career Award. It provides university-based researchers with at least $150,000 per year in research support for five years.

DOE Under Secretary for Science Paul Dabbar says DOE is "proud to support funding that will sustain America's scientific workforce and create opportunities for our researchers to remain competitive on the world stage. By bolstering our commitment to the scientific community, we invest in our nation's next generation of innovators."

Wang says, "I feel very honored to receive this award. This is a great opportunity to explore a new paradigm of reducing error for emerging quantum technologies."

His project involves enhancing quantum bit (qubit) performance using a counter-intuitive new approach. He will harness friction, usually an unwelcome source of error in quantum devices, to make qubits perform with fewer errors. The work is most relevant for quantum computing, he says, but potential applications also include cryptography, communications and simulations.

One of the basic differences between classical and quantum computing, which is not in practical use yet, is that classical computers perform calculations and store data using stable bits, labeled as zero or one, that never unintentionally change. Accidental change would introduce error.

By contrast, in quantum computing, qubits can flip from zero to one or anywhere in between. This is a source of their great promise to vastly expand quantum computers' ability to perform calculations and store data, but it also introduces errors, Wang explains.

"The world is intrinsically quantum," he says, so using a classical computer to make predictions at the quantum level about the properties of anything composed of more than a few dozen atoms is limited. Quantum computing increases the ability to process information exponentially. With every extra qubit you add, the amount of information you can process doubles.

"Think of the state of a bit or a qubit as a position on a sphere," he says. For a classical bit, a zero or one is stable, maybe the north or south pole. But a quantum bit can be anywhere on the surface or be continuously tuned between zero and one.
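That sphere picture can be made concrete with a few lines of NumPy. The sketch below is a generic illustration, not Wang's device: it maps a qubit state to the two angles that locate it on the sphere, with the classical-like states sitting at the poles.

```python
import numpy as np


def bloch_angles(alpha, beta):
    """Map a qubit state alpha|0> + beta|1> to Bloch-sphere angles (theta, phi)."""
    norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    theta = 2 * np.arccos(np.clip(abs(alpha), 0, 1))  # 0 = north pole, pi = south pole
    phi = np.angle(beta) - np.angle(alpha)            # longitude around the sphere
    return theta, phi


print(bloch_angles(1, 0))                             # classical-like 0: north pole (theta = 0)
print(bloch_angles(0, 1))                             # classical-like 1: south pole (theta = pi)
print(bloch_angles(1 / np.sqrt(2), 1j / np.sqrt(2)))  # equal superposition: on the equator
```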

To address potential errors, Wang plans to explore a new method to reduce qubit errors by introducing autonomous error correction: the qubit corrects itself. In quantum computing, correcting errors is substantially harder than in classical computing because "you are literally forbidden from reading your bits or making backups," he says.

"Quantum error correction is a beautiful, surprising and complicated possibility that makes a very exciting experimental challenge. Implementing the physics of quantum error correction is the most fascinating thing I can think of in quantum physics."

We are already familiar with how friction helps in stabilizing a classical, non-quantum system, he says, such as a swinging pendulum. The pendulum will eventually stop due to friction: the resistance of air dissipates energy, and the pendulum will not randomly go anywhere, Wang points out.

In much the same way, introducing friction between a qubit and its environment puts a stabilizing force on it. "When it deviates, the environment will give it a kick back in place," he says. However, the kick has to be designed in very special ways. Wang will experiment using a super-cooled superconducting device made of a sapphire chip on which he will deposit a very thin patterned aluminum film.

He says, "It's a very difficult challenge, because to have one qubit correct its errors, by some estimates you need tens to even thousands of other qubits to help it, and they need to be in communication. But it is worthwhile because with them, we can do things faster and we can do tasks that are impossible with classical computing now."

See more here:
Physicist Chen Wang Receives DOE Early Career Award - UMass News and Media Relations

Global Quantum Computing for Enterprise Market Expected to Reach Highest CAGR by 2025 Top Players: 1QB Information Technologies, Airbus, Anyon…

This research report on the Global Quantum Computing for Enterprise Market provides an in-depth analysis of the market share, industry size, and current and future market trends. The Quantum Computing for Enterprise market report majorly sheds light on the market scope, growth prospects, potential, and the historical data of the market. The Quantum Computing for Enterprise market report offers a complete segmentation depending on factors such as end-use, type, application, and geographical regions that offer the assessment of every aspect of the Quantum Computing for Enterprise market. Similarly, the Quantum Computing for Enterprise report contains the market share on the basis of current as well as forecasted Quantum Computing for Enterprise market growth.

This study covers the following key players: 1QB Information Technologies, Airbus, Anyon Systems, Cambridge Quantum Computing, D-Wave Systems, Google, Microsoft, IBM, Intel, QC Ware, Quantum, Rigetti Computing, Strangeworks, Zapata Computing

Request a sample of this report @ https://www.orbismarketreports.com/sample-request/66126?utm_source=Puja

Furthermore, the Quantum Computing for Enterprise market report broadly analyzes accurate estimations of the Quantum Computing for Enterprise market. This global market report also examines the market segments, ascendant contenders, competitive analysis, industry environment, and modern trends of the global Quantum Computing for Enterprise market. Such factors are major considerations in the progress assessment of the Quantum Computing for Enterprise market. In addition, the Quantum Computing for Enterprise market study delivers a deep estimate of the global industry demand, market share, sales, industry revenue, and market size of the target industry.

Access Complete Report @ https://www.orbismarketreports.com/global-quantum-computing-for-enterprise-market-size-status-and-forecast-2019-2025-2?utm_source=Puja

Market segment by Type, the product can be split into Hardware, Software

Market segment by Application, split into BFSI, Telecommunications and IT, Retail and E-Commerce, Government and Defense, Healthcare, Manufacturing, Energy and Utilities, Construction and Engineering, Others

Moreover, the retailers, exporters, and leading service providers across the globe are also covered in the Quantum Computing for Enterprise market report, along with their data such as price, product capacity, company profile, product portfolio, market revenue, and the cost of the product. Likewise, the graphical description and suitable figures of the Quantum Computing for Enterprise industry are also featured in this report. This research report also gives data like sales revenue, industry value & volume, upstream & downstream buyers, and industry chain formation. Likewise, the Quantum Computing for Enterprise market study offers an extensive view of the changing market dynamics, market trends, restraints, driving factors, changing patterns, as well as restrictions of the market. The Quantum Computing for Enterprise market study is designed through quantitative and qualitative research techniques that majorly shed light on the industry growth and the various challenges faced by the leading competitors, along with the gap analysis and beneficial opportunities provided by the Quantum Computing for Enterprise market.

Some Major TOC Points: 1 Report Overview; 2 Global Growth Trends; 3 Market Share by Key Players; 4 Breakdown Data by Type and Application; Continued

The Quantum Computing for Enterprise research study is a helpful analysis which emphasizes geographical analysis, primary & secondary research methodologies, market drivers, and leading segmentation and sub-segments analysis. With the whole overview of the Quantum Computing for Enterprise market, the study provides the overall viability of future projects and delivers the Quantum Computing for Enterprise report conclusion. The Quantum Computing for Enterprise market report also delivers market evaluation along with the PESTEL, SWOT, and other necessary data. In addition, the Quantum Computing for Enterprise market study categorizes the global market data by using numerous factors such as application, region, manufacturers, and type.

For Enquiry before buying report @ https://www.orbismarketreports.com/enquiry-before-buying/66126?utm_source=Puja

About Us: With unfailing market gauging skills, the company has been excelling in curating tailored business intelligence data across industry verticals. Constantly striving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem-solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us: Hector Costello, Senior Manager, Client Engagements, 4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155

More:
Global Quantum Computing for Enterprise Market Expected to Reach Highest CAGR by 2025 Top Players: 1QB Information Technologies, Airbus, Anyon...

Teleportation Is Indeed Possible At Least in the Quantum World – SciTechDaily

Quantum teleportation is an important step in improving quantum computing.

"Beam me up" is one of the most famous catchphrases from the Star Trek series. It is the command issued when a character wishes to teleport from a remote location back to the Starship Enterprise.

While human teleportation exists only in science fiction, teleportation is possible in the subatomic world of quantum mechanics, albeit not in the way typically depicted on TV. In the quantum world, teleportation involves the transportation of information, rather than the transportation of matter.

Last year scientists confirmed that information could be passed between photons on computer chips even when the photons were not physically linked.

Now, according to new research from the University of Rochester and Purdue University, teleportation may also be possible between electrons.

A quantum processor semiconductor chip is connected to a circuit board in the lab of John Nichol, an assistant professor of physics at the University of Rochester. Nichol and Andrew Jordan, a professor of physics, are exploring new ways of creating quantum-mechanical interactions between distant electrons, promising major advances in quantum computing. Credit: University of Rochester photo / J. Adam Fenster

In a paper published in Nature Communications and one to appear in Physical Review X, the researchers, including John Nichol, an assistant professor of physics at Rochester, and Andrew Jordan, a professor of physics at Rochester, explore new ways of creating quantum-mechanical interactions between distant electrons. The research is an important step in improving quantum computing, which, in turn, has the potential to revolutionize technology, medicine, and science by providing faster and more efficient processors and sensors.

Quantum teleportation is a demonstration of what Albert Einstein famously called "spooky action at a distance," also known as quantum entanglement. In entanglement, one of the basic concepts of quantum physics, the properties of one particle affect the properties of another, even when the particles are separated by a large distance. Quantum teleportation involves two distant, entangled particles in which a third particle instantly teleports its state to the two entangled particles.

Quantum teleportation is an important means for transmitting information in quantum computing. While a typical computer consists of billions of transistors that process bits, quantum computers encode information in quantum bits, or qubits. A bit has a single binary value, which can be either 0 or 1, but qubits can be both 0 and 1 at the same time. The ability of individual qubits to simultaneously occupy multiple states underlies the great potential power of quantum computers.

Scientists have recently demonstrated quantum teleportation by using electromagnetic photons to create remotely entangled pairs of qubits.

Qubits made from individual electrons, however, are also promising for transmitting information in semiconductors.

"Individual electrons are promising qubits because they interact very easily with each other, and individual electron qubits in semiconductors are also scalable," Nichol says. "Reliably creating long-distance interactions between electrons is essential for quantum computing."

Creating entangled pairs of electron qubits that span long distances, which is required for teleportation, has proved challenging, though: while photons naturally propagate over long distances, electrons usually are confined to one place.

In order to demonstrate quantum teleportation using electrons, the researchers harnessed a recently developed technique based on the principles of Heisenberg exchange coupling. An individual electron is like a bar magnet with a north pole and a south pole that can point either up or down. The direction of the pole (whether the north pole is pointing up or down, for instance) is known as the electron's magnetic moment or quantum spin state. If certain kinds of particles have the same magnetic moment, they cannot be in the same place at the same time. That is, two electrons in the same quantum state cannot sit on top of each other. If they did, their states would swap back and forth in time.

The researchers used the technique to distribute entangled pairs of electrons and teleport their spin states.

"We provide evidence for entanglement swapping, in which we create entanglement between two electrons even though the particles never interact, and quantum gate teleportation, a potentially useful technique for quantum computing using teleportation," Nichol says. "Our work shows that this can be done even without photons."

The results pave the way for future research on quantum teleportation involving spin states of all matter, not just photons, and provide more evidence for the surprisingly useful capabilities of individual electrons in qubit semiconductors.

References:

"Conditional teleportation of quantum-dot spin states" by Haifeng Qiao, Yadav P. Kandel, Sreenath K. Manikandan, Andrew N. Jordan, Saeed Fallahi, Geoffrey C. Gardner, Michael J. Manfra and John M. Nichol, 15 June 2020, Nature Communications. DOI: 10.1038/s41467-020-16745-0

"Coherent multi-spin exchange in a quantum-dot spin chain" by Haifeng Qiao, Yadav P. Kandel, Kuangyin Deng, Saeed Fallahi, Geoffrey C. Gardner, Michael J. Manfra, Edwin Barnes and John M. Nichol, accepted 12 May 2020, Physical Review X. arXiv: 2001.02277

Continue reading here:
Teleportation Is Indeed Possible At Least in the Quantum World - SciTechDaily

Quantum entanglement demonstrated on orbiting CubeSat – University of Strathclyde

25 June 2020

In a critical step toward creating a global quantum communications network, researchers have generated and detected quantum entanglement onboard a CubeSat nanosatellite weighing less than 2.6 kg and orbiting the Earth.

The University of Strathclyde is involved in an international team which has demonstrated that their miniaturised source of quantum entanglement can operate successfully in space aboard a low-resource, cost-effective CubeSat that is smaller than a shoebox. CubeSats are a standard type of nanosatellite made of multiples of 10 cm × 10 cm × 10 cm cubic units.

The quantum mechanical phenomenon known as entanglement is essential to many quantum communications applications. However, creating a global network for entanglement distribution is not possible with optical fibers because of the optical losses that occur over long distances. Equipping small, standardised satellites in space with quantum instrumentation is one way to tackle this challenge in a cost-effective manner.

The research, led by the National University of Singapore, has been published in the journal Optica.

Dr Daniel Oi, a Senior Lecturer in Strathclyde's Department of Physics, is the University's lead on the research. He said: "This research has tested next generation quantum communication technologies for use in space. With the results confirmed, its success bodes well for forthcoming missions, for which we are developing the next enhanced version of these instruments."

As a first step, the researchers needed to demonstrate that a miniaturised photon source for quantum entanglement could stay intact through the stresses of launch and operate successfully in the harsh environment of space within a satellite that can provide minimal power. To accomplish this, they exhaustively examined every component of the photon-pair source used to generate quantum entanglement to see if it could be made smaller or more rugged.

The new miniaturised photon-pair source consists of a blue laser diode that shines on nonlinear crystals to create pairs of photons. Achieving high-quality entanglement required a complete redesign of the mounts that align the nonlinear crystals with high precision and stability.

The researchers qualified their new instrument for space by testing its ability to withstand the vibration and thermal changes experienced during a rocket launch and in-space operation. The photon-pair source maintained very high-quality entanglement throughout the testing, and crystal alignment was preserved even after repeated temperature cycling from -10 °C to 40 °C.

The researchers incorporated their new instrument into SpooQy-1, a CubeSat that was deployed into orbit from the International Space Station on 17 June 2019. The instrument successfully generated entangled photon-pairs over temperatures from 16 °C to 21.5 °C.

The researchers are now working with RAL Space in the UK to design and build a quantum nanosatellite similar to SpooQy-1 with the capabilities needed to beam entangled photons from space to a ground receiver. This is slated for demonstration aboard a 2022 mission. They are also collaborating with other teams to improve the ability of CubeSats to support quantum networks.

Strathclyde is the only university which is a partner in all four of the UKs Quantum Technology Hubs, in Sensing and Timing, Quantum Enhanced Imaging, Quantum Computing and Simulation and Quantum Communications Technologies. Dr Oi is Strathclydes lead on a forthcoming CubeSat mission being developed by the Quantum Communications Technologies Hub.

Dr Oi is also Chief Scientific Officer with Craft Prospect, a space engineering practice that delivers mission-enabling products and develops novel mission applications for small space missions. The company is based in the Tontine Building in the Glasgow City Innovation District, which is transforming the way academia, business and industry collaborate to bring competitive advantage to Scotland.

Visit link:
Quantum entanglement demonstrated on orbiting CubeSat - University of Strathclyde

Docuseries takes viewers into the lives and labs of scientists – UChicago News

The camera crew was given full access to Earnest-Noble's research. In several scenes, Earnest-Noble is suited up in white PPE in the Pritzker Nanofabrication Facility in the Eckhardt Research Center. His scientific process and the breakthrough he seeks are depicted with animations and close-up footage of the state-of-the-art facilities. The filmmakers capture Earnest-Noble in the midst of a failed attempt or among his graveyard of failed quantum devices. As he embraces his doubts and is propelled by tenacity, viewers witness an emotional depiction of real science.

Earnest-Noble's lively interviews focus on the experience versus the result of his labors, providing a realistic portrayal of graduate studies and enabling viewers to follow him to his goal of identifying the ideal qubit for superposition, a phenomenon in quantum mechanics in which a particle can exist in several states at once.

"When we were filming, I was trying to explain a qubit or something, and how much I was using jargon words was eye-opening to me. It helped me appreciate the challenge of making science understandable," said Earnest-Noble, who is now a quantum computing researcher at IBM. "Science is a process far more than a series of facts. That became clear to me from working on this project."

"Science communications typically takes a very long struggle of discovery and wraps it up into a pretty package," said Schuster. "But something I found very special in this story is that you got to follow Nate for a couple of years. It accurately captured what Nate's experience was like. And it focused on his experience, and not on the result, which is pretty amazing."

STAGE's director of science Sunanda Prabhu-Gaunkar originally joined the STAGE lab as a postdoc, and taught herself filmmaking in order to create the series. "The scientific process inspires our filmmaking," she said. "The workflow embraces failure, remains receptive to discoveries through iteration, and allows for risk-taking, all within a highly collaborative process."

Ellen Askey, the pilot episode's co-director, joined the project as a first-year student at UChicago with prior filmmaking experience. She worked on the series across her college career, graduating in June with a degree in cinema and media studies. "Showing a story develop over time can be powerful," she said. "We hope to get it out there to a lot of people who are and who are not yet interested in science."

Interested attendees can register through Eventbrite.

Adapted from an article by Maureen McMahon posted on the Physical Sciences Division website.

The rest is here:
Docuseries takes viewers into the lives and labs of scientists - UChicago News

Atos takes the most powerful quantum simulator in the world to the next level with Atos QLM E – Stockhouse

Paris, 23 June 2020 - Atos, a global leader in digital transformation, extends its portfolio of quantum solutions with Atos QLM Enhanced (Atos QLM E), a new GPU-accelerated range of its Atos Quantum Learning Machine (Atos QLM) offer, the world's highest-performing commercially available quantum simulator. Offering up to 12 times more computation speed, Atos QLM E paves the way to optimized digital quantum simulation on the first, intermediate-scale quantum computers to be commercialized in the next few years (called NISQ - Noisy Intermediate-Scale Quantum).

By promising to apply, in the near-term, computation capabilities that are beyond the reach of even the most powerful existing computers to solve complex, real-life problems, NISQ devices will play an important role in determining the commercial potential of quantum computing. Herein lies a double challenge for the industry: developing NISQ-optimized algorithms is as important as building the machines, since both are required to identify concrete applications.

Integrating NVIDIA's V100S PCIe GPUs, Atos QLM E has been optimized to drastically reduce the simulation time of hybrid classical-quantum algorithms, leading to quicker progress in application research. It will allow researchers, students and engineers to leverage some of the most promising variational algorithms (like VQE or QAOA) to further explore models fostering new drug discovery, tackling pollution with innovative materials, or better anticipating climate change and severe weather phenomena.
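The hybrid classical-quantum pattern behind variational algorithms such as VQE is simple to sketch: a classical optimizer repeatedly proposes circuit parameters, and a (here, classically simulated) quantum device returns the energy of the resulting state. The toy example below minimizes the energy of a single-qubit Pauli-Z "Hamiltonian"; it illustrates the general idea only and is not Atos QLM E code.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli-Z "Hamiltonian" for a single qubit (a toy stand-in for a real problem).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])


def prepare_state(theta):
    # RY(theta)|0>, simulated classically: a one-parameter "quantum" ansatz.
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])


def energy(params):
    # The "quantum" step: evaluate <psi(theta)|H|psi(theta)> on the simulator.
    psi = prepare_state(params[0])
    return float(psi.conj() @ Z @ psi)


# The "classical" step: an ordinary optimizer proposes new parameters.
result = minimize(energy, x0=[0.1], method="COBYLA")
print(f"theta = {result.x[0]:.3f}, energy = {result.fun:.3f}")  # energy approaches -1 near theta = pi
```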

Bob Sorensen, Chief Analyst for Quantum Computing at Hyperion Research, said: "Atos continues to play a key role in the advancement of the quantum computing sector by offering yet another world-class digital quantum simulator with increasingly powerful capabilities, this time through the inclusion of leading-edge NVIDIA GPUs. This latest Atos QLM offering uses a quantum hardware agnostic architecture that is well suited to support faster development of new quantum systems and related architectures as well as new and innovative quantum algorithms, architectures, and use cases. Since launching the first commercially available quantum system in 2017, Atos has concentrated its efforts on helping an increasing base of users better explore a wide range of practical business and scientific applications, a critical requirement for the overall advancement and long-term viability of the quantum computing sector writ large. The launch of the Atos QLM E is an exciting step for Atos but also for its clients and potential new end users, both of whom could benefit from access to these leading-edge digital quantum simulation capabilities."

Agnès Boudot, Senior Vice President, Head of HPC & Quantum at Atos, explained: "We are proud to help imagine tomorrow's quantum applications. As we are entering the NISQ era, the search for concrete problems that can be solved by quantum computing technologies becomes critical, as it will determine the role they will play in helping society shape a better future. Combining unprecedented simulation performance and a programming and execution environment for hybrid algorithms, Atos QLM E represents a major step towards achieving near-term breakthroughs."

Atos QLM E is available in six configurations, ranging from 2 to 32 NVIDIA V100S PCIe GPUs. Atos QLM customers have the possibility to upgrade to Atos QLM E at any moment.

The Atos QLM user community continues to grow. Launched in 2017, this platform is being used in numerous countries worldwide including Austria, Finland, France, Germany, India, Italy, Japan, the Netherlands, Senegal, the UK and the United States, empowering major research programs in various sectors like industry or energy. Atos' ambitious program to anticipate the future of quantum computing, the 'Atos Quantum' program, was launched in November 2016. As a result of this initiative, Atos was the first organization to offer a quantum noisy simulation module within its Atos QLM offer.

***

About Atos: Atos is a global leader in digital transformation with 110,000 employees in 73 countries and annual revenue of € 12 billion. European number one in Cloud, Cybersecurity and High-Performance Computing, the Group provides end-to-end Orchestrated Hybrid Cloud, Big Data, Business Applications and Digital Workplace solutions. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos|Syntel, and Unify. Atos is a SE (Societas Europaea), listed on the CAC40 Paris stock index.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

Press contact

Marion Delmas | marion.delmas@atos.net | +33 6 37 63 91 99 |

Read more here:
Atos takes the most powerful quantum simulator in the world to the next level with Atos QLM E - Stockhouse

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. – DocWire…

This article was originally published here

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts.

PLoS Med. 2020 Jun;17(6):e1003149

Authors: Atabaki-Pasdar N, Ohlsson M, Viuela A, Frau F, Pomares-Millan H, Haid M, Jones AG, Thomas EL, Koivula RW, Kurbasic A, Mutie PM, Fitipaldi H, Fernandez J, Dawed AY, Giordano GN, Forgie IM, McDonald TJ, Rutters F, Cederberg H, Chabanova E, Dale M, Masi F, Thomas CE, Allin KH, Hansen TH, Heggie A, Hong MG, Elders PJM, Kennedy G, Kokkola T, Pedersen HK, Mahajan A, McEvoy D, Pattou F, Raverdy V, Hussler RS, Sharma S, Thomsen HS, Vangipurapu J, Vestergaard H, t Hart LM, Adamski J, Musholt PB, Brage S, Brunak S, Dermitzakis E, Frost G, Hansen T, Laakso M, Pedersen O, Ridderstrle M, Ruetten H, Hattersley AT, Walker M, Beulens JWJ, Mari A, Schwenk JM, Gupta R, McCarthy MI, Pearson ER, Bell JD, Pavo I, Franks PW

Abstract

BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) is highly prevalent and causes serious health complications in individuals with and without type 2 diabetes (T2D). Early diagnosis of NAFLD is important, as this can help prevent irreversible damage to the liver and, ultimately, hepatocellular carcinomas. We sought to expand etiological understanding and develop a diagnostic tool for NAFLD using machine learning.

METHODS AND FINDINGS: We utilized the baseline data from IMI DIRECT, a multicenter prospective cohort study of 3,029 European-ancestry adults recently diagnosed with T2D (n = 795) or at high risk of developing the disease (n = 2,234). Multi-omics (genetic, transcriptomic, proteomic, and metabolomic) and clinical (liver enzymes and other serological biomarkers, anthropometry, measures of beta-cell function, insulin sensitivity, and lifestyle) data comprised the key input variables. The models were trained on MRI-image-derived liver fat content (<5% or ≥5%) available for 1,514 participants. We applied LASSO (least absolute shrinkage and selection operator) to select features from the different layers of omics data and random forest analysis to develop the models. The prediction models included clinical and omics variables separately or in combination. A model including all omics and clinical variables yielded a cross-validated receiver operating characteristic area under the curve (ROCAUC) of 0.84 (95% CI 0.82, 0.86; p < 0.001), which compared with a ROCAUC of 0.82 (95% CI 0.81, 0.83; p < 0.001) for a model including 9 clinically accessible variables. The IMI DIRECT prediction models outperformed existing noninvasive NAFLD prediction tools. One limitation is that these analyses were performed in adults of European ancestry residing in northern Europe, and it is unknown how well these findings will translate to people of other ancestries and exposed to environmental risk factors that differ from those of the present cohort. Another key limitation of this study is that the prediction was done on a binary outcome of liver fat quantity (<5% or ≥5%) rather than a continuous one.

CONCLUSIONS: In this study, we developed several models with different combinations of clinical and omics data and identified biological features that appear to be associated with liver fat accumulation. In general, the clinical variables showed better prediction ability than the complex omics variables. However, the combination of omics and clinical variables yielded the highest accuracy. We have incorporated the developed clinical models into a web interface (see: https://www.predictliverfat.org/) and made it available to the community.

TRIAL REGISTRATION: ClinicalTrials.gov NCT03814915.

PMID: 32559194 [PubMed as supplied by publisher]
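For readers who want a feel for the modeling recipe described above (penalized feature selection followed by a random forest, scored by cross-validated ROC AUC), here is a minimal, illustrative scikit-learn sketch on synthetic data. It is not the IMI DIRECT pipeline, and every name and number in it is a placeholder.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for multi-omics plus clinical features and a binary
# liver-fat label (<5% vs >=5%); the real study used measured cohort data.
X, y = make_classification(n_samples=1500, n_features=200, n_informative=15, random_state=0)

model = Pipeline([
    # L1-penalized selection plays the role of LASSO feature selection.
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("forest", RandomForestClassifier(n_estimators=500, random_state=0)),
])

# Cross-validated ROC AUC, the metric reported in the paper.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {auc.mean():.2f}")
```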

Read more from the original source:
Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. - DocWire...