Learning to Trust AI in Troubled Times – AiThority

As budgets tighten amidst a global crisis, marketers are scrambling to find better sources of truth. Whether it's prospecting performance, campaign management, or audience optimisation, there are many areas of potential success in the programmatic landscape. To meet this need, programmatic advertising is increasingly being driven by machine learning. So why would anyone doubt machine learning?

Machine learning models are, in many ways, expert liars. Machine learning optimises by any means necessary, and if blurring the truth or taking irrelevant information into account helps to optimise, then that is what occurs. It's scary to think how much an unchecked model could get away with in the fast-paced world of programmatic, where seconds count.

In fact, Artificial Intelligence (AI) researcher Sandra Wachter calls machine learning algorithms "black boxes", saying: "There is a lack of transparency around machine learning decisions and they're not necessarily justified or legitimate."

So, how can anyone ensure a machine learning model is telling the truth? The best way is to treat the model like a job interview candidate; that is, any statements made should be treated with the due amount of scepticism, and facts must always be checked.

When it comes to performance, everyone wants more of it. However, while a model might offer better performance at face value, it's important to ask exactly how that performance is measured.

Machine learning technology can be time-consuming and expensive, and it's remarkably easy to waste money on a bad algorithm. Having solid proof that a model works is a great way to avoid wasting budget. Fact-checking and asking for more evidence is vital if results are in doubt, and if the model vendor can't offer access to an analyst who can back up the numbers with the underlying work, move on.

Just because all data is accessible doesn't mean it should be used, or that each point of data is as important as another.

Is knowing whether someone has bought a product before as important as the colour of their socks? If all data in the machine learning model is being used, marketers must ask how and why. Why is all of the data used? Why is all this important? What tests were run to prove it? Is the model even allowed to use all the data?

Everyone's familiar with the concept and purpose of GDPR and similar global legislation. So make sure to ask how data is being used, or run the risk of severe fines.

Brands have clear metrics to hit, and it's the job of client services, together with data engineering, to ensure the machine learning optimises towards the KPIs. However, the beauty of machine learning is that it frees up the client services team to do more than just achieve the brand's KPIs; it can help brands achieve business goals, too.

With thousands of successful campaigns under their belts, client services know what works and what doesn't. Users should expect to be able to contact a specialist at any time to make sure the model is doing what the client wants.

When talking about purchasing machine learning with a vendor who can't (or won't) answer your questions, it's time to bail. Marketers must feel empowered to ask any and all questions of vendors, and just like a job interview, if the answer isn't a good fit then neither is the candidate.

Not knowing about or not understanding machine learning is acceptable. What's not acceptable is being told that machine learning can't be questioned because it "just does it". In order to innovate, especially in volatile environments, everyone needs to better understand machine learning, and to achieve this, a two-way conversation is vital.

Silverbullet is a new breed of data-smart marketing services, designed to empower businesses through a unique hybrid of data services, insight-informed content and programmatic, blending artificial intelligence and human experience.

More about Silver Bullet: http://www.wearesivlerbullet.com


The rest is here:
Learning to Trust AI in Troubled Times - AiThority

AI used to predict Covid-19 patients’ decline before proven to work – STAT

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help, in some cases cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.
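To make these zones concrete, here is a minimal sketch of how published thresholds like the ones above could be turned into triage logic. The 38 to 55 middle zone comes from the Parkview analysis and the first-48-hours rule echoes the Michigan findings, but the exact low and high cutoffs and the function itself are illustrative assumptions, not Epic's implementation or either hospital's protocol.

```python
# Illustrative triage sketch only; the cutoffs below and above the published
# 38-55 middle zone are assumptions standing in for Michigan's unpublished
# "margins of the scale".
def triage(score: float, hours_hospitalized: float) -> str:
    if score < 38:
        # Michigan: patients whose scores stayed low through the first 48
        # hours were unlikely to experience a life-threatening event
        if hours_hospitalized >= 48:
            return "low risk: consider transfer to a field hospital"
        return "low score so far: keep observing through 48 hours"
    if score <= 55:
        # Parkview: 75% of patients scoring 38-55 eventually went to the ICU
        return "middle zone: closer monitoring; weigh transfer to an ICU site"
    # Michigan: high-end scorers were much more likely to need ICU care
    return "high score: monitor closely and plan for ICU-level care"

print(triage(47, 24))  # -> middle zone: closer monitoring; ...
```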

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgement," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk scores they provide. Can clinicians take actions that will prevent the deterioration from happening? If not, do the scores give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.
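The fluctuation problem is easy to visualize. Below is a generic sketch, not Epic's or Michigan's actual method, showing how a single 15-minute reading can disagree with the recent trend; a trailing median is one standard way to damp the sawtooth. The score series is made up for illustration.

```python
# Made-up 15-minute deterioration-index readings showing a sawtooth pattern.
import statistics

scores = [70, 30, 40, 65, 35, 45, 60, 38, 52]

def rolling_median(xs, window=5):
    # median over the trailing window (~75 minutes at 15-minute intervals)
    return [statistics.median(xs[max(0, i - window + 1):i + 1])
            for i in range(len(xs))]

print(scores[-1])                  # latest point-in-time reading: 52
print(rolling_median(scores)[-1])  # trailing-median trend estimate: 45
```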

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."

See the original post here:
AI used to predict Covid-19 patients' decline before proven to work - STAT

Rashed Ali Almansoori emphasizes on how Artificial Intelligence and Machine Learning will turn out to be game-changers – IBG NEWS


To stay in the race, it is important to evolve with time. Technology is booming, and the new age has seen many changes. Since the world met the internet, things have changed dramatically: from cellphones to smartphones, and from computers to portable laptops, with social media taking over everything. Facebook was once considered only a chat platform; now it has become a medium for making money by creating content. Besides this, there are many other platforms, like YouTube, TikTok, and Instagram, on which to earn millions. One of the key social media players, Rashed Ali Almansoori, is a digital genius with years of experience.

He is a tech blogger who believes in keeping up with the latest trends. Being a digital creator, Rashed loves to create meaningful yet informative content about technology. "Authenticity is the key to establishing your target audience over the web," says the blogger. His other expertise includes web development, web design, SEO, and promoting brands across the digital domain. Rashed states that many businesses have taken the digital route, considering the popularity social media has gained in the last decade. The coming decade will see many other innovations, of which Artificial Intelligence will be the main highlight.

The digital expert is currently learning the fundamentals of Artificial Intelligence (AI) and Machine Learning (ML). It would not be a surprise if machines perform tasks more effectively than humans in the coming years. "Upgrading yourself to stay in the game is the only solution," says Rashed. By taking the courses, he aims to integrate AI and ML into his work. Bringing novelty to his work is what the blogger is doing, and it will benefit him in the future. Over the past year, the 29-year-old techie has built a strong image on social media, and his website garners millions of visitors from the Middle East and other countries.

See more here:
Rashed Ali Almansoori emphasizes on how Artificial Intelligence and Machine Learning will turn out to be game-changers - IBG NEWS

Global Machine Learning in Education Market 2020 Study of Growing Trends, Future Scope, New investment, Regional Analysis, Upcoming Business…

The report, entitled Machine Learning in Education Market: Global Industry Analysis 2020-2026, is a comprehensive research study presenting significant data, by Reportspedia.com.

Global Machine Learning in Education Market 2020 Industry Research Report offers market size, industry growth, share, investment plans and strategies, development trends, business ideas, and forecasts to 2026. The report presents an exhaustive study of the major market, covering the present and forecast market scenario to support business decisions.

The Machine Learning in Education business report combines primary research with an all-inclusive investigation of qualitative as well as quantitative perspectives from various industry specialists and key opinion leaders, to gain a deeper understanding of industry performance. [Request the COVID-19 impact on this market.] The report gives a realistic picture of the current industrial situation, incorporating historical and anticipated market size in terms of value and volume, technological advancement, and macroeconomic and governing factors in the market.

Top Key Manufacturers in the Machine Learning in Education Industry Report:

IBM, Microsoft, Google, Amazon, Cognizant, Pearson, Bridge-U, DreamBox Learning, Fishtree, Jellynote, Quantum Adaptive Learning

For Better Insights Go With This Free Sample Report Enabled With Respective Tables and Figures: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #request_sample

Machine Learning in Education Market segment by Type-

Cloud-Based, On-Premise

Machine Learning in Education Market segment by Application-

Intelligent Tutoring Systems, Virtual Facilitators, Content Delivery Systems, Interactive Websites, Others

(***Our FREE SAMPLE COPY of the report gives a brief introduction to the research report outlook, TOC, list of tables and figures, an outlook on key players of the market, and covers key regions.***)

The report offers a multi-step view of the Global Machine Learning in Education Market. The first section gives an overview of the market. This passage includes definitions, classifications, the industry chain structure as a whole, and various segmentations on the basis of solution, product, and region, along with the different geographic regions of the global market. This section also integrates an all-inclusive analysis of the different government strategies and expansion plans that influence the market, its cost structures, and industrial processes. The second subdivision of the report includes analytics on the Global Machine Learning in Education Market based on its revenue size in terms of value and volume.


Machine Learning in Education Market Regional Analysis: North America (United States, Canada), Europe (Germany, Spain, France, UK, Russia, and Italy), Asia-Pacific (China, Japan, India, Australia, and South Korea), Latin America (Brazil, Mexico, etc.), and the Middle East and Africa (GCC and South Africa).

We have designed the Machine Learning in Education report with a group of graphical representations, tables, and figures that portray a detailed picture of the Machine Learning in Education industry. In addition, the report has a clear objective of identifying probable stakeholders of the company. By highlighting the business chain framework, it explicitly offers an executive summary of market evolution, making it easier to identify the obstacles and the encouraging profit statistics. From a competitive perspective, this report provides a broad array of features essential for measuring current market performance, along with technological advancements, business abstracts, strengths and weaknesses of market position, and the hurdles crossed by the leading market players to gain their leading positions.

For more actionable insights into the competitive landscape of global Machine Learning in Education market, get a customized report here: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #inquiry_before_buying

Some Notable Report Offerings:

The Table of Contents overview gives an exact idea of the international Machine Learning in Education market report:

Chapter 1 describes the report's key market overview, product cost structure and analysis, and the market size and scope forecast from 2017 to 2026, as well as market dynamics, factors affecting business expansion, and a deep study of emerging and existing market players.

Chapter 2 displays the top manufacturers of the Machine Learning in Education market, with their sales, revenue, and market share. Furthermore, the report analyses the industry's import and export scenario, demand and supply ratio, labor costs, raw material supply, production costs, marketing sources, and downstream consumers.

Chapters 3, 4, and 5 provide a competitive analysis based on product type, region-wise consumption and import/export analysis, the compound annual growth rate of the market, and a forecast study from 2017 to 2026.

Chapter 6 gives an in-depth study of Machine Learning in Education business channels, market investors, vendors, distributors, merchants, market opportunities, and risks.

Chapter 7 presents the Machine Learning in Education market research findings and conclusion.

Chapter 8 provides the Machine Learning in Education appendix.

To Analyze Details Of Table Of Content (TOC) of Machine Learning in Education Market Report, Visit Here: https://www.reportspedia.com/report/technology-and-media/global-machine-learning-in-education-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/30939 #table_of_contents

Continue reading here:
Global Machine Learning in Education Market 2020 Study of Growing Trends, Future Scope, New investment, Regional Analysis, Upcoming Business...

‘Err On The Side Of Patient Care’: Doctors Turn To Untested Machine Learning To Monitor Virus – Kaiser Health News

Physicians are prematurely relying on Epic's deterioration index, saying they're unable to wait for a validation process that can take months or years. The artificial intelligence gives them a snapshot of a patient's illness and helps them determine who needs more careful monitoring. Technology news also comes from Verily, Google, MIT, Livongo, and more.

Stat: AI Used To Predict Covid-19 Patients' Decline Before Proven To Work. Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease. The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics. Covid-19 is not giving them that luxury. (Ross, 4/24)

Modern Healthcare: Verily, Google Cloud Develop COVID-19 Chatbot For Hospitals. Google's sister company Verily Life Sciences has joined the mix of companies offering COVID-19 screening tools that hospitals can add to their websites. The screener, called the COVID-19 Pathfinder, takes the form of a chatbot or voicebot, essentially personified computer programs that can instant-message or speak to human users in plain English. (Cohen, 4/23)

Boston Globe: Tech From MIT May Allow Caregivers To Monitor Coronavirus Patients From A Distance. A product developed at the Massachusetts Institute of Technology is being used to remotely monitor patients with COVID-19, using wireless signals to detect breathing patterns of people who do not require hospitalization but who must be watched closely to ensure their conditions remain stable. The device, developed at MIT's Computer Science and Artificial Intelligence Laboratory by professor Dina Katabi and her colleagues, could in some situations lower the risk of caregivers becoming infected while treating patients with the coronavirus. (Rosen, 4/23)

Stat: A Gulf Emerges In Health Tech: Some Companies Surge, Others Have Layoffs. You might expect them to be pandemic-proof: they're the companies offering glimpses of a future in which you don't have to go to the doctor's office, ones that would seem to be insulated from a crisis in which people aren't leaving their homes. Yet there's a stark divide emerging among the companies providing high-demand virtual health care, triage, and testing services. While some are hiring up and seeing their stock prices soar, others are furloughing and laying off their workers. (Robbins and Brodwin, 4/24)

Go here to read the rest:
'Err On The Side Of Patient Care': Doctors Turn To Untested Machine Learning To Monitor Virus - Kaiser Health News

One Supercomputer's HPC And AI Battle Against The Coronavirus – The Next Platform

Normally, supercomputers installed at academic and national laboratories get configured once, acquired as quickly as possible before the money runs out, installed and tested, qualified for use, and put to work for a four- or five-year, or possibly longer, tour of duty. It is a rare machine that is upgraded even once, much less a few times.

But that is not the case with the Corona system at Lawrence Livermore National Laboratory, which was commissioned in 2017, when North America had a total solar eclipse, hence its nickname. While this machine, procured under the Commodity Technology Systems (CTS-1) contract to not only do useful work but also to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic that is now spreading around the Earth, the machine is being upgraded one more time to be put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the number very likely being higher) and killed at least 193,000 people worldwide.

The Corona system was built by Penguin Computing, which has a long-standing relationship with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories (the so-called Tri-Labs), which are part of the US Department of Energy and coordinate on their supercomputer procurements. The initial Corona machine installed in 2018 had 164 compute nodes, each equipped with a pair of Naples Epyc 7401 processors, which have 24 cores each running at 2 GHz with an all-core turbo boost of 2.8 GHz. The Penguin Tundra Extreme servers that comprise this cluster have 256 GB of main memory and 1.6 TB of PCI-Express flash. When the machine was installed in November 2018, half of the nodes were equipped with four of AMD's Radeon Instinct MI25 GPU accelerators, which had 16 GB of HBM2 memory each and which had 768 gigaflops of FP64 performance, 12.29 teraflops of FP32 performance, and 24.6 teraflops of FP16 performance. The 7,872 CPU cores in the system delivered 126 teraflops at FP64 double precision all by themselves, and the Radeon Instinct MI25 GPU accelerators added another 251.9 teraflops at FP64 double precision. The single precision performance for the machine was obviously much higher, at 4.28 petaflops across both the CPUs and GPUs. Interestingly, this machine was equipped with 200 Gb/sec HDR InfiniBand switching from Mellanox Technologies, which was obviously one of the earliest installations of this switching speed.

In November last year, just before the coronavirus outbreak (or at least we think that was before the outbreak; that may turn out not to be the case), AMD and Penguin worked out a deal to install four of the much more powerful Radeon Instinct MI60 GPU accelerators, based on the 7 nanometer Vega GPUs, in the 82 nodes in the system that didn't already have GPU accelerators in them. The Radeon Instinct MI60 has 32 GB of HBM2 memory, and has 6.6 teraflops of FP64 performance, 13.3 teraflops of FP32 performance, and 26.5 teraflops of FP16 performance. Now the machine has 8.9 petaflops of FP32 performance and 2.54 petaflops of FP64 performance, a much more balanced 64-bit to 32-bit performance ratio, and it makes these nodes more useful for certain kinds of HPC and AI workloads. Which turns out to be very important to Lawrence Livermore in its fight against the COVID-19 disease.
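As a back-of-the-envelope check, those figures are internally consistent. The sketch below assumes 8 FP64 operations per core per cycle for the Naples Epyc chips (an assumption, though it reproduces the article's 126 teraflops CPU figure exactly) and simply multiplies the per-card GPU ratings by the 328 cards of each type.

```python
# Consistency check on the Corona system's quoted FP64 numbers.
cores = 164 * 2 * 24                  # 164 nodes x 2 sockets x 24 cores = 7,872
cpu_tf = cores * 2.0e9 * 8 / 1e12     # 8 FP64 ops/cycle at 2 GHz -> ~126 TF
mi25_tf = 82 * 4 * 0.768              # 328 MI25s at 768 GF each -> ~251.9 TF
mi60_tf = 82 * 4 * 6.6                # 328 MI60s at 6.6 TF each -> ~2,164.8 TF
total_pf = (cpu_tf + mi25_tf + mi60_tf) / 1000
print(cores, round(cpu_tf), round(mi25_tf, 1), round(total_pf, 2))
# -> 7872 126 251.9 2.54  (matching the 2.54 petaflops FP64 quoted above)
```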

To find out more about how the Corona system and others are being deployed in the fight against COVID-19, and how HPC and AI workloads are being intertwined in that fight, we talked to Jim Brase, deputy associate director for data science at Lawrence Livermore.

Timothy Prickett Morgan: It is kind of weird that this machine was called Corona. Foreshadowing is how you tell the good literature from the cheap stuff. The doubling of performance that just happened late last year for this machine could not have come at a better time.

Jim Brase: It pretty much doubles the overall floating point performance of the machine, which is great because what we are mainly running on Corona is both the molecular dynamics calculations of various viral and human protein components and then machine learning algorithms for both predictive models and design optimization.

TPM: That's a lot more oomph. So what specifically are you doing with it in the fight against COVID-19?

Jim Brase: There are two basic things we're doing as part of the COVID-19 response, and this machine is almost entirely dedicated to this, although several of our other clusters at Lawrence Livermore are involved as well.

We have teams that are doing both antibody and vaccine design. They are mainly focused on therapeutic antibodies right now. They are basically designing proteins that will interact with the virus or with the way the virus interacts with human cells. That involves hypothesizing different protein structures and computing what those structures actually look like in detail, then computing, using molecular dynamics, the interaction between those protein structures and the viral proteins or the viral and human cell interactions.

With this machine, we do this iteratively to basically design a set of proteins. We have a bunch of metrics that we try to optimize (binding strength, the stability of the binding, stuff like that) and then we do detailed molecular dynamics calculations to figure out the effective energy of those binding events. These metrics determine the quality of the potential antibody or vaccine that we design.

TPM: To wildly oversimplify, this SARS-CoV-2 virus is a ball of fat with some spikes on it that wreaks havoc as it replicates using our cells as raw material. This is a fairly complicated molecule at some level. What are we trying to do? Stick goo to it to try to keep it from replicating or tear it apart or dissolve it?

Jim Brase: In the case of antibodies, which is what we're mostly focusing on right now, we are actually designing a protein that will bind to some part of the virus, and because of that the virus then changes its shape, and the change in shape means it will not be able to function. These are little molecular machines that depend on their shape to do things.

TPM: There's not something that will physically go in and tear it apart like a white blood cell eats stuff.

Jim Brase: No. That's generally done by biology, which comes in after this and cleans up. What we are trying to do is make what we call neutralizing antibodies. They go in and bind, and then the virus can't do its job anymore.

TPM: And just for a reference, what is the difference between a vaccine and an antibody?

Jim Brase: In some sense, they are the opposite of each other. With a vaccine, we are putting in a protein that actually looks like the virus but doesn't make you sick. It stimulates the human immune system to create its own antibodies to combat that virus. And those antibodies produced by the body do exactly the same thing we were just talking about. Producing antibodies directly is faster, but the effect doesn't last. So it is more of a medical treatment for somebody who is already sick.

TPM: I was alarmed to learn that for certain coronaviruses, immunity doesn't really last very long. With the common cold, the reason we get them is not just because they change every year, but because if you didn't have a bad version of it, you don't generate a lot of antibodies and therefore you are susceptible. If you have a very severe cold, you generate antibodies and they last for a year or two. But then you're done and your body stops looking for that fight.

Jim Brase: The immune system is very complicated, and for some things it creates antibodies that remember them for a long time. For others, it's much shorter. It's sort of a combination of what we call the antigen (the thing, the virus or whatever, that triggers it) and the immune system's memory function together that causes the immunity not to last as long. It's not well understood at this point.

TPM: What are the programs you're using to do the antibody and protein synthesis?

Jim Brase: We are using a variety of programs. We use GROMACS, we use NAMD, we use OpenMM stuff. And then we have some specialized homegrown codes that we use as well, which operate on the data coming from these programs. But it's mostly the general, open source molecular mechanics and molecular dynamics codes.

TPM: Let's contrast this COVID-19 effort with something like the SARS outbreak in 2003. Say you had the same problem. Could you have done the things you are doing today with SARS-CoV-2 back then with SARS? Was it even possible to design proteins and do enough of them to actually have an impact, to get the antibody therapy or develop the vaccine?

Jim Brase: A decade ago, we could do single calculations. We could do them one, two, three. But what we couldn't do was iterate it as a design optimization. Now we can run enough of these fast enough that we can make this part of an actual design process, where we are computing these metrics, then adjusting the molecules. And we have machine learning approaches now that we didn't have ten years ago that allow us to hypothesize new molecules, and then we run the detailed physics calculations against those, and we do that over and over and over.

TPM: So not only do you have a specialized homegrown code that takes the output of these molecular dynamics programs, but you are using machine learning as a front end as well.

Jim Brase: We use machine learning in two places. Even with these machines (and we are using our whole spectrum of systems on this effort) we still can't do enough molecular dynamics calculations, particularly the detailed molecular dynamics that we are talking about here. What does the new hardware allow us to do? It basically allows us to do a higher percentage of detailed molecular dynamics calculations, which give us better answers, as opposed to more approximate calculations. So you can decrease the granularity size, and we can compute whole molecular dynamics trajectories as opposed to approximate free energy calculations. It allows us to go deeper on the calculations and do more of them. So ultimately, we get better answers.

But even with these new machines, we still can't do enough. If you think about the design space on, say, a protein that is a few hundred amino acids in length, and at each of those positions you can put in 20 different amino acids, you get on the order of 20^200 possible proteins to evaluate in a brute-force search. You can't do that.

So we try to be smart about how we select where those simulations are done in that space, based on what we are seeing. And then we use the molecular dynamics to generate datasets that we then train machine learning models on so that we are basically doing very smart interpolation in those datasets. We are combining the best of both worlds and using the physics-based molecular dynamics to generate data that we use to train these machine learning algorithms, which allows us to then fill in a lot of the rest of the space because those can run very, very fast.
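A toy sketch of the loop described here, under stated assumptions: the descriptor vectors, the stub "molecular dynamics" energy function, and the choice of regressor are all invented for illustration, but the shape of the workflow (expensive physics labels a small sample, a fast surrogate screens the rest) follows the description above.

```python
# Toy surrogate-model loop: label a few candidates with "expensive" physics,
# train a fast model on those labels, then screen a much larger set.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def md_binding_energy(x):
    # stand-in for a detailed molecular dynamics free-energy calculation
    return float(np.sin(x @ np.arange(1, 11) * 0.1)) + 0.01 * rng.normal()

candidates = rng.normal(size=(100_000, 10))   # hypothetical descriptor vectors
sampled = candidates[:500]                    # the few we can afford to simulate
labels = np.array([md_binding_energy(x) for x in sampled])

surrogate = GradientBoostingRegressor().fit(sampled, labels)
scores = surrogate.predict(candidates)        # milliseconds, not hours, each
best = candidates[np.argsort(scores)[:100]]   # send the best back to full MD
print(best.shape)
```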

TPM: You couldn't do all of that stuff ten years ago? And SARS did not create the same level of outbreak that SARS-CoV-2 has.

Jim Brase: No, these are all fairly new ideas.

TPM: So, in a sense, we are lucky. We have the resources at a time when we need them most. Did you have the code all ready to go for this? Were you already working on this kind of stuff and then COVID-19 happened or did you guys just whip up these programs?

Jim Brase: No, no, no, no. We've been working on this kind of stuff for a few years.

TPM: Well, thank you. I'd like to personally thank you.

Jim Brase: It has been an interesting development. It has been both in the biology space and the physics space, and those two groups have set up a feedback loop back and forth. I have been running a consortium called Advanced Therapeutic Opportunities in Medicine, or ATOM for short, to do just this kind of stuff for the last four years. It started up as part of the Cancer Moonshot in 2016 and focused on accelerating cancer therapeutics using the same kinds of ideas, where we are using machine learning models to predict properties, using mechanistic simulations like molecular dynamics combined with data, but then also using it the other way around. We also use machine learning to actually hypothesize new molecules: given a set of molecules that we have right now, whose computed properties aren't quite what we want, how do we tweak those molecules a little bit to adjust their properties in the directions that we want?

The problem with this approach is scale. Molecules are atoms that are bonded with each other. You could just take out an atom, add another atom, change a bond type, or something. The problem with that is that every time you do that randomly, you almost always get an illegal molecule. So we train these machine learning algorithms (these are generative models) to actually be able to generate legal molecules that are close to a set of molecules that we have, but a little bit different, and with properties that are probably a little bit closer to what we want. And so that allows us to smoothly adjust the molecular designs to move towards the optimization targets that we want. If you think about optimization, what you want are things with smooth derivatives. If you do this in the discrete atom-bond space, you don't have smooth derivatives. But if you do it in what we call learned latent spaces that we get from generative models, then you can actually have a smooth response in terms of the molecular properties. And that's what we want for optimization.
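A toy version of that latent-space argument, with the decoder and property predictor stubbed out as smooth functions (both invented here, not ATOM's actual models): plain gradient ascent on the latent vector moves the design continuously, which discrete atom and bond edits cannot do.

```python
# Toy gradient ascent in a "learned latent space"; decoder and property
# predictor are smooth stand-ins for trained generative/property models.
import numpy as np

def decoder(z):                     # stub: latent vector -> molecular features
    return np.tanh(z)

def prop(features):                 # stub: predicted property to maximize
    return -float(np.sum((features - 0.5) ** 2))

def optimize(z, steps=200, lr=0.05, eps=1e-4):
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):     # numerical gradient, fine for a toy
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (prop(decoder(z + dz)) - prop(decoder(z - dz))) / (2 * eps)
        z = z + lr * grad
    return z

z0 = np.random.default_rng(0).normal(size=8)  # start near "known" designs
z1 = optimize(z0)
print(prop(decoder(z0)), prop(decoder(z1)))   # the property improves smoothly
```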

The other part of the machine learning story here is these new types of generative models: variational autoencoders, generative adversarial models, the things you hear about that generate fake data, and so on. We're actually using those very productively to imagine new types of molecules with the kinds of properties that we want for this. And that's something we were absolutely doing before COVID-19 hit. We have taken projects like the ATOM cancer project and other work we've been doing with DARPA and other places, focused on different diseases, and refocused those on COVID-19.

One other thing I wanted to mention is that we haven't just been applying this to biology. A lot of these ideas are coming out of physics applications. One of our big things at Lawrence Livermore is laser fusion. We have 192 huge lasers at the National Ignition Facility to try to create fusion in a small hydrogen-deuterium target. There are a lot of design parameters that go into that. The targets are really complex. We are using the same approach. We're running mechanistic simulations of the performance of those targets, and we are then improving those with real data using machine learning. So now we have a hybrid model that has physics in it and machine learning data models, and we are using that to optimize the designs of the laser fusion target. That's led us to a whole new set of approaches to fusion energy.

Those same methods are actually the things we're also applying to molecular design for medicines. And the two go back and forth and sort of feed on and support each other. In the last few weeks, some of the teams that have been working on the physics applications have jumped over onto the biology side and are using some of the same sort of complex workflows, which they've developed for physics on these big parallel machines, applying those to some of the biology applications and helping to speed up the applications on this new hardware that's coming in. So it is a really nice synergy going back and forth.

TPM: I realize that machine learning software uses the GPUs for training and inference, but is the molecular dynamics software using the GPUs, too?

Jim Brase: All of the molecular dynamics software has been set up to use GPUs. The code actually maps pretty naturally onto the GPU.

TPM: Are you using the CUDA variants of the molecular dynamics software, and I presume that it is using the Radeon Open Compute, or ROCm, stack from AMD to translate that code so it can run on the Radeon Instinct accelerators?

Jim Brase: There has been some work to do, but it works. It's getting to be pretty solid now. That's one of the reasons we wanted to jump into the AMD technology pretty early, because, you know, any time you do first-in-kind machines it's not always completely smooth sailing all the way.

TPM: It's not like Lawrence Livermore has a history of using novel designs for supercomputers. [Laughter]

Jim Brase: We seldom work with machines that are not Serial 00001 or Serial 00002.

TPM: What's the machine learning stack you use? I presume it is TensorFlow.

Jim Brase: We use TensorFlow extensively. We use PyTorch extensively. We work with the DeepChem group at Stanford University that does an open chemistry package built on TensorFlow as well.

TPM: If you could fire up an exascale machine today, how much would it help in the fight against COVID-19?

Jim Brase: It would help a lot. There's so much to do.

I think we need to show the benefits of computing for drug design, and we are concretely doing that now. Four years ago, when we started up ATOM, everybody thought this was nuts: the general idea that we could lead with computing rather than experiment, and do the experiments to focus on validating the computational models rather than the other way around. Everybody thought we were nuts. As you know, with the growth of data, the growth of machine learning capabilities, more accessibility to sophisticated molecular dynamics, and so on, it's much more accepted that computing is a big part of this. But we still have a long way to go.

The fact is, machine learning is not magic. It's a fancy interpolator. You don't get anything new out of it. With the physics codes, you actually get something new out of it. So the physics codes are really the foundation of this. You supplement them with experimental data, because they're not necessarily right, either. And then you use the machine learning on top of all that to fill in the gaps, because you haven't been able to sample that huge chemical and protein space adequately to really understand everything at either the data level or the mechanistic level.

So that's how I think of it. Data is truth, sort of, and what you also learn about data is that it is not always the same as you go through this. But data is the foundation. Mechanistic modeling allows us to fill in where we just can't measure enough data: it is too expensive, it takes too long, and so on. We fill in with mechanistic modeling, and then above that we fill in with machine learning. We have this stack of experimental truth, mechanistic simulation that incorporates all the physics and chemistry we can, and machine learning to interpolate in those spaces to support the design operation.

For COVID-19, there are a lot of groups doing vaccine designs. Some of them are using traditional experimental approaches, and they are making progress. Some of them are doing computational designs, and that includes the national labs. We've got 35 designs done, and we are experimentally validating those now and seeing where we are with them. It will generally take two to three iterations of design, then experiment, and then adjusting the designs back and forth. And we're in the first round of that right now.

One thing we're all doing, at least on the public side of this, is putting all this data out there openly. So the molecular designs that we've proposed are openly released. Then the validation data that we are getting on those will be openly released. This is so our group, working with other lab groups, university groups, and some of the companies doing this COVID-19 research, can contribute. We are hoping that by being able to look at all the data that all these groups are producing, we can learn faster how to narrow in on the vaccine designs and the antibody designs that will ultimately work.

Continued here:
One Supercomputer's HPC And AI Battle Against The Coronavirus - The Next Platform

Automation, AI, and ML: The Heroes in the World of Payment Fraud Detection – EnterpriseTalk

How are organizations leveraging AI to track fraudulent activity (for example, in the financial industry), and what tools are available to enterprises right now?

Machine Learning is not new in the world of payment fraud. In fact, one of the pioneers of Machine Learning is Professor Leon Cooper, Director of Brown University's Centre for Neural Science. The Centre was founded in 1973 to study animal nervous systems and the human brain. However, if you follow his career, Dr. Cooper's machine learning technology was adapted for spotting fraud on credit cards and is still used today for identifying payment fraud within many financial institutions around the world.


Machine learning technologies improve when they are presented with more and more data. Since there is a lot of payment data around today, payment fraud prevention has become an excellent use case for AI. To date, machine learning technologies have been used mainly by banks. Still, today more and more merchants are taking advantage of this technology to help automate fraud detection, including many retailers and telecommunications companies.

What are the interesting developments in this space for enterprises?

There is a lot of information on how machine learning is helping to understand human behavior and, more specifically, false-positive detection. However, it is our view that there is not enough focus on how automation could benefit the whole end-to-end process, particularly within day-to-day fraud management business processes.

Until now, the fraud detection industry has focused on detecting fraud reactively; it has not focused on proactively evaluating the impact of automation on the whole end-to-end fraud management process. Clearly, the interdependencies between these two activity streams are significant, so the question remains why fraud prevention suppliers aren't considering both.


Fraud is increasing, so at what point do we recognize that the approach of throwing budget at the problem and increasing the number of analysts in our teams is not working, and that we need to consider automating more of the process? Machines don't steal data, so why are the manual processes and interventions not attracting more attention?

It isn't a stretch to imagine most of the fraud risk strategy process becoming automated. Instead of the expanding teams of today performing the same manual task continually, those same staff members could be used to spot enhancements in customer insight. This would enable analysts to thoroughly investigate complex fraud patterns that a machine has not identified, or to assist in other tasks outside of risk management that provide added business value.

Process automation is continuing to innovate and provide increased efficiency and profit gains in the places it's implemented. The automation revolution isn't coming; it's here.

What are the major concerns?

One major concern is the lack of urgency in adopting new ways of working: there is a need to be more agile and innovative to stop the fraudsters from continuing to win. We need to act fast and innovate, but many organizations are struggling to keep up, and the fraudsters are winning.

The use cases are well defined for machine learning and AI, with big data sets and so on, but machine learning alone will not fix poor data management processes. Machines don't steal data. People do.

With the number of digital payments being made across the globe increasing dramatically, how can organizations ensure maximum sales conversion and payment acceptance, whilst mitigating any risk exposure?

Strategy alignment for taking digital payments is critical. The more organizations can operate holistically and not get caught out by silos and operational gaps, the better. Put simply: if key stakeholders in both the sales and marketing and risk teams are working to the same set of key performance indicators (KPIs), then mistakes will be mitigated. Many issues arise due to operational gaps, and those gaps will be exploited by the highly sophisticated and technically advanced modern-day fraudster.


The reality is that technology is accelerating the convergence of business activities. Managing that convergence and adapting your organization to ensure it remains competitive becomes more and more important. Successful organizations with a competitive future will continue to ensure maximum sales conversions and payment acceptance, whilst mitigating any risk exposure, by exploiting best-of-breed technology as much as possible.

Excerpt from:
Automation, AI, and ML: The Heroes in the World of Payment Fraud Detection - EnterpriseTalk

The Dell EMC PowerEdge R7525 Saved Time During Machine Learning Preparation Tasks and Achieved Faster Image Processing Than a HPE ProLiant DL380…

Principled Technologies (PT) ran analytics and synthetic, containerized workloads on a ~$40K Dell EMC PowerEdge R7525 and a similarly priced HPE ProLiant DL380 Gen10 to gauge performance and performance/cost ratio.

To explore the performance on certain machine learning tasks of a ~$40K Dell EMC PowerEdge R7525 server powered by AMD EPYC 7502 processors, the experts at PT set up two testbeds and compared its performance results to those of a similarly priced HPE ProLiant DL380 Gen10 powered by Intel Xeon Gold 6240 processors.

The first study, Finish machine learning preparation tasks on Kubernetes containers in less time with the Dell EMC PowerEdge R7525, utilizes a workload that emulates simple image processing tasks that a company might run in the preparation phase of machine learning.

According to the first study, we found that the Dell EMC server: processed 3.3 million images in 55.8% less time; processed 2.26x the images each second; and had 2.32x the value in terms of image processing rate vs. hardware cost.

The second study, Get better k-means analytics workload performance for your money with the Dell EMC PowerEdge R7525, utilizes a learning algorithm used to mimic data mining that a company might use to improve the customer experience or prevent fraud.

According to the second study, we found that the Dell EMC solution: completed a k-means clustering workload in 40 percent less time; processed 67 percent more data per second; and carried a 74 percent better performance/cost ratio in terms of data processing performance vs. hardware price.
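The two ways each study states its result are consistent with one another, since throughput scales as the inverse of completion time. A quick check using only the percentages quoted above:

```python
# "X% less time" and "Y times the throughput" are the same measurement:
# throughput_ratio = 1 / (1 - time_reduction).
def throughput_ratio(time_reduction):
    return 1.0 / (1.0 - time_reduction)

print(round(throughput_ratio(0.558), 2))  # 2.26 -> "2.26x the images each second"
print(round(throughput_ratio(0.40), 2))   # 1.67 -> "67 percent more data per second"
```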

To explore the results PT found when comparing the two current-gen ~$40K server solutions, read the Kubernetes study here facts.pt/rfcwex2 and the k-means study here facts.pt/0jyo64h.

About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit http://www.principledtechnologies.com.

Company Contact
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
press@principledtechnologies.com

See the article here:
The Dell EMC PowerEdge R7525 Saved Time During Machine Learning Preparation Tasks and Achieved Faster Image Processing Than a HPE ProLiant DL380...

Google’s Head of Quantum Computing Hardware Resigns – WIRED

In late October 2019, Google CEO Sundar Pichai likened the latest result from the company's quantum computing hardware lab in Santa Barbara, California, to the Wright brothers' first flight.

One of the lab's prototype processors had achieved quantum supremacy, evocative jargon for the moment a quantum computer harnesses quantum mechanics to do something seemingly impossible for a conventional computer. In a blog post, Pichai said the milestone affirmed his belief that quantum computers might one day tackle problems like climate change, and the CEO also name-checked John Martinis, who had established Google's quantum hardware group in 2014.

Here's what Pichai didn't mention: soon after the team had first got its quantum supremacy experiment working a few months earlier, Martinis says, he had been reassigned from a leadership position to an advisory one. Martinis tells WIRED that the change led to disagreements with Hartmut Neven, the longtime leader of Google's quantum project.

Martinis resigned from Google early this month. "Since my professional goal is for someone to build a quantum computer, I think my resignation is the best course of action for everyone," he adds.

A Google spokesman did not dispute this account, and says that the company is grateful for Martinis' contributions and that Neven continues to head the company's quantum project. Parent company Alphabet has a second, smaller quantum computing group at its X Labs research unit. Martinis retains his position as a professor at UC Santa Barbara, which he held throughout his tenure at Google, and says he will continue to work on quantum computing.

Google's quantum computing project was founded in 2006 by Neven, who pioneered Google's image search technology, and initially focused on software. To start, the small group accessed quantum hardware from Canadian startup D-Wave Systems, including in collaboration with NASA.


The project took on greater scale and ambition when Martinis joined in 2014 to establish Google's quantum hardware lab in Santa Barbara, bringing along several members of his university research group. His nearby lab at UC Santa Barbara had produced some of the most prominent work in the field over the past 20 years, helping to demonstrate the potential of using superconducting circuits to build qubits, the building blocks of quantum computers.

Qubits are analogous to the bits of a conventional computer, but in addition to representing 1s and 0s, they can use quantum mechanical effects to attain a third state, dubbed a superposition, something like a combination of both. Qubits in superposition can work through some very complex problems, such as modeling the interactions of atoms and molecules, much more efficiently than conventional computer hardware.

How useful that is depends on the number and reliability of qubits in your quantum computing processor. So far the best demonstrations have used only tens of qubits, a far cry from the hundreds or thousands of high-quality qubits experts believe will be needed to do useful work in chemistry or other fields. Google's supremacy experiment used 53 qubits working together. They took minutes to crunch through a carefully chosen math problem that the company calculated would take a supercomputer on the order of 10,000 years, but that does not have a practical application.
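One way to see why even 53 qubits is beyond brute-force simulation: a direct state-vector simulation stores one complex amplitude per basis state, and that count doubles with every added qubit. The arithmetic below is a rough illustration, assuming 16-byte double-precision complex values.

```python
# Memory needed to hold a full 53-qubit state vector.
amplitudes = 2 ** 53                 # one complex amplitude per basis state
bytes_total = amplitudes * 16        # 16 bytes per double-precision complex
print(amplitudes)                    # 9007199254740992 (~9.0e15)
print(bytes_total / 1e15, "PB")      # ~144 petabytes
```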

Martinis leaves Google as the company and its rivals working on quantum computing face crucial questions about the technology's path. Amazon, IBM, and Microsoft, as well as Google, offer their prototype technology to companies such as Daimler and JP Morgan so they can run experiments. But those processors are not large enough to work on practical problems, and it is not clear how quickly they can be scaled up.

When WIRED visited Google's quantum hardware lab in Santa Barbara last fall, Martinis responded optimistically when asked if his hardware team could see a path to making the technology practical. "I feel we know how to scale up to hundreds and maybe thousands of qubits," he said at the time. Google will now have to do it without him.

Continue reading here:
Google's Head of Quantum Computing Hardware Resigns - WIRED

Muquans and Pasqal partner to advance quantum computing – Quantaneo, the Quantum Computing Source

This partnership is an opportunity to leverage unique industrial and technological expertise for the design, integration and validation of advanced quantum solutions, expertise that has been applied for more than a decade to quantum gravimeters and atomic clocks. It will speed up the development of Pasqal's processors and bring them to an unprecedented maturity level.

Muquans will supply several key technological building blocks and technical assistance to Pasqal, which will offer advanced computing and simulation capability on the path to quantum advantage for real-life applications.

"We have the strong belief that the neutral atoms technology developed by Pasqal has a unique potential, and this agreement is a wonderful opportunity for Muquans to participate in the great adventure of quantum computing. It will also help us find new opportunities for our technologies. We expect this activity to grow significantly in the coming years, and this partnership will allow us to become a key stakeholder in the supply chain of quantum computers," said Bruno Desruelle, CEO of Muquans.

"Muquans' laser solutions combine extreme performance, advanced functionalities and industrial reliability. When you develop the next generation of quantum computers, you need to rely on strong bases and build trust with your partners. Being able to embed this technology in our processors will be a key factor for our company to consolidate our competitive advantage and bring quantum processors to the market," said Georges-Olivier Reymond, CEO of Pasqal.

Here is the original post:
Muquans and Pasqal partner to advance quantum computing - Quantaneo, the Quantum Computing Source

Orquestra, an end-to-end, unified Quantum Operating Environment is now in early access – Neowin

Zapata, a firm whose primary focus is on quantum computing and software, launched early access to Orquestra today. Orquestra, dubbed a novel end-to-end, unified Quantum Operating Environment (QOE), is meant for designing, manipulating, optimizing, and running quantum circuits. These quantum circuits are then generalized to run across different quantum computers, simulators, and HPC resources.

Orquestra enables advanced technology, R&D and academic teams to accelerate quantum solutions for complex computational problems in optimization, machine learning and simulation across a variety of industries.

Some of the noteworthy features of Orquestra are as follows. First, it provides an extensive library supplying optimized open-source (VQE, QAOA) and proprietary (VQF) algorithms. The environment allows users to combine modules written in different libraries, including Cirq, Qiskit, PennyLane and PyQuil.
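
The announcement doesn't show Orquestra's own workflow format, but as a hint of what one of those library modules might contain, here is a minimal circuit written against Qiskit's public API, one of the libraries named above (the Orquestra-specific glue is omitted):

```python
from qiskit import QuantumCircuit

# A two-qubit Bell-state circuit -- the kind of self-contained module a
# workflow could combine with components written in Cirq or PyQuil.
qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # read both qubits into classical bits

print(qc.draw())
```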

In addition, it offers hardware-interoperable layering and, according to Zapata, is the only quantum platform that goes beyond hardware-agnostic capabilities. This allows users to compare various devices in the context of particular computational problems and benchmark how workflows perform across them.

Users can also submit these workflows to the Orquestra Quantum Engine (OQE) servers with command-line tools and orchestrate workflow tasks across a variety of backends that include gate-model devices, quantum annealers, quantum simulators, and HPC resources. Automated parallelization through container orchestration and management of complex records is offered as well.

Orquestra is currently in early access and is aimed at users with backgrounds in software engineering, machine learning, physics, computational chemistry or quantum information theory. To be a part of the program or request further information, you can send an e-mail to Zapata.

Visit link:
Orquestra, an end-to-end, unified Quantum Operating Environment is now in early access - Neowin

The Force is With Physicist Andy Howell as He Discusses Star Trek Science With Cast and Crew – Noozhawk

In the most recent episode of his YouTube series Science vs. Cinema, UC Santa Barbara physicist Andy Howell takes on Star Trek: Picard, exploring how the CBS offering's presentation of supernovae and quantum computing stacks up against real-world science.

For Howell, the series, which reviews the scientific accuracy and portrayal of scientists in Hollywood's top sci-fi films, is as much an excuse to dive into exciting scientific concepts and cutting-edge research as it is a review.

"Science fiction writers are fond of grappling with deep philosophical questions," he said. "I was really excited to see that UCSB researchers were thinking about some of the same things in a more grounded way."

For the Star Trek episode, Howell spoke with series creators Alex Kurtzman and Michael Chabon, as well as a number of cast members, including Patrick Stewart. Joining him to discuss quantum science and consciousness were John Martinis, a quantum expert at UCSB and chief scientist of the Google quantum computing hardware group, and fellow UCSB physics professor Matthew Fisher. Fisher's group is studying whether quantum mechanics plays a role in the brain, a topic taken up in the new Star Trek series.

Howell also talked supernovae and viticulture with friend and colleague Brian Schmidt, vice chancellor of the Australian National University. Schmidt won the 2011 Nobel Prize in Physics for helping to discover that the expansion of the universe is accelerating.

"We started Science vs. Cinema to use movies as a jumping-off point to talk science," Howell said. "Star Trek: Picard seemed like the perfect fit. Star Trek has a huge cultural impact and was even one of the things that made me want to study astronomy."

Previous episodes of Science vs. Cinema have separated fact from fiction in films such as Star Wars, The Current War, Ad Astra, Arrival and The Martian. The success of prior episodes enabled Howell to get early access to the show and interview the cast and crew.

"What most people think about scientific subjects probably isn't what they learned in a university class, but what they saw in a movie," Howell remarked. "That makes movies an ideal springboard for introducing scientific concepts. And while I can only reach dozens of students at a time in a classroom, I can reach millions on TV or the internet."

Continue reading here:
The Force is With Physicist Andy Howell as He Discusses Star Trek Science With Cast and Crew - Noozhawk

Quantum Computing Market to Witness Robust Expansion by 2024: Intel Corporation, Google Inc., Evolutionq Inc Cole Reports – Cole of Duty

Quantum Computing Market Competitive Insights 2020 is a professional and in-depth study of the Quantum Computing industry, with a focus on profit margin analysis, market value chain analysis, market entry strategies, recent developments and their impact on the market, the roadmap of the Quantum Computing market, opportunities, challenges, SWOT analysis, PESTEL analysis, and market estimates, size, and forecasts for product segments from 2020 to 2024. An in-depth analysis of newer growth tactics influenced by the market-leading companies shows the global competitive scale of this market sector. The industry growth outlook is captured by tracking players' ongoing process improvements and optimal investment strategies.

Get Sample Copy of Quantum Computing Report 2020: http://www.researchreportsinc.com/report-sample/593583

The research report studies the market's mergers and acquisitions, geographic scope, company profiles, contracts, new product launches, competitive situation and trends, key players' market shares (2020), and growth prospects during the forecast period. The Quantum Computing market report provides detailed data to guide key market players in forming important business decisions. The report focuses on the key aspects of the market to ensure maximum benefit and growth potential for readers, and its extensive analysis will help them achieve this much more efficiently.

The Major Companies Covered In This Report:

Intel Corporation, Google Inc., Evolutionq Inc, Magiq Technologies Inc., Nippon Telegraph And Telephone Corporation (NTT), QC Ware Corp., Accenture, Hitachi Ltd, QxBranch, LLC, Rigetti Computing, International Business Machines Corporation (IBM), 1QB Information Technologies Inc., Hewlett Packard Enterprise (HP), D-Wave Systems Inc., Northrop Grumman Corporation, Station Q Microsoft Corporation, Cambridge Quantum Computing Ltd, Quantum Circuits, Inc, Fujitsu, University Landscape, Toshiba Corporation

Grab Your Report at an Impressive Discount @ http://www.researchreportsinc.com/check-discount/593583

Why Should You Buy This Report?

The report offers effective guidelines and recommendations for vendors to secure a position of strength in the Quantum Computing industry. Newly arrived players can greatly improve their growth potential, and current market leaders can extend their dominance, using the report's analysis. The report covers the key geographies and market landscapes alongside product value, revenue, volume, production, supply, demand, market growth rate, and trends. It also provides Porter's Five Forces analysis, investment feasibility analysis, and investment return analysis.

Read more here:
Quantum Computing Market to Witness Robust Expansion by 2024: Intel Corporation, Google Inc., Evolutionq Inc Cole Reports - Cole of Duty

The new decade and the rise of AutoML – ITProPortal

In 2019, the World Economic Forum forecast that data analysts would be in high demand by 2020, and so far this year we're seeing the prediction become a reality. The fact is, as much as companies would love to hire dozens or even hundreds of highly trained data scientists - even in today's challenging economic climate - the skill set is so highly sought after that it can be both difficult and costly to find and integrate the right people.

This is where the role of the data analyst comes in. Many companies have invested in automated machine learning (AutoML), which has enabled them to automate the process of applying machine learning to solve business challenges. What this means is that a wider variety of data analysts, who are not necessarily highly trained data scientists and who may have broader business skill sets, can access and use data more freely.

The move to AutoML is also being driven by growing recognition that organisations using AI cannot improve the business-led insight generated from that AI without improving access to it. More people need access to data sources, to the models being fed by data, and to data-driven analytics.

Data needs to be democratised. We're past the point where it's acceptable for data access to be restricted to highly trained data scientists well-versed in manipulating it. If we want to see the mass business benefits of data-driven analytics, data in all its various guises needs to make it outside the confines of the data science lab and into the hands of a new generation of data analysts and business users.

In this article, we discuss how AutoML and new business operating models are influencing and accelerating the rise of the data analyst in this new decade.

The shift has meant that AutoML now has a broader scope to help democratise data science in general, meaning that it's becoming easier for data analysts to get involved in the data-to-insights pipeline. While AutoML is not going to replace data scientists, it does mean that data analysts can be self-guided through feature creation, feature selection, model creation and comparison, and even operationalisation. What this means is that AutoML drives self-serve, augmented analytics, which can add efficiency to large swaths of the data pipeline.

At a very high level, AutoML is about automating the process of applying machine learning. Early on, AutoML was almost exclusively used for the automatic selection of the best-performing algorithms for a given task and for tuning the hyperparameters of said algorithms.
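
As a rough illustration of that early form of AutoML, the sketch below uses scikit-learn's GridSearchCV as a stand-in (it is not any particular AutoML product): the search tries hyperparameter combinations automatically and keeps whichever cross-validates best.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# AutoML in miniature: exhaustively try hyperparameter combinations and
# keep the configuration with the best cross-validated score.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Dedicated AutoML tools go further, automating feature engineering and comparing entire model families, but the principle is the same search loop.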

While this has been very helpful for data scientists, until recently it hadn't improved data access or insights for data analysts or business users, who may still be reliant on data scientists to build machine learning-based models in code. However, the emphasis of AutoML has shifted to making machine learning more accessible by automatically building models without the help of data scientists.

In the last two years of the previous decade, one of the biggest operational shifts in technology-driven businesses was the continued convergence of data science and business intelligence. It was certainly a far cry from more traditional operational models, where organisations employed separate teams for standard business intelligence (dashboards, reports, data visualisation, SQL) and data science (statistical models, R/Python).

The reasoning is logical: in bringing data science and business intelligence practices together, companies effectively form real-time, centralised access to what may previously have been disparate sources of data. This growing convergence and closer collaboration between data science and analytics teams has empowered more people to become data analysts, often referred to as citizen data scientists.

But don't let the term fool you: citizen data scientists come in many forms, and their data analysis skills are empowering business insight in very important ways. Their roles can include the Data Translator, who bridges the technical expertise of data engineers and data scientists with the operational expertise of marketing, supply chain, manufacturing, risk, and other industry domains.

We are also seeing Data Explorers, who focus on identifying and connecting to new data sources, merging and preparing data, and building production-ready data pipelines. Data Modellers are responsible for building predictive models and generating either a product or a service from those models, and then implementing them.

Regardless of the nature of these new roles, there is a common theme: unlike the data scientists of the previous decade, these analysts don't need to master all the intricacies of advanced machine learning and feature engineering. What they bring to the table is an intimate knowledge of the problems at hand and the business questions that need to be answered.

Heads of business units have traditionally had a more difficult time accessing data analytics, having to specifically request reports and analysis from data scientists on a case-by-case basis. The next evolution will be for machine learning itself to become more self-serviced. Deployment and maintenance of models will become increasingly easy and automated, as will many analytic tasks.

By integrating self-service machine learning into their core business strategies, innovative companies are enabling data analysts to use real-time data at scale to make better and faster decisions throughout their organisations.

It's clear that AI maturity and the resulting data-driven insight cannot improve without expanding the breadth of people who have access to and work with data on a day-to-day basis. It's exciting to see companies prioritise the shift toward a data-driven culture and the economic imperative of data insights. As the new decade progresses, we're set to see this continue as one of the more powerful analytics trends already transforming business in 2020.

Alexis Fournier, Director of AI Strategy, Dataiku

Follow this link:
The new decade and the rise of AutoML - ITProPortal

Tesla releases impressive videos of cars avoiding running over pedestrians – Electrek

Tesla has released a few impressive videos of its Autopilot-powered emergency braking feature helping to avoid running over inattentive pedestrians.

What might be even more impressive is that the automaker says that it sees those events happen every day.

There's a lot of talk about Tesla Autopilot, but one of the least reported aspects of Tesla's semi-autonomous driver-assist system is that it powers a series of safety features that Tesla includes for free in all cars.

One of those features is Automatic Emergency Braking.

We saw the Autopilot-powered safety feature stop for pedestrians in impressive tests by Euro NCAP last year, but now we can see it perform in real-world scenarios, avoiding potentially very dangerous situations.

Tesla has now released some examples of its system braking just in time to save pedestrians.

The new videos were released by Andrej Karpathy, Tesla's head of AI and computer vision, in a new presentation at the Scaled Machine Learning Conference.

The conference was held at the end of February, but a video of the presentation was only just released (the clip starts when he shows the videos):

In the three video examples, you can see pedestrians emerging from the sides, out of the field of view, and Tesla's vehicles braking just in time.

Tesla is able to capture and save those videos, thanks to its integrated TeslaCam dashcam feature.

Karpathy says:

"This car might not even have been on Autopilot, but we continuously monitor the environment around us. We saw that there was a person in front and we slammed on the brake."

The engineer added that Tesla is seeing a lot of those events being prevented by its system:

"We see a lot of these, tens to hundreds of these per day, where we are actually avoiding a collision, and not all of them are true positives, but a good fraction of them are."

In the rest of the presentation, Karpathy explains how Tesla is applying machine learning to its system in order to improve it enough to lead to a fully self-driving system.

I think it's important to bring attention to these examples, considering how much attention an accident on Autopilot gathers from the media.

Let's see how many of them run with this story.

But I get it. People love crashes a lot more than a near-miss.

On another note, I really like how Karpathy communicates Tesla's self-driving effort. His presentations are always super clear and informative, even for people who are not very experienced in machine learning.

The rest is here:
Tesla releases impressive videos of cars avoiding running over pedestrians - Electrek

How to Use Hemp Oil for Maximum Benefits?

Hemp oil, much like other alternative medicines, is known for its therapeutic effects on the human body. You may not find much formal research about it, but you can find several anecdotes that help to validate this stance.

Many people often confuse hemp oil with marijuana because both plants belong to the same Cannabis sativa family, but the major difference between the two is the amount of THC (tetrahydrocannabinol) in them. Marijuana has a very high amount of THC, which makes it a 'drug for recreational purposes', while hemp oil contains only 0.3% THC. However, both can be used to treat psychological disorders; for example, medical marijuana is widely used for anxiety treatment and has been found effective, just as hemp oil has.

Why hemp oil is beneficial

You might be wondering why hemp oil is beneficial for the human body, or why it is found to be effective. The reason is that our body has an Endocannabinoid System (ECS), a series of receptors found throughout the body. These receptors are highly responsive to cannabinoid intake such as hemp oil, which helps make it effective for the human body. Hemp oil is found to be effective for:

  • Boosting up Metabolism
  • Digestive System
  • Immune System
  • Regulating Sleep Cycle
  • Mood and Anxiety
  • Neuro Protection
  • Skin Disorders

There are many ways in which hemp oil can be used effectively. Here is how to use hemp oil so that its benefits can be maximized:

Applying Hemp Oil Topically:

Hemp oil is widely known for its pain-relieving qualities, particularly for arthritis patients. Moreover, it is found to be effective for controlling acne. Applying the hemp oil directly to the affected area is highly beneficial. For example, if you have pain in your joints, apply hemp oil to that area; it will work like a charm and your pain will fizzle away.

Oral Consumption:

Hemp oil has different oral applications based upon the purpose of its usage or the choice of the consumer. For example, it can be used as a tincture, a liquid taken sublingually (under the tongue) using a dropper to measure out the dosage. Certain flavors, such as coconut oil or grapeseed oil, can be added to the hemp oil to make its taste more palatable.

Secondly, the pure extract of hemp oil can be taken as well. Pure extracts are much thicker and unflavored and can be taken using an oral syringe. Thirdly, hemp oil can be taken in the form of edibles such as chews, chocolates or lozenges. People won't even know that they are taking a supplement, and they can pick whatever suits them best, just as they would pick a favorite candy out of so many.

Capsules are one of the easiest ways to take hemp oil orally. They can be easily swallowed and won't seem like an extra burden, any more than the other supplements we take regularly. Capsules are made with the pure extract of hemp oil, but you must make sure to buy them from an authorized dealer to be assured of the authenticity of the product.

Oral consumption of hemp oil is highly recommended to improve your digestive system and boost immunity. The most effective method of hemp oil usage differs with its purpose.

What is the preferred dosage of hemp oil consumption?

You must not try hemp oil on your own if you want to use it for purposes other than skin disorders. You must consult your doctor about it, because he or she can guide you based on your medical history and the medicines that you might already be taking.

Hemp oil is found to have more benefits than side effects, which makes it worth trying. Do give it a try, as per your doctor's guidance!

Quantum computing heats up down under as researchers reckon they know how to cut costs and improve stability – The Register

Boffins claim to have found path to 'real-world applications' by running hot

Dr Henry Yang and Professor Andrew Dzurak: hot qubits are a game-changer for quantum computing development. Pic credit: Paul Henderson-Kelly

Scientists in Australia are claiming to have made a breakthrough in the field of quantum computing which could ease the technology's progress to affordability and mass production.

A paper by researchers led by Professor Andrew Dzurak at the University of New South Wales in Sydney, published in Nature today, says they have demonstrated quantum computing at temperatures 15 times warmer than previously thought possible.

Temperature is important to quantum computing because quantum bits (qubits), the quantum equivalent of the classical bits running the computer displaying this story, can exist in superconducting circuits or form within semiconductors only at very low temperatures.

Most quantum computers being developed by the likes of IBM and Google form qubits at temperatures within 0.1 degrees of absolute zero, which is -273.15C (-459.67F). These solid-state platforms require cooling to extremely low temperatures because vibrations generated by heat disrupt the qubits, which can impede performance. Getting this cold requires expensive dilution refrigerators.

Artistic representation of quantum entanglement. Pic credit: Luca Petit for QuTech

But Dzurak's team has shown that it can maintain stable "hot qubits" at temperatures up to 15 times higher than existing technologies: a sweltering 1.5 Kelvin (-271.65C). It might not seem like much, but it could make a big difference when it comes to scaling quantum computers and getting them one step closer to practical applications.

"For most solid-state qubit technologies (for example, those using superconducting circuits or semiconductor spins), scaling poses a considerable challenge because every additional qubit increases the heat generated, whereas the cooling power of dilution refrigerators is severely limited at their operating temperature. As temperatures rise above 1 Kelvin, the cost drops substantially and the efficiency improves. In addition, using silicon-based platforms is attractive, as this can assist integration into classical systems that use existing silicon-based hardware," the paper says.

Keeping temperature at around 1.5 Kelvin can be achieved using a few thousand dollars' worth of refrigeration, rather than the millions of dollars needed to cool chips to 0.1 Kelvin, Dzurak said.

"Our new results open a path from experimental devices to affordable quantum computers for real-world business and government applications," he added.

The researchers used "isotopically enriched silicon", but the proof of concept published today promises cheaper and more robust quantum computing that can be built on hardware from conventional silicon chip foundries, they said.

Nature also published an independent study by Dr Menno Veldhorst and colleagues at Delft University of Technology in the Netherlands, which details a quantum circuit that operates at 1.1 Kelvin, confirming the breakthrough.

If made more practical and cheaper, quantum computers could represent a leap forward in information science. Whereas a bit in classical computing represents either a one or a zero, qubits superimpose one and zero, representing both states at the same time. This creates an exponential improvement in capacity, such that eight qubits can theoretically represent two to the power of eight (256) states at once, where eight bits hold just one. For example, Google and NASA have demonstrated that a quantum computer with 1,097 qubits outperformed existing supercomputers by more than 3,600 times and personal computers by 100 million times.

While the experimental nature and cost of quantum computing means it is unlikely to make it into any business setup soon, anything to make the approach more practical could make a big difference to scientific computational challenges such as protein folding. The problem of how to predict the structure of a protein from its amino acid sequence is important for understanding how proteins function in a wide range of biological processes and could potentially help design better medicines.

Original post:
Quantum computing heats up down under as researchers reckon they know how to cut costs and improve stability - The Register

The future of quantum computing in the cloud – TechTarget

AWS, Microsoft and other IaaS providers have jumped on the quantum computing bandwagon as they try to get ahead of the curve on this emerging technology.

Developers use quantum computing to encode problems as qubits, which compute multiple combinations of variables at once rather than exploring each possibility discretely. In theory, this could allow researchers to quickly solve problems involving different combinations of variables, such as breaking encryption keys, testing the properties of different chemical compounds or simulating different business models. Researchers have begun to demonstrate real-world examples of how these early quantum computers could be put to use.
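
A toy numpy sketch (illustrative only, not any provider's API) shows where that parallelism comes from: putting each of n qubits into superposition yields a state with equal amplitude on all 2^n combinations of the variables at once.

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start the register in the all-zeros state

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)

# Apply a Hadamard to every qubit, lifting each single-qubit gate to the
# full register with Kronecker products.
for target in range(n):
    op = np.eye(1, dtype=complex)
    for k in range(n):
        op = np.kron(op, H if k == target else I)
    state = op @ state

print(np.abs(state) ** 2)  # 0.125 on each of the 8 bit-strings: every
                           # combination of three binary variables at once
```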

However, this technology is still being developed, and experts caution that it could take more than a decade for quantum computing to deliver practical value. In the meantime, there are a few cloud services, such as Amazon Braket and Microsoft Quantum, that aim to get developers up to speed on writing quantum applications.

Quantum computing in the cloud has the potential to disrupt industries in a similar way as other emerging technologies, such as AI and machine learning. But quantum computing is still being established in university classrooms and career paths, said Bob Sutor, vice president of IBM Quantum Ecosystem Development. Similarly, major cloud providers are focusing primarily on education at this early stage.

"The cloud services today are aimed at preparing the industry for the soon-to-arrive day when quantum computers will begin being useful," said Itamar Sivan, co-founder and CEO of Quantum Machines, an orchestration platform for quantum computing.

There's still much to iron out regarding quantum computing and the cloud, but the two technologies appear to be a logical fit, for now.

Cloud-based quantum computing is more difficult to pull off than AI, so the ramp up will be slower and the learning curve steeper, said Martin Reynolds, distinguished vice president of research at Gartner. For starters, quantum computers require highly specialized room conditions that are dramatically different from how cloud providers build and operate their existing data centers.

Reynolds believes practical quantum computers are at least a decade away. The biggest drawback lies in aligning the quantum state of qubits in the computer with a given problem, especially since quantum computers still haven't been proven to solve problems better than traditional computers.

Coders also must learn new math and logic skills to utilize quantum computing. This makes it hard for them since they can't apply traditional digital programming techniques. IT teams need to develop specialized skills to understand how to apply quantum computing in the cloud so they can fine tune the algorithms, as well as the hardware, to make this technology work.

Current limitations aside, the cloud is an ideal way to consume quantum computing, because quantum computing has low I/O but deep computation, Reynolds said. Because cloud vendors have the technological resources and a large pool of users, they will inevitably be some of the first quantum-as-a-service providers and will look for ways to provide the best software development and deployment stacks.

Quantum computing could even supplement the general compute and AI services cloud providers currently offer, said Tony Uttley, president of Honeywell Quantum Solutions. In that scenario, quantum hardware would integrate with classical computing cloud resources in a co-processing environment.

The cloud plays two key roles in quantum computing today, according to Hyoun Park, CEO and principal analyst at Amalgam Insights. The first is to provide an application development and test environment for developers to simulate the use of quantum computers through standard computing resources.

The second is to offer access to the few quantum computers that are currently available, in the way mainframe leasing was common a generation ago. This improves the financial viability of quantum computing, since multiple users can increase machine utilization.

It takes significant computing power to simulate quantum algorithm behavior from a development and testing perspective. For the most part, cloud vendors want to provide an environment to develop quantum algorithms before loading these quantum applications onto dedicated hardware from other providers, which can be quite expensive.

However, classical simulations of quantum algorithms that use large numbers of qubits are not practical. "The issue is that the size of the classical computer needed will grow exponentially with the number of qubits in the machine," said Doug Finke, publisher of the Quantum Computing Report. So a classical simulation of a 50-qubit quantum computer would require a classical computer with roughly 1 petabyte of memory, a requirement that doubles with every additional qubit.
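
The scaling behind Finke's estimate is easy to reproduce. In the back-of-the-envelope sketch below, the 16 bytes per amplitude is an assumption (a leaner encoding gives a smaller total, closer to the cited 1 petabyte), but the doubling with every added qubit holds regardless:

```python
# Memory needed to hold a full n-qubit state vector on a classical machine.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    # 2**n complex amplitudes; 16 bytes assumes double-precision complex.
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 40, 50, 51):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**50:.3g} PiB")
# 50 qubits -> 16 PiB at this precision; 51 qubits needs exactly twice that.
```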

But classical simulations for problems using a smaller number of qubits are useful, both as a tool to teach quantum algorithms to students and as a way for quantum software engineers to test and debug algorithms with "toy models" of their problem, Finke said. Once they debug their software, they should be able to scale it up to solve larger problems on a real quantum computer.

In terms of putting quantum computing to use, organizations can currently use it to support last-mile optimization, encryption and other computationally challenging issues, Park said. This technology could also aid teams across logistics, cybersecurity, predictive equipment maintenance, weather predictions and more. Researchers can explore multiple combinations of variables in these kinds of problems simultaneously, whereas a traditional computer needs to compute each combination separately.

However, there are some drawbacks to quantum computing in the cloud. Developers should proceed cautiously when experimenting with applications that involve sensitive data, Finke said. To address this, many organizations prefer to install quantum hardware in their own facilities despite the operational hassles.

Also, a machine may not be immediately available when a quantum developer wants to submit a job through quantum services on the public cloud. "The machines will have job queues and sometimes there may be several jobs ahead of you when you want to run your own job," Finke said. Some of the vendors have implemented a reservation capability so a user can book a quantum computer for a set time period to eliminate this problem.

IBM was first to market with its Quantum Experience offering, which launched in 2016 and now has over 15 quantum computers connected to the cloud. Over 210,000 registered users have executed more than 70 billion circuits through the IBM Cloud and published over 200 papers based on the system, according to IBM.

IBM also started the Qiskit open source quantum software development platform and has been building an open community around it. According to GitHub statistics, it is currently the leading quantum development environment.

In late 2019, AWS and Microsoft introduced quantum cloud services offered through partners.

Microsoft Quantum provides a quantum algorithm development environment, and from there users can transfer quantum algorithms to Honeywell, IonQ or Quantum Circuits Inc. hardware. Microsoft's Q# scripting offers a familiar Visual Studio experience for quantum problems, said Michael Morris, CEO of Topcoder, an on-demand digital talent platform.

Currently, this transfer involves the cloud providers installing a high-speed communication link from their data center to the quantum computer facilities, Finke said. This approach has many advantages from a logistics standpoint, because it makes things like maintenance, spare parts, calibration and physical infrastructure a lot easier.

Amazon Braket similarly provides a quantum development environment and, when generally available, will provide time-based pricing to access D-Wave, IonQ and Rigetti hardware. Amazon says it will add more hardware partners as well. Braket offers a variety of different hardware architecture options through a common high-level programming interface, so users can test out the machines from the various partners and determine which one would work best with their application, Finke said.

Google has done considerable core research on quantum computing in the cloud and is expected to launch a cloud computing service later this year. Google has been more focused on developing its in-house quantum computing capabilities and hardware than on providing access to these tools to its cloud users, Park said. In the meantime, developers can test out quantum algorithms locally using Cirq, Google's open source framework for writing quantum programs in Python.
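
Because Cirq is open source, developers don't have to wait for a Google cloud service to experiment; here is a minimal example using Cirq's standard API and its local simulator:

```python
import cirq

# Entangle two qubits and sample the measurement outcomes locally.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0),                     # superposition on the first qubit
    cirq.CNOT(q0, q1),              # entangle it with the second
    cirq.measure(q0, q1, key="m"),  # read out both qubits
)

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))  # roughly 50/50 between 0 (00) and 3 (11)
```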

In addition to the larger offerings from the major cloud providers, there are several alternative approaches to implementing quantum computers that are being provided through the cloud.

D-Wave is the furthest along, with a quantum annealer well-suited to many optimization problems. Other alternatives include QuTech, which is working on a cloud offering of its small quantum machine built on its spin qubit technology, and Xanadu, which is developing a quantum machine based on photonic technology.

Researchers are pursuing a variety of approaches to quantum computing -- using electrons, ions or photons -- and it's not yet clear which approaches will pan out for practical applications first.

"Nobody knows which approach is best, or which materials are best. We're at the Edison light bulb filament stage, where Edison reportedly tested thousands of ways to make a carbon filament until he got to one that lasted 1,500 hours," Reynolds said. In the meantime, recent cloud offerings promise to enable developers to start experimenting with these different approaches to get a taste of what's to come.

Continue reading here:
The future of quantum computing in the cloud - TechTarget

Quantum Technologies 2020: Impact on COVID-19, Ecosystem & Supply Chain Analysis, Industry Best Practices, Technology Roadmap and Growth…

DUBLIN, April 20, 2020 /PRNewswire/ -- The "Emerging Opportunities of Quantum Technologies in Electronics Industry" report has been added to ResearchAndMarkets.com's offering.

Quantum technology, which enables the manipulation of atoms and sub-atomic particles, will allow for a new class of ultra-sensitive devices with key potential to profoundly impact and disrupt significant applications in areas such as defense, aerospace, industrial, commercial, infrastructure, transportation and logistics markets.

The ability to control and predict the behavior of atoms and ions opens key opportunities: exquisitely sensitive sensors for applications such as ultra-precise navigation, improved location of buried objects, enhanced geophysical or resource exploration, and ultra-precise measurement of time; computers able to solve very complex problems much faster than classical computers; considerably more secure and rapid data communications; and imaging in previously impossible conditions with greatly enhanced resolution.

Quantum technology is also driving advancements in more compact lasers; microfabricated atom/ion traps and diffraction gratings for trapping and cooling atoms; single-photon detectors for applications such as enhanced imaging and quantum cryptography; and microfabricated vapor cells containing atomic vapors or optically cooled atoms.

Key Topics Covered

1. Executive Summary
1.1 Scope of Research
1.2 Research Methodology
1.3 Research Methodology Explained
1.4 Key Findings - Quantum Electronics Finds Applications in Submarines and Satellites
1.5 Key Findings - Quantum Magnetometers Generate Interest in Navigation

2. Quantum Electronics Technology Landscape - Status Review
2.1 Quantum Electronics will Disrupt Industrial, Defense, Security, and Healthcare Markets
2.2 Applications of Different Types of Quantum Electronics
2.3 Factors Driving the Adoption of Quantum Electronics
2.4 Miniaturization is a Major Challenge for Adoption of Quantum Electronics

3. Quantum Inertial Sensors
3.1 Quantum Gyroscopes and Accelerometers Provide Enhanced Sensitivity
3.2 Quantum Inertial Sensors Have Opportunities to Disrupt Conventional Navigation Systems and MEMS Sensors
3.3 Application Impact of Quantum Inertial Sensors
3.4 Recent Developments with Stakeholders - Quantum Inertial Sensors
3.5 Quantum Inertial Sensors are Gaining Investments

4. Quantum Gravity Sensors
4.1 Quantum Gravity Sensors - Overview
4.2 Gravity Sensing: An Earlier Opportunity for Quantum Accelerometers
4.3 Application Landscape of Quantum Gravity Sensors
4.4 Gap Analysis: Quantum Gravity Sensors Opportunities and Challenges
4.5 Recent Developments with Stakeholders - Quantum Gravity Sensors

5. Quantum Magnetometers
5.1 Quantum Magnetometers - Overview
5.2 Application Diversity of Quantum Magnetometers
5.3 Quantum Magnetometers find Applications in Precision Location Detection
5.4 Opportunities Driving Adoption of Quantum Magnetometers
5.5 Factors Hindering Adoption of Quantum Magnetometers
5.6 Stakeholder Developments - Quantum Magnetometers

6. Quantum Clocks
6.1 Quantum Clocks Enable Precision Timing
6.2 Opportunities of Quantum Clocks
6.3 Challenges Hindering Adoption of Quantum Atomic Clocks
6.4 Applications for Quantum Atomic Clocks
6.5 Stakeholder Developments - Quantum Magnetometers
6.6 Stakeholders are Collaborating with Universities for Quantum Developments

7. Quantum Computing
7.1 Quantum Computers have Unprecedented Computational Power
7.2 Opportunities of Quantum Computing
7.3 Factors Hindering Adoption of Quantum Computing
7.4 Applications of Quantum Computing Across Different Industries
7.5 Stakeholder Developments and Recent Research in Quantum Computing
7.6 Kagome Metal finds Applications in Quantum Computers
7.7 Nitrogen-Vacancy Diamonds have the Potential to Retain Quantum Information

8. Quantum Communications
8.1 Quantum Repeaters and Quantum Key Distribution play Key Roles in Enabling Quantum Communication
8.2 Opportunities Driving Quantum Communications
8.3 Factors Hindering Adoption of Quantum Communications
8.4 Stakeholder Developments - Quantum Computing
8.5 Recent Research in Quantum Computing Enables Development of Quantum Random Number Generator

9. Impact of Quantum Technologies on COVID-19
9.1 Opportunities to Combat Coronavirus (COVID-19)
9.2 Use of Supercomputers to Study COVID-19 Impact Creates Potential Applications of Quantum Computing

10. Quantum Electronics Ecosystem and Supply Chain Analysis
10.1 Quantum Technology Ecosystem Components
10.2 Key Types of Participants in the Quantum Supply Chain
10.3 Other Participants in the Quantum Supply Chain

11. Industry Best Practices - Assessment of Partnerships/Alliances and Recent Developments
11.1 Advancements in Quantum Entanglement Pave the Way for Quantum Internet
11.2 Recent Partnerships Drive Developments in Quantum Computing

12. Technology Roadmap & Growth Opportunities
12.1 Quantum Electronics Roadmap
12.2 Strategic Investments Drive Adoption of Quantum Technologies

13. Industry Contacts
13.1 Key Industry Contacts

Originally posted here:
Quantum Technologies 2020: Impact on COVID-19, Ecosystem & Supply Chain Analysis, Industry Best Practices, Technology Roadmap and Growth...

Alison Pill on the Devs Finale and What's Next for Katie – Variety

The wild finale of Alex Garland's sci-fi series Devs managed to do the impossible and reveal the full capabilities of the mysterious supercomputer. But like all good Garland creations, the answers only lead to more questions from the audience. Looking for clarity, we turned to the woman who runs the Devs operation, Katie, played by Alison Pill, for a conversation about what's next for its resolute right hand.

A longtime fan of Garland's work, Pill had previously read his novels The Beach and The Tesseract. In fact, the actress auditioned for a part in his 2018 film Annihilation, but it would seem that her path was destined to lead her to Devs.

"Carmen Cuba texted my husband and said, 'Alex Garland is interested in your wife for this movie, can he email her?'" Pill explained to Variety over the phone. After sending over all eight scripts of the mini-series, Pill was hooked. The next step? Become an expert on quantum mechanics.

I read A Briefer History of Time, which is excellent. I guess I can say it now: we were really trying to keep the multiverse out of the conversation for the first little bit, but David Wallace, who's an incredible philosopher of science, wrote a great book about the many worlds theory. It's a tome.

Then I read a David Foster Wallace book on infinity, but A Briefer History of Time was the most helpful in terms of the history of the science, the development of the ideas. In every single one of these books I could sort of grasp [the concepts] in the beginning, and toward the end, as the concepts become more and more abstract, the tethering to the physical world is just gone. [Laughs] It's like, "What are you talking about? That's crazy! You can't have particles in two places at once, doing two things at once at the same time. My brain won't accept that!" But then that's what physicists have been struggling with for a long time.

Alex was definitely helpful. Sonoya [Mizuno] was too, because she had been on board a little bit before me, and she had done a lot of reading as well. Her recommendations were really helpful. The David Wallace books I just read on my own. His lectures are great; I think he's a fantastic speaker. Also in terms of character studies it was really interesting, too. What does somebody who thinks about this type of thing all the time do with their body? Interesting little character things. They're pretty eccentric folks, considering that they think about the nature of reality and the unnatural nature of quantum physics. They're odd ducks.

The first decision I made was physical. I didn't want her to move very much, because I'm thinking about the self-consciousness that must be involved in every move you make having already been decided. If that's your worldview, you can't just unconsciously take your hand out of your pocket anymore, because you know that was what the universe was always going to have you do. That was my way in, just trying to keep her as still as possible.

It's also a really interesting show of power. As a woman, I often find myself smiling and talking a lot to try and make other people comfortable, to make myself comfortable. I will just fill in those gaps and be a little too out there sometimes.

I imagined being somebody so completely sure of their place in the world, their own power, their brains, and just not giving a f***. And that was really thrilling to play. She's not going to smile politely. She's not going to do the polite thing that we expect of women. Making her not unemotional, but in control of her emotions, was really important to me. She's not heartless at all, the furthest thing from it. I think she's one of the most generous people I've ever played. She's not obsequious, she's not polite in the way that people expect, but she is emotional. And I think those complications are really interesting.

Alison Pill as Katie, Cailee Spaeny as Lyndon on the edge of the Crystal Springs Dam.

Her belief system is that there's not a world in which Lyndon doesn't stand on the other side of the dam. So it's not that she wants him to, it's not that she wishes that it happens; it's just the way it is. And that's why, in the finale, the idea that free will could exist destroys her whole moral outlook. Once that becomes true, if determinism is real but free will is possible if you know what is determined, then her whole moral view just crumbles.

Then I think she is at fault, and I think she realizes that. But in the moment, it's not something that she desires; it's something that she wishes wouldn't happen. I think she really likes Lyndon. But [she] also believes that there are truly a near infinite number of universes where Lyndon doesn't fall, truly, and comes back to work at Devs. It's a real moral quandary only if free will exists. And it also begs the question: could we do anything wrong in a truly deterministic universe? Could we be blamed for anything we did in a deterministic universe if there was never going to be another action taken?

Yeah. I don't think Katie believes she's morally culpable. I think she's really very sad about Lyndon. And it's Cailee [Spaeny] who was the best; it was so hard.

Oh, no, he's very much up for those discussions. Luckily he's also a big believer in rehearsal, so I could come in and sit down and be like, "Alex, physics of the observer, what are we going to do about it? You know, the statistics of near infinite variations, what do you do with statistics!?"

So yes, he would allow me some time to go over stuff. We would have classroom moments with different groupings of us [the lab people] and have those discussions. And then on set to a degree, because sometimes I would come in having just read another excerpt from the Wallace book, going, "What does it mean!" [Laughs]

He is one of the more generous listeners I've ever met. He doesn't play down anything, he doesn't make you feel dumb, and he'll take any question seriously and take time to consider it. It's a real gift. Yes, we had fascinating conversations about politics and gender and also quantum physics, just because we're all in that headspace of giving consideration. And I think that's what he expects of an audience.

Being around this group of people, to some degree, primed me for this. I think we're just curious people, and I think most humans are incredibly curious. I don't think we give ourselves enough benefit of the doubt in terms of our intelligence, in terms of our ability to really challenge our brains with new concepts. I think it's something that we just sort of give up on unnecessarily, because I think we're better at it than we might suppose.

Alison Pill's reading list for playing Katie in Hulu's Devs.

Hmmmm, many worlds is just, yeah, it doesn't make a difference? Which is to say, I'm in this one, I can only live one. Would it feel better if I knew that in some [world] I might make a different choice? To a degree, but also the maths involved are so extraordinary. When you think about branching, and the amount of branching that every person, every thing, every blade of grass, the wind blowing one way this time, it becomes meaningless to me.

Determinism, however, is a really interesting concept that I still do struggle with, and I don't think that human brains are particularly capable of dealing with it. We really are wired to believe that we have choice. So in both cases, while I consider determinism pretty likely, which is a crazy thing to think about, but you know what else is crazy? Space is a thing. You know what I mean? Space is a thing, the universe is expanding. Expanding into what? Space is already an existing thing. It's not nothing. It can't be nothing. Which is in itself a real quandary anyway.

The multiverse map, I just can't even. So each person, every instant of their life, branching however many times. Infinity is crazy; I can't make my brain understand that to such a degree. I find it interesting to read about, but it's not going to affect my everyday life, and neither is quantum physics, because we can't see it. We could have been pretty happy humans with just Newtonian physics. It wouldn't have been entirely true, but it's fine, I mean, it gets us by.

I was looking forward to it and had been thinking about it since I first got the scripts, because what a rarity; I was just so excited to have this scene between two women. First that she's like, "I want answers, let me talk to Forest." And I say, "No, if you want answers, you should talk to me." Just the power dynamics, it's such an interesting way to start a scene.

We shot it over two days. I love working with Sonoya. It's such an interesting thing, because that much dialogue doesn't happen that often on TV, but at the same time it's not theatrical dialogue. It's not as though someone said, "Oh, it should be a play!" Because so much of it is about the cinematography and the setting. It's something that you don't often get to see, which is just people thinking through their thoughts and taking their time and having these slower scenes. That whole episode is basically just a series of two-person scenes. It's just so ballsy to do that in the midst of this techno-thriller, plot-heavy, cool action thing, and you're just like, and then we sit.

I was so aware of the importance of the scene in terms of explaining things, but also wanting to make it real for Katie. There are emotional stakes for her too, because she does like Lily. She thinks she's really cool; she spent hours watching her. It's like meeting a celebrity, finally, in our kitchen. Alex and I talked: did she always set the pen out there, because it's there, it's on the table. [Was Katie] thinking about her the day before, going to bed thinking, "I have to put the pen there because Lily's going to be by soon"?

It's not the expectation. The expectation is partly that Forest is going to sit down with Lily.

Well, it's the only way he survives. She asks [Forest] whether he wants to deal with the fact that there will be a near infinite number of worlds where Amaya will have died, where all of this will still happen, but that there will be somewhere he does get to be with Amaya. That has been his ultimate goal. That has been their ultimate goal. And that's what I mean about Katie being generous. She literally needs this man and says, yeah, I'll bring back your daughter for you.

I think that's also the happiness of working on this project. It's not her life goal until it becomes it over time. I do think Forest makes the decision to be part of the sim, and therefore I think Katie doesn't want to let it go, because she wants to keep those two alive as long as she can.

It's so hard to know what the power will actually mean in the world. Even five years on, if the U.S. government knew everything, what would the world look like? Pretty f***ing scary! So I don't know if she would want to be part of the machine at that point. If Katie died of natural causes 40 years after the events that we see in episode 8, with the changes that would have occurred in quantum computing, would everybody have an everything machine? I don't know. I'm cleaning out my office and I'm just getting rid of a bunch of DVDs, and those are not that old. Thinking about 40 years from now, in terms of tech, especially in terms of quantum computing, I don't even know what it would mean. But if it was under government control, I'd say probably no; she doesn't really like authority.

This interview has been edited and condensed for clarity.

View original post here:
Alison Pill on the Devs Finale and Whats Next for Katie - Variety