This Twist on Schrödinger’s Cat Paradox Has Major Implications for Quantum Theory – Scientific American

What does it feel like to be both alive and dead?

That question irked and inspired Hungarian-American physicist Eugene Wigner in the 1960s. He was frustrated by the paradoxes arising from the vagaries of quantum mechanics, the theory governing the microscopic realm that suggests, among many other counterintuitive things, that until a quantum system is observed, it does not necessarily have definite properties. Take his fellow physicist Erwin Schrödinger's famous thought experiment, in which a cat is trapped in a box with poison that will be released if a radioactive atom decays. Radioactivity is a quantum process, so before the box is opened, the story goes, the atom has both decayed and not decayed, leaving the unfortunate cat in limbo, in a so-called superposition between life and death. But does the cat experience being in superposition?

Wigner sharpened the paradox by imagining a (human) friend of his shut in a lab, measuring a quantum system. He argued it was absurd to say his friend exists in a superposition of having seen and not seen a decay unless and until Wigner opens the lab door. The Wigner's friend thought experiment "shows that things can become very weird if the observer is also observed," says Nora Tischler, a quantum physicist at Griffith University in Brisbane, Australia.

Now Tischler and her colleagues have carried out a version of the Wigner's friend test. By combining the classic thought experiment with another quantum head-scratcher called entanglement, a phenomenon that links particles across vast distances, they have also derived a new theorem, which they claim puts the strongest constraints yet on the fundamental nature of reality. Their study, which appeared in Nature Physics on August 17, has implications for the role that consciousness might play in quantum physics, and even for whether quantum theory must be replaced.

The new work is "an important step forward in the field of experimental metaphysics," says quantum physicist Aephraim Steinberg of the University of Toronto, who was not involved in the study. "It's the beginning of what I expect will be a huge program of research."

Until quantum physics came along in the 1920s, physicists expected their theories to be deterministic, generating predictions for the outcome of experiments with certainty. But quantum theory appears to be inherently probabilistic. The textbook version, sometimes called the Copenhagen interpretation, says that until a system's properties are measured, they can encompass myriad values. This superposition only collapses into a single state when the system is observed, and physicists can never precisely predict what that state will be. Wigner held the then-popular view that consciousness somehow triggers a superposition to collapse. Thus, his hypothetical friend would discern a definite outcome when she or he made a measurement, and Wigner would never see her or him in superposition.

This view has since fallen out of favor. "People in the foundations of quantum mechanics rapidly dismiss Wigner's view as spooky and ill-defined because it makes observers special," says David Chalmers, a philosopher and cognitive scientist at New York University. Today most physicists concur that inanimate objects can knock quantum systems out of superposition through a process known as decoherence. Certainly, researchers attempting to manipulate complex quantum superpositions in the lab can find their hard work destroyed by speedy air particles colliding with their systems. So they carry out their tests at ultracold temperatures and try to isolate their apparatuses from vibrations.

Several competing quantum interpretations have sprung up over the decades that employ less mystical mechanisms, such as decoherence, to explain how superpositions break down without invoking consciousness. Other interpretations hold the even more radical position that there is no collapse at all. Each has its own weird and wonderful take on Wigner's test. The most exotic is the many-worlds view, which says that whenever you make a quantum measurement, reality fractures, creating parallel universes to accommodate every possible outcome. Thus, Wigner's friend would split into two copies, and "with good enough supertechnology," he could indeed measure that person to be in superposition from outside the lab, says quantum physicist and many-worlds fan Lev Vaidman of Tel Aviv University.

The alternative Bohmian theory (named for physicist David Bohm) says that at the fundamental level, quantum systems do have definite properties; we just do not know enough about those systems to precisely predict their behavior. In that case, the friend has a single experience, but Wigner may still measure that individual to be in a superposition because of his own ignorance. In contrast, a relative newcomer on the block called the QBism interpretation embraces the probabilistic element of quantum theory wholeheartedly. (QBism, pronounced "cubism," is actually short for quantum Bayesianism, a reference to 18th-century mathematician Thomas Bayes's work on probability.) QBists argue that a person can only use quantum mechanics to calculate how to calibrate his or her beliefs about what he or she will measure in an experiment. "Measurement outcomes must be regarded as personal to the agent who makes the measurement," says Ruediger Schack of Royal Holloway, University of London, who is one of QBism's founders. According to QBism's tenets, quantum theory cannot tell you anything about the underlying state of reality, nor can Wigner use it to speculate on his friend's experiences.

Another intriguing interpretation, called retrocausality, allows events in the future to influence the past. In a retrocausal account, "Wigner's friend absolutely does experience something," says Ken Wharton, a physicist at San Jose State University, who is an advocate for this time-twisting view. But that something the friend experiences at the point of measurement can depend upon Wigner's choice of how to observe that person later.

The trouble is that each interpretation is equally good, or bad, at predicting the outcome of quantum tests, so choosing between them comes down to taste. "No one knows what the solution is," Steinberg says. "We don't even know if the list of potential solutions we have is exhaustive."

Other models, called collapse theories, do make testable predictions. These models tack on a mechanism that forces a quantum system to collapse when it gets too big, explaining why cats, people and other macroscopic objects cannot be in superposition. Experiments are underway to hunt for signatures of such collapses, but as yet they have not found anything. Quantum physicists are also placing ever larger objects into superposition: last year a team in Vienna reported doing so with a 2,000-atom molecule. Most quantum interpretations say there is no reason why these efforts to supersize superpositions should not continue upward forever, presuming researchers can devise the right experiments in pristine lab conditions so that decoherence can be avoided. Collapse theories, however, posit that a limit will one day be reached, regardless of how carefully experiments are prepared. "If you try and manipulate a classical observer (a human, say) and treat it as a quantum system, it would immediately collapse," says Angelo Bassi, a quantum physicist and proponent of collapse theories at the University of Trieste in Italy.

Tischler and her colleagues believed that analyzing and performing a Wigner's friend experiment could shed light on the limits of quantum theory. They were inspired by a new wave of theoretical and experimental papers that have investigated the role of the observer in quantum theory by bringing entanglement into Wigner's classic setup. Say you take two particles of light, or photons, that are polarized so that they can vibrate horizontally or vertically. The photons can also be placed in a superposition of vibrating both horizontally and vertically at the same time, just as Schrödinger's paradoxical cat can be both alive and dead before it is observed.

Such pairs of photons can be prepared together, entangled, so that their polarizations are always found to be in the opposite direction when observed. That may not seem strange, unless you remember that these properties are not fixed until they are measured. Even if one photon is given to a physicist called Alice in Australia, while the other is transported to her colleague Bob in a lab in Vienna, entanglement ensures that as soon as Alice observes her photon and, for instance, finds its polarization to be horizontal, the polarization of Bob's photon instantly syncs to vibrating vertically. Because the two photons appear to communicate faster than the speed of light, something prohibited by his theories of relativity, this phenomenon deeply troubled Albert Einstein, who dubbed it "spooky action at a distance."

These concerns remained theoretical until the 1960s, when physicist John Bell devised a way to test if reality is truly spooky, or if there could be a more mundane explanation behind the correlations between entangled partners. Bell imagined a commonsense theory that was local, that is, one in which influences could not travel between particles instantly. It was also deterministic rather than inherently probabilistic, so experimental results could, in principle, be predicted with certainty, if only physicists understood more about the system's hidden properties. And it was realistic, which, to a quantum theorist, means that systems would have these definite properties even if nobody looked at them. Then Bell calculated the maximum level of correlations between a series of entangled particles that such a local, deterministic and realistic theory could support. If that threshold was violated in an experiment, then one of the assumptions behind the theory must be false.
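Bell's bound can be made concrete with a few lines of code. The sketch below, a standard textbook illustration rather than anything from the study, uses Python with NumPy to evaluate the CHSH combination of correlations for a maximally entangled photon pair: any local, deterministic, realistic theory keeps |S| at or below 2, while quantum mechanics reaches 2√2, roughly 2.83.

```python
import numpy as np

# Pauli operators and the singlet (maximally entangled) two-particle state
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def meas(theta):
    """Measurement along an angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def corr(a, b):
    """Expectation value E(a, b) for Alice's and Bob's settings."""
    return np.real(singlet @ np.kron(meas(a), meas(b)) @ singlet)

# Settings that maximize the quantum CHSH value
a0, a1 = 0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
S = corr(a0, b0) - corr(a0, b1) + corr(a1, b0) + corr(a1, b1)
print(f"|S| = {abs(S):.3f}  (local realism demands |S| <= 2)")
```

Running this prints |S| ≈ 2.828, the quantum maximum that real Bell tests approach.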

Such Bell tests have since been carried out, with a series of watertight versions performed in 2015, and they have confirmed reality's spookiness. "Quantum foundations is a field that was really started experimentally by Bell's [theorem], now over 50 years old. And we've spent a lot of time reimplementing those experiments and discussing what they mean," Steinberg says. "It's very rare that people are able to come up with a new test that moves beyond Bell."

The Brisbane team's aim was to derive and test a new theorem that would do just that, providing even stricter constraints, "local friendliness" bounds, on the nature of reality. Like Bell's theory, the researchers' imaginary one is local. They also explicitly ban superdeterminism: that is, they insist that experimenters are free to choose what to measure without being influenced by events in the future or the distant past. (Bell implicitly assumed that experimenters can make free choices, too.) Finally, the team prescribes that when an observer makes a measurement, the outcome is a real, single event in the world; it is not relative to anyone or anything.

Testing local friendliness requires a cunning setup involving two "superobservers," Alice and Bob (who play the role of Wigner), watching their friends Charlie and Debbie. Alice and Bob each have their own interferometer, an apparatus used to manipulate beams of photons. Before being measured, the photons' polarizations are in a superposition of being both horizontal and vertical. Pairs of entangled photons are prepared such that if the polarization of one is measured to be horizontal, the polarization of its partner should immediately flip to be vertical. One photon from each entangled pair is sent into Alice's interferometer, and its partner is sent to Bob's. Charlie and Debbie are not actually human friends in this test. Rather, they are beam displacers at the front of each interferometer. When Alice's photon hits the displacer, its polarization is effectively measured, and it swerves either left or right, depending on the direction of the polarization it snaps into. This action plays the role of Alice's friend Charlie measuring the polarization. (Debbie similarly resides in Bob's interferometer.)

Alice then has to make a choice: She can measure the photon's new deviated path immediately, which would be the equivalent of opening the lab door and asking Charlie what he saw. Or she can allow the photon to continue on its journey, passing through a second beam displacer that recombines the left and right paths, the equivalent of keeping the lab door closed. Alice can then directly measure her photon's polarization as it exits the interferometer. Throughout the experiment, Alice and Bob independently choose which measurement choices to make and then compare notes to calculate the correlations seen across a series of entangled pairs.
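A toy calculation makes Alice's two options concrete. The sketch below is our simplification, not the team's actual analysis (it ignores Bob and the entanglement entirely): it treats the beam displacer as a map from polarization to path, so asking "Charlie" for his result yields a random but definite outcome, whereas recombining the paths preserves the superposition and a diagonal-basis polarization measurement then succeeds with certainty.

```python
import numpy as np

# One interferometer arm, drastically simplified: the beam displacer maps
# polarization onto path, standing in for the "friend's" measurement.
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
photon = (H + V) / np.sqrt(2)          # superposition of horizontal and vertical

# Option 1, "open the lab door": measure the path right after the displacer.
p_left = abs(photon @ H) ** 2
print(f"P(left path) = {p_left:.2f}")  # 0.50: a random but definite record

# Option 2, "keep the door closed": recombine the paths, erasing the record,
# then measure polarization in the diagonal basis.
diagonal = (H + V) / np.sqrt(2)
p_diag = abs(photon @ diagonal) ** 2
print(f"P(diagonal)  = {p_diag:.2f}")  # 1.00: the superposition survived
```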

Tischler and her colleagues carried out 90,000 runs of the experiment. As expected, the correlations violated Bell's original bounds, and crucially, they also violated the new local-friendliness threshold. The team could also modify the setup to tune down the degree of entanglement between the photons by sending one of the pair on a detour before it entered its interferometer, gently perturbing the perfect harmony between the partners. When the researchers ran the experiment with this slightly lower level of entanglement, they found a point where the correlations still violated Bell's bound but not local friendliness. This result proved that the two sets of bounds are not equivalent and that the new local-friendliness constraints are stronger, Tischler says. "If you violate them, you learn more about reality," she adds. Namely, if your theory says that friends can be treated as quantum systems, then you must either give up locality, accept that measurements do not have a single result that observers must agree on or allow superdeterminism. Each of these options has profound, and to some physicists distinctly distasteful, implications.

The paper is "an important philosophical study," says Michele Reilly, co-founder of Turing, a quantum-computing company based in New York City, who was not involved in the work. She notes that physicists studying quantum foundations have often struggled to come up with a feasible test to back up their big ideas. "I am thrilled to see an experiment behind philosophical studies," Reilly says. Steinberg calls the experiment "extremely elegant" and praises the team for tackling the mystery of the observer's role in measurement head-on.

Although it is no surprise that quantum mechanics forces us to give up a commonsense assumption (physicists knew that from Bell), "the advance here is that we are narrowing in on which of those assumptions it is," says Wharton, who was also not part of the study. Still, he notes, proponents of most quantum interpretations will not lose any sleep. Fans of retrocausality, such as himself, have already made peace with superdeterminism: in their view, it is not shocking that future measurements affect past results. Meanwhile, QBists and many-worlds adherents long ago threw out the requirement that quantum mechanics prescribes a single outcome that every observer must agree on.

And both Bohmian mechanics and spontaneous collapse models already happily ditched locality in response to Bell. Furthermore, collapse models say that a real macroscopic friend cannot be manipulated as a quantum system in the first place.

Vaidman, who was also not involved in the new work, is less enthused by it, however, and criticizes the identification of Wigner's friend with a photon. "The methods used in the paper are ridiculous; the friend has to be macroscopic," he says. Philosopher of physics Tim Maudlin of New York University, who was not part of the study, agrees. "Nobody thinks a photon is an observer, unless you are a panpsychist," he says. Because no physicist questions whether a photon can be put into superposition, Maudlin feels the experiment lacks bite. "It rules something out, just something that nobody ever proposed," he says.

Tischler accepts the criticism. "We don't want to overclaim what we have done," she says. The key for future experiments will be scaling up the size of the friend, adds team member Howard Wiseman, a physicist at Griffith University. The most dramatic result, he says, would involve using an artificial intelligence, embodied on a quantum computer, as the friend. Some philosophers have mused that such a machine could have humanlike experiences, a position known as the strong AI hypothesis, Wiseman notes, though nobody yet knows whether that idea will turn out to be true. But if the hypothesis holds, this quantum-based artificial general intelligence (AGI) would be microscopic. So from the point of view of spontaneous collapse models, it would not trigger collapse because of its size. If such a test was run, and the local-friendliness bound was not violated, that result would imply that an AGI's consciousness cannot be put into superposition. In turn, that conclusion would suggest that Wigner was right that consciousness causes collapse. "I don't think I will live to see an experiment like this," Wiseman says. "But that would be revolutionary."

Reilly, however, warns that physicists hoping that future AGI will help them home in on the fundamental description of reality are putting the cart before the horse. "It's not inconceivable to me that quantum computers will be the paradigm shift to get us into AGI," she says. "Ultimately, we need a theory of everything in order to build an AGI on a quantum computer, period, full stop."

That requirement may rule out more grandiose plans. But the team also suggests more modest intermediate tests involving machine-learning systems as friends, which appeals to Steinberg. That approach is "interesting and provocative," he says. "It's becoming conceivable that larger- and larger-scale computational devices could, in fact, be measured in a quantum way."

Renato Renner, a quantum physicist at the Swiss Federal Institute of Technology Zurich (ETH Zurich), makes an even stronger claim: regardless of whether future experiments can be carried out, he says, the new theorem tells us that quantum mechanics needs to be replaced. In 2018 Renner and his colleague Daniela Frauchiger, then at ETH Zurich, published a thought experiment based on Wigner's friend and used it to derive a new paradox. Their setup differs from that of the Brisbane team but also involves four observers whose measurements can become entangled. Renner and Frauchiger calculated that if the observers apply quantum laws to one another, they can end up inferring different results in the same experiment.

The new paper is "another confirmation that we have a problem with current quantum theory," says Renner, who was not involved in the work. He argues that none of today's quantum interpretations can worm their way out of the so-called Frauchiger-Renner paradox without proponents admitting they do not care whether quantum theory gives consistent results. QBists offer the most palatable means of escape, because from the outset, they say that quantum theory cannot be used to infer what other observers will measure, Renner says. "It still worries me, though: If everything is just personal to me, how can I say anything relevant to you?" he adds. Renner is now working on a new theory that provides a set of mathematical rules that would allow one observer to work out what another should see in a quantum experiment.

Still, those who strongly believe their favorite interpretation is right see little value in Tischler's study. "If you think quantum mechanics is unhealthy, and it needs replacing, then this is useful because it tells you new constraints," Vaidman says. "But I don't agree that this is the case; many worlds explains everything."

For now, physicists will have to continue to agree to disagree about which interpretation is best or whether an entirely new theory is needed. "That's where we left off in the early 20th century; we're genuinely confused about this," Reilly says. "But these studies are exactly the right thing to do to think through it."

Disclaimer: The author frequently writes for the Foundational Questions Institute, which sponsors research in physics and cosmology, and which partially funded the Brisbane team's study.


Scientists Have Shown There’s No ‘Butterfly Effect’ in the Quantum World – VICE

Of all the reasons for wanting to time-travel (saving someone from a fatal mistake, exploring ancient civilizations, gathering evidence about unsolved crimes), recovering lost information isn't the most exciting. But even if a quest to recover the file that didn't auto-save doesn't sound like a Hollywood movie plot, we've all had moments when we've longed to go back in time for exactly that reason.

Theories of time and time-travel have highlighted an apparent stumbling block: time travel requires changing the past, even simply by adding in the time traveller. The problem, according to chaos theory, is that the smallest of changes can cause radical consequences in the future. In this conception of time travel, it wouldn't be advisable to recover your unsaved document, since this act would have huge knock-on effects on everything else.

New research in quantum physics from Los Alamos National Laboratory has shown that the so-called butterfly effect can be overcome in the quantum realm in order to unscramble lost information by essentially reversing time.

In a paper published in July, researchers Bin Yan and Nikolai Sinitsyn write that a thought experiment in unscrambling information with time-reversing operations would be expected to lead to the same butterfly effect as the one in Ray Bradbury's famous story "A Sound of Thunder." In that short story, a time traveler steps on an insect in the deep past and returns to find the modern world completely altered, giving rise to the idea we refer to as the butterfly effect.

"In contrast," they wrote, "our result shows that by the end of a similar protocol the local information is essentially restored."

"The primary focus of this work is not 'time travel'physicists do not have an answer yet to tell whether it is possible and how to do time travel in the real world, Yan clarified.

"[But] since our protocol involves a 'forward' and a 'backward' evolution of the qubits, achieved by changing the orders of quantum gates in the circuit, it has a nice interpretation in terms of Ray Bradbury's story for the butterfly effect. So, it is an accurate and useful way to understand our results."

What is the butterfly effect?

The world does not behave in a neat, ordered way. If it did, identical events would always produce the same patterns of knock-on effects, and the future would be entirely predictable, or deterministic. Chaos theory claims that the opposite, total randomness, is not our situation either. We exist somewhere in the middle, in a world that often appears random but in fact obeys rules and patterns.

Patterns within chaos are hidden because they are highly sensitive to tiny changes, which means similar but not identical situations can produce wildly different outcomes. Another way of putting it is that in a chaotic world, effects can be totally out of proportion to their causes, like the metaphor of a flap of butterfly wings causing a tornado on the other side of the world. On the tornado side of the world, the storm would seem random, because the connection between the butterfly-flap and the tornado is too complex to be apparent. While this butterfly effect is the classic poetic metaphor illustrating chaos theory, chaotic dynamics also play out in real-world contexts, including population growth in the Canadian lynx species and the rotation of Pluto's moons.

Another feature of chaos is that, even though the rules are deterministic, the future is not predictable in the long term. Since chaos is so sensitive to small variations, there are near-infinite ways the rules could play out, and we would need to know an impossible amount of detail about the present and past to map out exactly how the world will evolve.
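That sensitivity is easy to demonstrate numerically. The snippet below, our illustration rather than the researchers' code, iterates the logistic map (a standard toy model of classical chaos) from two starting points that differ by one part in ten billion; within a few dozen steps the trajectories have drifted to completely different values.

```python
# Logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.4, 0.4 + 1e-10        # two initial conditions, 1e-10 apart
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```

The printed separation grows roughly exponentially until it saturates at order one, which is exactly why long-range prediction (and naive reverse-engineering of the past) fails in a chaotic system.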

Similarly, you can't reverse-engineer some piece of information about the past simply by knowing the current and even future situations; time travel doesn't help retrieve past information, because even moving backwards in time, the chaotic system is still in play and will produce unpredictable effects.

Information scrambling

Unscrambling information that has previously been scrambled is not straightforward in a chaotic system. Yan and Sinitsyn's key discovery is that it is nonetheless possible in quantum computing to get enough information, via time reversal, to enable information unscrambling.

According to Yan, the fact that the butterfly effect does not occur in quantum realms is not a surprising result, but demonstrating information unscrambling is both novel and important.

In quantum information theory, scrambling occurs when the information encoded in each quantum particle is split up and redistributed across multiple quantum particles in the same quantum system. The scrambling is not random, since information redistribution relies on quantum entanglement, which means that the states of some quantum particles are dependent on each other. Although the scrambled result is seemingly chaotic, the information can be put back together, at least in principle, using the entangled relationships.

Importantly, information scrambling is not the same as information loss. To continue the earlier analogy: information loss occurs when a document is permanently deleted from your computer. For information scrambling, imagine cutting and pasting tiny bits of one computer file into every other file on your machine. Each file now contains a mess of information snippets. You could reconstruct the original files, if you remembered exactly which bits were cut and pasted, and did the entire process in reverse.
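That cut-and-paste picture amounts to a reversible permutation, which a few lines of Python can mimic. This is a classical analogy only (real quantum scrambling spreads information through entanglement, not by moving characters around), but it shows why knowing the recipe makes the process invertible.

```python
import random

message = list("quantum information")
perm = list(range(len(message)))
random.Random(42).shuffle(perm)              # the scrambling "recipe"

scrambled = [message[i] for i in perm]       # apply the recipe
inverse = sorted(range(len(perm)), key=perm.__getitem__)
recovered = [scrambled[i] for i in inverse]  # run the recipe in reverse

print("".join(scrambled))    # unreadable jumble
print("".join(recovered))    # "quantum information" again
```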

Physicists are interested in information scrambling for two main reasons. On the theoretical side, it's been proposed as a way to explain what happens to information sucked into a black hole. On the more applied side, it could be an important mechanism for quantum computers to store and hide information, and could produce fast and efficient quantum simulators, which are already used to perform complex experiments, including new drug discovery.

Yan and Sinitsyn fall into the second camp, and construct what they call a practically accessible scenario to test unscrambling by time travel. This scenario is still hypothetical, but it explores the mathematics of the actual quantum processor used by Google to demonstrate quantum supremacy in 2019.

Yan says: "Another potential application is to use this effect to protect information. A random evolution on a quantum circuit can make the qubit robust to perturbations. One may further exploit the discovered effect to design protocols in quantum cryptography."

The set-up

In Yan and Sinitsyn's quantum thought experiment, Alice and Bob are the protagonists. Alice is using a simplified version of Google's quantum processor to hide just one part of the information stored on the computer (called the central qubit) by scrambling this qubit's state across all the other qubits (called the qubit bath). Bob is cast as the intruder, much like a malicious computer hacker. He wants the important information originally stored on the central qubit, now distributed across entangled quantum particles in the bath.

Unfortunately, Bob's hack, while successful in getting the information he wanted, leaves a trail of destruction.

"If her processor has already scrambled the information, Alice is sure that Bob cannot get anything useful," the authors write. However, Bob's measurement changes the state of the central qubit and also destroys all quantum correlations between this qubit and the rest of the system.

Bob's method of information theft has altered the computer state so that Alice can also no longer access the hidden information. In this case, the damage occurs because quantum states contain all possible values they could have, with assigned probabilities for each value, but these possibilities (represented by the wave function) collapse down to just one value when a measurement is taken. Quantum computing relies on unmeasured quantum systems to store even more information in multiple possible states, and Bob's intrusion has totally altered the computer system.

Reversing time

Theoretically, the behaviour of a quantum system moving backwards in time can be demonstrated mathematically using what's called a time-reversed evolution operator, which is exactly what Alice uses to de-scramble the information.

Her time-reversal is not actually time travel the way we understand it from science fiction; it is literally a reversal of time's direction: the system evolves backwards following whatever dynamics are in play, rather than Alice herself revisiting an earlier time. If the butterfly effect held in the quantum world, then this backwards evolution would actually increase the damage Bob had caused, and Alice would only be able to retrieve the hidden information if she knew exactly what that damage was and could correct her calculations accordingly.

Luckily for Alice, quantum systems behave totally differently from non-quantum (classical or semiclassical) chaotic systems. What Yan and Sinitsyn found is that she can apply her time-reversal operation and end up at an "earlier" state that will not be identical to the initial system she set up, but that also will not have magnified the damage that occurred later. Alice can then reconstruct her initial system using a method of quantum unscrambling called quantum state tomography.
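The heart of that claim is reproducible with ordinary linear algebra. The sketch below is our simplified stand-in for the paper's protocol, written in Python with NumPy: a Haar-random unitary replaces Google's gate-based circuit, the "central" qubit is scrambled into a six-qubit bath, Bob projectively measures the central qubit, and Alice then applies the inverse unitary. Averaging the recovered qubit over many runs (a crude form of tomography) shows a Bloch vector that still points along the hidden state's direction, merely shrunk toward the center, rather than being randomized as a classical butterfly effect would suggest.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7                              # 1 central qubit plus a 6-qubit bath
dim = 2 ** n
alpha, beta = 0.6, 0.8             # the hidden state: 0.6|0> + 0.8|1>

def bloch(rho):
    """Bloch vector (x, y, z) of a single-qubit density matrix."""
    return np.array([2 * rho[0, 1].real,
                     2 * rho[1, 0].imag,
                     (rho[0, 0] - rho[1, 1]).real])

trials = 50
rho_avg = np.zeros((2, 2), dtype=complex)
for _ in range(trials):
    # Haar-random unitary via QR decomposition: a stand-in for a scrambling circuit
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    u = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    psi = np.zeros(dim, dtype=complex)
    psi[0], psi[dim // 2] = alpha, beta   # central qubit is the leading bit
    psi = u @ psi                         # forward evolution: information scrambles

    # Bob's intrusion: a projective measurement of the central qubit
    psi = psi.reshape(2, dim // 2)
    p1 = np.sum(np.abs(psi[1]) ** 2)
    outcome = int(rng.random() < p1)
    psi[1 - outcome] = 0
    psi = psi.reshape(dim) / np.linalg.norm(psi)

    psi = u.conj().T @ psi                # Alice's time-reversed evolution

    mat = psi.reshape(2, dim // 2)        # reduced state of the central qubit
    rho_avg += (mat @ mat.conj().T) / trials

target = np.outer([alpha, beta], [alpha, beta])
print("original  Bloch vector:", np.round(bloch(target), 3))
print("recovered Bloch vector:", np.round(bloch(rho_avg), 3))
```

With these numbers the recovered vector comes out roughly half the length of the original but parallel to it, which is the sense in which the local information survives Bob's damage and can be reconstructed by tomography.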

What this means is that a quantum system can effectively heal and even recover information that was scrambled in the past, without the chaos of the butterfly effect.

"Classical chaotic evolution magnifies any state damage exponentially quickly, which is known as the butterfly effect," explain Yan and Sinitsyn. "The quantum evolution, however, is linear. This explains why, in our case, the uncontrolled damage to the state is not magnified by the subsequent complex evolution. Moreover, the fact that Bob's measurement does not damage the useful information follows from the property of entanglement correlations in the scrambled state."

Hypothetical though this scenario may be, the result already has a practical use: verifying whether a quantum system has achieved quantum supremacy. Quantum processors can simulate time-reversal in a way that classical computers cannot, which could provide the next important test for the quantum race between Google and IBM.

So, while time travel is still not in the cards, the quantum world continues to mess with our classical conception of how the world evolves in time, and pushes the limits of computing information.


What’s the point: Ansible, Datadog, Amazon, Lens, Rust, and DeepMind – DevClass

The team behind Red Hat's IT automation tool Ansible is on track for the 2.10 release on September 22nd, and has just finished work on the base component for the upcoming version. Ansible 2.10 is the first to have the Ansible engine, which is made up of some core programs (ansible-galaxy, ansible-test, etc.), a subset of modules and plugins, and some documentation, in a separate ansible-base repository.

The rest of the plugins and modules have been pushed into a variety of collections, a format for bundling Ansible artifacts. Collections are independently developed and updated, with some sought-after ones becoming bundled with ansible-base for the final Ansible package. To make sure moved components won't break setups, Ansible 2.10 comes with appropriate routing data.

At Datadog's yearly user conference last week, the monitoring company introduced some additions to its portfolio that are well worth a look. One of the most sought-after enhancements seems to be the Datadog mobile app for iOS and Android devices. The application is meant to provide on-call workers with dashboard and alert access. It also allows users to check the new Incidents UI, which grants a central overview of the state of all incidents. Other enhancements to the Datadog platform include investigation dashboards and threat intelligence for Security Monitoring, and compliance monitoring.

A good eight months after introducing devs interested in quantum computing to its Braket service, AWS has decided it's time to make it generally available. The product aims to support researchers by providing them with a development environment to explore and build quantum algorithms, test them on quantum circuit simulators, and run them on different quantum hardware technologies. Amazon Braket comes packed with pre-built quantum computing algorithms, though implementing some from scratch is promised to be an option as well, and simulators for testing and troubleshooting different approaches.
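For a flavor of what working with the service looks like, here is a minimal sketch using the Braket Python SDK's bundled local simulator; it builds a two-qubit Bell-pair circuit and samples it. (Targeting managed quantum hardware instead would mean swapping LocalSimulator for an AwsDevice with a device ARN; this snippet sticks to what runs offline.)

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a Bell pair: Hadamard on qubit 0, then CNOT from qubit 0 to qubit 1
circuit = Circuit().h(0).cnot(0, 1)

# Run on the local simulator that ships with the SDK
result = LocalSimulator().run(circuit, shots=1000).result()
print(result.measurement_counts)   # roughly half '00' and half '11'
```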

Mirantis, recent home of Docker Enterprise, has continued on its cloud native acquisition journey by buying Kubernetes integrated development environment Lens from its authors. Lens is an MIT-licensed project which was launched in March 2020 and is supposed to run on macOS, Windows, and Linux. It was originally developed by Kontena, whose team also became part of Mirantis earlier this year. In its announcement, Mirantis promised to keep Lens free and open source and to invest in the future development of the tool.

Lovers of programming language Rust might have started to worry given the string of Mozilla layoffs announced last week. The language team therefore took to Twitter to assure users that Rust isn't in existential danger, promising to share more information on the topic in the coming weeks.

Developers working with just-in-time compiler JAX in their machine learning projects can now add two more helpers to their open-source toolbelt. Optax and Chex both stem from Google's DeepMind team and are meant to support users in properly using JAX, which funnily enough is also a Google research project.

Chex includes utils to instrument, test, and debug JAX code in order to make it more reliable. Meanwhile, Optax was dreamt up to provide simple, well-tested, efficient implementations of gradient processing and optimisation approaches. Both projects can be found on GitHub, where they are available under an Apache-2.0 license.
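As a rough taste of how the two libraries slot into a JAX workflow, the sketch below (our example, not DeepMind's documentation) runs a tiny gradient-descent loop with an Optax optimizer while a Chex assertion guards a tensor shape:

```python
import jax
import jax.numpy as jnp
import optax
import chex

def loss(params):
    # Toy objective: pull the parameter toward 3.0
    return jnp.sum((params - 3.0) ** 2)

params = jnp.zeros((1,))
optimizer = optax.adam(learning_rate=0.1)
opt_state = optimizer.init(params)

for _ in range(200):
    chex.assert_shape(params, (1,))                  # runtime sanity check
    grads = jax.grad(loss)(params)                   # JAX computes the gradient
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)    # Optax applies it

print(params)   # should be close to [3.0]
```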


Quantum Computing Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2027 – Bulletin Line

New Jersey, United States: This detailed market research covers the growth potential of the Quantum Computing Market, which can help stakeholders understand the key trends and prospects of the Quantum Computing market and identify growth opportunities and competitive scenarios. The report also focuses on data from other primary and secondary sources and is analyzed using a variety of tools. This will help investors better understand the growth potential of the market and identify its scope and opportunities. This analysis also provides details for each segment of the global Quantum Computing market.

The report takes into account the most recent event hitting the market: the COVID-19 outbreak. This outbreak brought about a dynamic change in the industry and the overall economic scenario. The report covers the analysis of the impact of the COVID-19 pandemic on market growth and revenue. It also provides an in-depth analysis of the current and future impacts of the pandemic and a post-COVID-19 scenario analysis.

The report covers extensive analysis of the key market players in the market, along with their business overview, expansion plans, and strategies. The key players studied in the report include:

The market is further segmented on the basis of types and end-user applications. The report also provides an estimation of the segment expected to lead the market in the forecast years. Detailed segmentation of the market based on types and applications along with historical data and forecast estimation is offered in the report.

Furthermore, the report provides an extensive analysis of the regional segmentation of the market. The regional analysis covers product development, sales, consumption trends, regional market share, and size in each region. The market analysis segment covers forecast estimation of the market share and size in the key geographical regions.

The report further studies the segmentation of the market based on product types offered in the market and their end-use/applications.

Quantum Computing Market, By Offering

Consulting solutions
Systems

Quantum Computing Market, By Application

Machine Learning
Optimization
Material Simulation

Quantum Computing Market, By End-User

Automotive
Healthcare
Space and Defense
Banking and Finance
Others

On the basis of regional segmentation, the market is bifurcated into the major regions of North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa. The regional analysis further covers country-wise bifurcation of the market and key players.

The research report offered by Verified Market Research provides an updated insight into the global Quantum Computing market. The report covers an in-depth analysis of the key trends and emerging drivers of the market likely to influence industry growth. Additionally, the report covers market characteristics, competitive landscape, market size and growth, regional breakdown, and strategies for this market.

Highlights of the TOC of the Quantum Computing Report:

Overview of the Global Quantum Computing Market

Market competition by Players and Manufacturers

Competitive landscape

Production, revenue estimation by types and applications

Regional analysis

Industry chain analysis

Global Quantum Computing market forecast estimation

This Quantum Computing report covers vital elements such as market trends, share, size, and aspects that facilitate the growth of the companies operating in the market, helping readers implement profitable strategies to boost the growth of their business. This report also analyses the expansion, market size, key segments, market share, application, key drivers, and restraints.

Key Questions Addressed in the Report:

What are the key driving and restraining factors of the global Quantum Computing market?

What is the concentration of the market, and is it fragmented or highly concentrated?

What are the major challenges and risks the companies will have to face in the market?

Which segment and region are expected to dominate the market in the forecast period?

What are the latest and emerging trends of the Quantum Computing market?

What is the expected growth rate of the Quantum Computing market in the forecast period?

What strategic business plans and steps have been taken by key competitors?

Which product type or application segment is expected to grow at a significant rate during the forecast period?

What are the factors restraining the growth of the Quantum Computing market?

Thank you for reading our report. The report is available for customization based on chapters or regions. Please get in touch with us to know more about customization options, and our team will ensure you get the report tailored according to your requirements.

About us:

Verified Market Research is a leading global research and consulting firm serving more than 5,000 customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.

Contact us:

Mr. Edwyne Fernandes

US: +1 (650)-781-4080
UK: +44 (203)-411-9686
APAC: +91 (902)-863-5784
US Toll-Free: +1 (800)-7821768

Email: [emailprotected]


Designing the computers of tomorrow – The Science Show – ABC News

Robyn Williams: The Science Show on RN, and time once more for some quantum guitar.

[Music]

Professor David Reilly with one of his pieces, and we'll hear about his diamonds in a minute. And from Donna Strickland, who was only the third woman in the world to win a Nobel Prize for physics.

But, before we do, something from The Money, the program presented by Richard Aedy, which last week confirmed what we've just heard from Jayne Thompson.

Phil Morle: For one reason or another this country has a concentration of some of the most talented, globally in-demand quantum computing experts, and there is an opportunity right now today to build the Silicon Valley of quantum computing and to do that here in Australia, and that's truly the next generation of computing, which will unfold over the next ten years and live for decades after that.

And the other side of that same equation is the great migration out of Silicon Valley, which is happening. My brother, for example, works at Facebook where everyone has been told they don't need to come back and work in the office, and so he doesn't live in Silicon Valley anymore. That's one result of the pandemic. So I think the world of innovation is afoot, it's in motion, it's going to land different to where it was in 2019.

Richard Aedy: Yes, how big is your fund? Are you able to give me a kind of dollar amount?

Phil Morle: Yes, our first fund is $240 million.

Richard Aedy: The implication is first fund. Are there going to be more?

Phil Morle: That's right, we are getting close to closing our second fund and that will be the same sort of quantum.

Richard Aedy: So, overall, Phil, you actually sound not buoyant but definitely optimistic, despite what we've been going through with the pandemic.

Phil Morle: I suppose I am. I am worried, nevertheless, and let's say vigilant. I'm vigilant, I'm watching very, very carefully. I meet with our start-up founders, the CEOs of our companies every week or two to say what's happening, what's changing, what do we need to know, how do we adapt. So there's very real-time adapting happening, and anything could happen in the weeks and months to come, but there is still a massive planet with lots of people on it with an endless amount of problems to solve which companies can solve, and there is no reason why the venture supported start-up world can't be bigger than it has ever been.

Richard Aedy: Phil Morle is a partner with Main Sequence Ventures.

Robyn Williams: Richard Aedy from The Money program on RN every Thursday, 5:30. Yes, Australia certainly has a reputation for quantum work and needs to prepare a qualified workforce.

Now let's meet that guitarist at the University of Sydney, David Reilly. He also works with diamonds and has a position with Microsoft.

First of all, you haven't brought a guitar with you.

David Reilly: I should have done so.

Robyn Williams: You should have done so because you remind me of the kind of Brian May of Australian physics.

David Reilly: Not quite as tall or as talented.

Robyn Williams: He's amazing, isn't he. What do you play?

David Reilly: At the moment I really can't get the Fender Stratocaster out of my hand, but it depends on the style of music.

Robyn Williams: I remember your playing in fact at the opening of this department, the nano research outfit five years ago or four years ago, whatever it was. But have you brought any tiny diamonds with you?

David Reilly: I have not. Although, they are probably around on the floor and in the air to some very small amount.

Robyn Williams: They are that small?

David Reilly: Yes, they're tiny, nanometres in size. The ones that we focus on are synthetic.

Robyn Williams: And these are ones that are in the body and they are spotted by the MRI, in other words the machine that looks through you to see what's going on inside the body. But what do they tell you as a person who wants to find out what's wrong with the body or not?

David Reilly: Well, the motivation is really trying to track something in the body. We wanted to make a lighthouse, and what you attach that lighthouse to, well, that's really at the discretion of medical research. But, for instance, if you wanted to know where certain drugs went, maybe chemotherapy drugs, anyone who has been in a very challenging circumstance of having to undergo chemotherapy knows that it's a horrific process, in part because those drugs go everywhere and they attack healthy tissue as much as they do cancerous tissue. A lot of the reason for that just blanket approach to treatment is because there are still a lot of open fundamental questions about how do we target certain types of pharmaceuticals to certain particular functions or parts in the body. And from a physics point of view, I mean, I'm obviously a physicist not a medical researcher, but it's a physics problem, how do you create a beacon or a lighthouse that is going to be useful in MRI, not require you to be opened up, not require us to go and biopsy an organ but just to take a somewhat regular MRI, and then have certain regions light up where the drugs are or where they aren't or cancer is or cancer isn't. So that was the long-term motivation, a really challenging physics problem, how to make diamond effectively light up in an MRI.

Robyn Williams: Does it work?

David Reilly: Yeah, it does, we've developed the technique to the point it works in mice, and it is now really moving out of the physics lab into that wider area where it's going to have impact in biomedical research.

Robyn Williams: Normally with various machines you can tell whether there is a tumour there, how extensive it is. You're looking at something rather small, but what kind of things are you being able to spot that the normal X-ray-type investigation can't?

David Reilly: The history of where this came from maybe gives you a better understanding of what we're trying to do. I read a paper just... I remember I think I was waiting somewhere, it wasn't to see a doctor, it was something like that, I was reading something and I came across an article that said that chemotherapy drugs are ferried around the body on a substrate, like on a raft, and that raft happened to be nano-diamond because it's relatively inert and doesn't react and is somewhat safe in small concentrations. And I thought that's really interesting, they're just using diamond purely for the reason that it's inert and it doesn't react with anything. A physics point of view tells you that diamond has other remarkable properties: it can be optically active, and it's also possible to basically program its nuclear spins, the little tiny bar-magnets that live in the inside of the atom, orient them such that it can give you an image and a signature in an MRI. So it's all about then attaching to something else, it goes along for the ride, it's a big lightbulb that will light up whatever it is that it's attached to.

Robyn Williams: This nano outfit that you are in also of course works on quantum computing. Now, without making you cross I hope, I usually think of quantum computing not just at the University of New South Wales and Michelle Simmons, but also with silicon. In what way is your investigation different?

David Reilly: Yes, silicon is a very interesting material, and the effort that you're describing has been around now for over 20 years, and in fact my PhD is from that activity at the University of New South Wales, in fact before it just started back in the late '90s. Silicon is in many ways a very obvious choice in which to make what we call qubits, the fundamental building blocks of quantum information. And the reason that they are an obvious choice is because the name of the game when it comes to quantum information is trying to protect it. It's very fragile, it wants to become regular, boring classical information all the time.

And to preserve these exotic or almost very counterintuitive properties, one has to preserve the quantum nature. So the name of the game is protect it. And silicon is a material that when it comes to the electron spin or the nuclear spin, again that is the little bar-magnet that goes along with the electron or the nucleus in an atom, silicon is a material that is extremely free of uncontrolled bar-magnets, uncontrolled spin. So if you then intentionally put a spin in silicon, that's great, because that spin can encode information and there are no other spins in the system that can lead to a loss of quantum information.

However, the challenge is, and this is something I think over the last 20 years we've realised, is that if you think of a line where you can choose between really protected systems where the information is stored in a way that is isolated, like silicon, and up the other end of the line is controllable, I can manipulate it really quickly, I can interact with it very strongly, and the challenge is how do you create systems that are both highly protected from the environment but not highly protected from the control because I want to be able to manipulate it. And that did my head in, thinking about that problem. You realise that there is no escaping it.

You can choose your flavour of qubit, it could be spins in silicon, highly protected, but a bit challenging to control, pretty slow and so on, or qubits that want to interact with everything, including the environment, but they can also be controlled very effectively and very quickly. You know, how do you break out of that double-edged sword? That was what inspired me to start to work on very different systems. And the work that's happening here at the University of Sydney is really about trying to explore new types of qubits that break free of this limitation.

Robyn Williams: In different materials?

David Reilly: Different materials, but totally different principles, totally fundamentally different ways of storing and manipulating quantum information. So we are trying to build what we call a topological qubit, that is a system that uses topology, the branch of mathematics associated with global properties of shapes, we want to use those principles to protect the information and break free of this challenge of protected but controllable. So, very different.

Robyn Williams: Hugh Bradlow, the president of the Academy of Technological Sciences and Engineering, famously said, and we broadcast this on The Science Show, that there are many ways of tackling this gigantic field of quantum computing. And if you imagine a horse race, it's one where you will have not just one winner, there will be a whole stream. And what you're doing is being supported by Microsoft, which shows that they've got tremendous faith in what you are accomplishing with your search for qubits. What's the relationship built on, what does it mean?

David Reilly: There's a whole range of interesting things to unpack there. The first is I would agree with Hugh that we don't have... we, the world, humankind, does not yet, in my view, possess a technology that's going to allow us to build a quantum computer, not one of scale that's going to be significant enough to do impactful things, we don't have that technology yet.

We need to go back to the drawing board and really now we understand a lot of these ideas better, that's Microsoft's view, and in some ways it's actually a little bit pessimistic because I think we as a group within the company over a number of years are working on these different systems. You know, many of the people that are part of Microsoft's effort, including myself, started in spin qubits, in silicon or in other materials, or superconducting technology, the different flavours of qubit, and after a decade or so in that, you realise there needs to be other ways of doing it.

And so it's a collection of people who are actually a little bit pessimistic about the approaches that are out there, let's figure out how to do it right, that's going to allow us to scale, build a machine of sufficient complexity and size that it can go after. In some ways Microsoft is not interested in building a quantum computer, it's interested in the applications and the impact of such a machine. So we want to build a useful machine.

Robyn Williams: And it's going to change the world, it's a big deal.

David Reilly: Exactly, and that's what our sights are set on, it's not about for us a physics experiment. For me personally that's very interesting but I recognise if you're going to touch people in the street, if you're going to make an impact in people's lives beyond a physics experiment, then you have to build a very different machine, one that is sufficiently complex and large-scale that it can solve really hard problems.

Robyn Williams: I'm sure in your late-night thoughts you've had dreams about the ways in which it's going to be if everything goes right. What are some of those dreams made of, what kind of speculation can you have, not simply just, if you like, more secure bankcards, but our lives, how will they be affected?

David Reilly: You can spend a lot of time dreaming about that. There are things we see right now with the technology as we understand it, even though it doesn't exist at the level that you can actually start to use it. One can imagine using it for obviously a range of things in what people call quantum chemistry, a lot of designing of, again, pharmaceuticals, catalysts, chemicals that are needed in manufacturing, dyes and so on, carbon capture. Many of those types of applications will benefit I think from having a machine of sufficient scale, a quantum computer that can really solve some of the intricacies of quantum chemistry problems.

But the truth is we really don't know, and that sounds bizarre because people think why would you put such a huge effort into building something you don't even know what it's good for. And the answer to that I think is a little bit subtle. On the one hand we can identify applications, but for me a quantum computer changes the fundamental logic, it's totally different logic to how the machines that we carry around in our pockets work. And I think when you change that underlying fundamental aspect of how computing works, it would be very surprising if that didn't also open up all kinds of other applications. I think we can look back in history and see that many, many times. I think the most exciting applications will be the ones we can't dream about and envisage.

Robyn Williams: Just to give you a tiny bit of story which you can bounce off, once I was at a conference and a little old man was looking at an exercise machine, and he thought it would be good for his back and he went off to get his credit card. And I said to the woman running the booth, I said, 'Do you know who that was? That was one of the three guys who got the Nobel prize for inventing lasers. And this was something for which apparently there was no use, laser, organised light. Okay, his credit card is going to be read by you by a laser beam.' In other words, you have something which is so huge, like computers have become so huge, transformed the world. In other words, jobs, in other words who knows what.

David Reilly: Yes, that's exactly right, and transistors are also another story: there are still many people alive who lived through that era and know firsthand about the discussions where people said: what are we going to do with this stuff? The transistor, the original motivation was to make repeater, telephone repeater stations more robust, serviceable, needing attention less frequently, to get away from vacuum tubes that were always blowing. But as they realised they were holding something that was also very small: what are we going to do with that? And here we are, and it's not that long ago, 30, 40, 50 years, and now we are carrying 10 billion of these things around in everybody's pocket and doing things that we could never imagine.

So humans are pretty bad I think at predicting the future, but you've got to believe if you change the fundamental way in which you're doing logic, the logic that you learn in kindergarten, in preschool, whatever, one plus one equals two. Imagine if, well, actually there are some other laws here, some other fundamental mathematics that you can tap into, of course that's going to lead to many other applications, and we are getting a glimpse of those now but I think it's really going to be exciting over the next 10, 20 years to just see how the world changes because we've changed the fundamental logic.

Robyn Williams: A final question, a very short one; have you recorded an album, as they used to call it, done live gigs?

David Reilly: Not for some time. I do have fun recording at home, and in this day and age you can easily do that and plug in. Your laptop is a recording studio, it's a fascinating thing to me actually because talking about vacuum tubes and transistors, I've got to tell you this, this really does amuse me more than keep me up at night, but the idea that for aficionados of sound and music and guitars and amplifiers, it's the vacuum tube that sounds so good, and people spend huge amounts of money to buy amplifiers built from vacuum tubes, as opposed to transistors. But today you can take your laptop with 10 billion transistors, run an operating system and a whole range of high-level applications and software, and then you can dial up the sound with those 10 billion transistors in your CPU, you can dial up the sound of one vacuum tube. So here we are emulating with all of this complex software the sound of 50 years ago, and it's remarkable how history repeats itself in some very weird way like that.

Robyn Williams: Professor David Reilly at the University of Sydney's Nano Centre.

Read the original here:
Designing the computers of tomorrow - The Science Show - ABC News

ISQED’21 Accepts Papers for the 2021 Event – Send2Press Newswire

SAN JOSE, Calif., Aug. 18, 2020 (SEND2PRESS NEWSWIRE) The International Symposium on Quality Electronic Design (ISQED) today announced that it has started to accept papers for the 2021 event. ISQED is an internationally reputable conference, sponsored by IEEE CASS, IEEE EDS, and the IEEE Reliability Society, in cooperation with ACM/SigDA.

PHOTO CAPTION: 2021 Conference theme is AI/ML in Electronic Design, Quantum Computing, Hardware Security, 3D Integration, and IoT.

To be considered for presentation and publication by IEEE, authors are asked to submit their articles before Oct. 2, 2020. The conference is planned to be held in April 2021 in a combination of physical and virtual formats in Santa Clara, California, USA.

The final format will be announced later when the COVID-19 situation becomes clear.

A partial list of topics of interest includes:

Hardware and System Security

Electronic Design Automation Tools and Methodologies

Design Test and Verification

Emerging Device and Process Technologies and Applications

Circuit Design, 3D Integration and Advanced Packaging

System-level Design and Methodologies

Cognitive Computing Hardware

Submission of Papers (Regular, WIP, Special Sessions)

For any information about the submission process, refer to: https://www.isqed.org/English/Conference/Call_for_Papers.html

About ISQED

ISQED is the premier interdisciplinary and multidisciplinary electronic design conference; it bridges the gap among electronic/semiconductor ecosystem members providing electronic design tools, integrated circuit technologies, semiconductor technology, packaging, assembly & test to achieve total design quality. Current and all past ISQED events have been held with the technical sponsorship of IEEE CASS, IEEE EDS, and the IEEE Reliability Society.

All past conference proceedings and papers have been published in the IEEE Xplore digital library and indexed by Scopus.

For further information please contact ISQED by sending email to isqed2021@gmail.com.

News Source: ISQED

Read the original here:
ISQED'21 Accepts Papers for the 2021 Event - Send2Press Newswire

Former Trump staffer who joined Google is now on leave to support Biden – CNBC

Google security lead Miles Taylor takes leave ahead of Presidential election.

Miles Taylor, President Donald Trump's former homeland security chief of staff and current Google staffer who became a controversial figure within the company for his work on Trump's child separation policies, is taking a leave from Google until after the presidential election.

Taylor appeared in an anti-Trump ad Monday and backed Joe Biden for president, drawing the president's ire on Twitter on Tuesday morning. Taylor will use his time off to engage in more political activity, according to a source familiar with the situation. When he returns to Google, he'll continue in a new role he recently took.

"What we saw week in and week out, after 2 years in that administration was terrifying," Taylor said in the video released Monday. "The president wanted to exploit the Department of Homeland Security for his own political purposed and to fuel his own agenda."

Last fall, Google hired Taylor, who served as chief of staff to Kirstjen Nielsen, the former secretary of the Department of Homeland Security, and to now-acting DHS Secretary Chad Wolf. Taylor joined Google to work on government affairs and national security issues, but the hire caused a stir among lawmakers and employees, some of whom had actively protested the company's relationship with the government and its policies.

In July, the company and Taylor agreed on transitioning him off of a general national security role and into a position that focuses on security policy for things like artificial intelligence, the source told CNBC. Last week, they agreed he'd take a leave of absence so that he could pursue personal political activity ahead of the election, the source said. His leave of absence is effective until Nov. 4, the day after Election Day.

Google declined to comment.

Since Taylor joined the company, employees have expressed concern about him at all-hands meetings, where executives defended his hiring and downplayed his involvement in DHS policies. That came a few months after more than 1,000 Google employees signed a petition demanding that Google abandon bids or potential bids for a U.S. Customs and Border Protection contract. A couple of months later, Democratic congressional leaders scolded Google CEO Sundar Pichai for hiring Taylor, whose team had supported Trump's executive order to ban travel to the U.S. from Muslim countries and DHS' family separation policies.

In an op-ed in The Washington Post published Monday, Taylor tried to distance himself from the Trump administration's decision to separate immigrant families entering the country, despite the fact that he worked closely on the policy, as BuzzFeed News previously reported.

On Tuesday, Trump attacked Taylor on Twitter, claiming he did not know his former staffer. Pictures of the two men together quickly surfaced on Twitter in response.

Just before Taylor took leave last week, he changed roles to U.S. lead for advanced technology and security strategy, where he'll focus on driving Google's agenda related to AI, quantum computing, cybersecurity and emerging digital threats, according to the source. Previously, his role was overseeing Google's national security policy efforts, including the company's work related to defense, law enforcement and security.

Taylor's transition comes ahead of a tumultuous U.S. election, in which Trump is seeking a second term against presumptive Democratic nominee Joe Biden. Tech companies and their staffers have found themselves in the line of fire as both sides, particularly the Trump administration, have launched public attacks against them in recent weeks, ranging from allegations of bias to antitrust violations.

Among Taylor's claims Monday were that Trump withheld funds from the Federal Emergency Management Agency for Californians dealing with the fallout from wildfires because the state didn't support him and because it wasn't a base for him "politically." He also claimed Trump wanted to push the boundaries on border policies for the purpose of scaring off potential violators. "He didn't want us to tell him it was illegal anymore because he knew and these are his words he had 'magical authority,'" he is seen saying in the video.

"Even though I'm not a Democrat and disagree on key issues, I'm confident that Joe Biden will protect the country and I'm confident he won't make the same mistakes as this president," Taylor's video concluded.

See the original post here:
Former Trump staffer who joined Google is now on leave to support Biden - CNBC

Top 3 Applications Of Machine Learning To Transform Your Business – Forbes

We all hear about artificial intelligence and machine learning in everyday conversations; the terms are becoming increasingly popular across businesses of all sizes and in all industries. We know AI is the future, but how can it be useful to businesses today? Having encountered numerous organisations that are confused about the actual benefits of machine learning, AI experts agree it is necessary to outline its key applications in simple terms that most companies can relate to.

Here are the three most impactful Machine Learning applications that can transform your business today.

Machine learning can be used to automate a host of business operations, such as document processing, database analysis, system management, employee analytics, spam detection, and chatbots. A lot of manual, time-consuming processes can now be replaced, or at least supported, by off-the-shelf AI solutions. Companies with unique requirements, those looking to create or maintain a competitive advantage, or those that prefer to retain control of the intellectual property (IP) should consider reaching out to end-to-end service providers that can assist in planning, developing and implementing bespoke solutions to meet these business needs.
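
As a small illustration of the kind of off-the-shelf automation mentioned above, here is a minimal spam-detection sketch using scikit-learn. The example messages are made up, and this is a sketch of the general technique rather than any vendor's product.

```python
# Minimal spam-detection sketch with scikit-learn; toy data, purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your reward today",
            "meeting moved to 3pm", "please review the attached invoice"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)
print(clf.predict(["free reward, claim now"]))  # likely [1], i.e. spam
```

A real deployment would train on thousands of labelled messages and monitor the false-positive rate, but the pipeline shape stays the same.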

The reason machine learning often ends up performing better than humans at a single task is that it can very quickly improve its performance by analysing vast amounts of historical data. In other words, it can learn from its own mistakes and optimise its performance very quickly and at scale. There is no ego and no hard feelings involved, simply objective analysis, enabling optimisation to be achieved with high efficiency and effectiveness. Popular examples of optimisation with machine learning can be found in product quality control, customer satisfaction, storage, logistics, supply chain and sustainability. If you think something in your business is not running as efficiently as it could and you have access to data, machine learning may just be the right solution.

Companies are inundated with data these days. Capturing data is one thing, but navigating and extracting value from big, disconnected databases containing different types of data on different areas of your organisation adds complexity and cost, reduces efficiency and impedes effective decision-making. Data management systems can help create clarity and put your data in order. You would be surprised how much valuable information can then be extracted from your data using machine learning. Typical applications in this space include churn prediction, sales forecasting, customer segmentation, personalisation, and predictive maintenance. Machine learning can teach you more about your organisation in a month than you have learned over the past year.

If you think one of the above applications might be helpful to your business, now is a good time to start. As reluctant as companies may be to invest in innovation and new technologies, especially amid the difficulties caused by Covid-19, it is important to recognise that the aforementioned applications can bring long-term benefits to your business, such as cost savings, increased efficiency, improved operations and enhanced customer value. Get started and become a leader in your field thanks to the new machine learning technologies available to you today.

Originally posted here:
Top 3 Applications Of Machine Learning To Transform Your Business - Forbes

NeuralCam Launches NeuralCam Live App Using Machine Learning to Turn iPhones into Smart Webcams – MarkTechPost

This is an era of virtual learning, when interviews, education and more are conducted from home through laptops and the internet. Camera clarity for video calls, whether for work or class, is a primary need of the hour, yet laptop webcams are still limited to 720p or 1080p resolution with low color accuracy and poor low-light performance. Understanding the vast market for this, NeuralCam has introduced an app that converts an Apple iPhone into a smart webcam. The best part of the deal is that it's free.

The NeuralCam Live platform uses machine learning to generate a high-quality video stream for your computer using the iPhone's front camera. The prerequisites are installing the iOS app and a Mac driver; the iPhone then sends a live stream to your computer with features such as video enhancement. Video processing is handled on the device rather than on the computer. The company is also building an iOS SDK that will let third-party video-calling and streaming apps control the enhancement features.

The original post lists the main attractions of NeuralCam Live, along with a few shortcomings at present.

NeuralCam has planned a roadmap to overcome these drawbacks. It also plans to release Windows support soon and to serve industries like education, health care, and entertainment.

Originally posted here:
NeuralCam Launches NeuralCam Live App Using Machine Learning to Turn iPhones into Smart Webcams - MarkTechPost

Machine Learning Just Classified Over Half a Million Galaxies – Universe Today

Humanity is still a long way from a fully general artificial intelligence system. For now at least, AI is particularly good at certain specialized tasks, such as classifying cats in videos. Now it has a new skill set: identifying spiral patterns in galaxies.

As with all AI skills, this one started with categorized data. In this case, that data consisted of images of galaxies taken by the Subaru Telescope on Mauna Kea, Hawaii. The telescope is run by the National Astronomical Observatory of Japan (NAOJ) and has identified upwards of 560,000 galaxies in the images it has taken.

Only a small subset of those half a million were manually categorized by scientists at NAOJ. The scientists then trained a deep-learning algorithm to identify galaxies that contained a spiral pattern, similar to the Milky Way. When applied to a further subset of the half a million galaxies (known as a test set), the algorithm accurately classified 97.5% of the galaxies surveyed as either spiral or non-spiral.

The research team then applied the algorithm to the full 560,000 galaxies identified in the data so far. It classified about 80,000 of them as spiral, leaving about 480,000 as non-spiral. Admittedly, some galaxies that are actually spirals may not have been identified as such by the algorithm, since they might only be visible edge-on from Earth's vantage point. In that case, even human classifiers would have a hard time correctly identifying a galaxy as a spiral.
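
For readers curious what such a classifier looks like in code, below is a minimal sketch of the train/test workflow described above, assuming a Keras-style convolutional network and hypothetical directories of labelled galaxy cutouts. It is not the NAOJ team's actual code.

```python
# Spiral vs. non-spiral sketch; directory paths and sizes are hypothetical.
import tensorflow as tf

IMG = (64, 64)
train = tf.keras.utils.image_dataset_from_directory(
    "galaxies/train", image_size=IMG, batch_size=128)  # manually labelled subset
test = tf.keras.utils.image_dataset_from_directory(
    "galaxies/test", image_size=IMG, batch_size=128)   # held-out test set

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # spiral vs. non-spiral
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=10)
model.evaluate(test)  # the team reports roughly 97.5% test accuracy
# Once accuracy is acceptable, model.predict() can label the full catalog.
```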

The next step for the researchers is to train the deep-learning algorithm to identify even more types and sub-types of galaxies. But to do that, they will need even more well-categorized data. To help with that process, they have launched GALAXY CRUISE, a citizen science project where volunteers help to identify galaxies that are merging or colliding. They will be following in the footsteps of another effort by scientists at the Sloan Digital Sky Survey, which used Galaxy Zoo, a collection of citizen science projects, to train an AI algorithm to identify spiral versus non-spiral galaxies. After the manual classification is done, the team hopes to upgrade the AI algorithm and analyze all half a million galaxies again to see how many of them might be colliding. Who knows, a few of those colliding galaxies might even look like cats.

Learn More:
EurekaAlert: Classifying galaxies with artificial intelligence
Physics Letters B: Classifying galaxies with AI and people power
Universe Today: Try your hand at identifying galaxies
Unite.ai: Astronomers Apply AI to Discover and Classify Galaxies

Here is the original post:
Machine Learning Just Classified Over Half a Million Galaxies - Universe Today

Informatica Acquires GreenBay Technologies to Advance AI and Machine Learning Capabilities – thepress.net

REDWOOD CITY, Calif., Aug. 18, 2020 /PRNewswire/ -- Informatica, the enterprise cloud data management leader, today announced it has acquired GreenBay Technologies Inc. to accelerate its innovation in AI and machine learning data management technology. The acquisition will strengthen the core capabilities of Informatica's AI-powered CLAIRE engine across its Intelligent Data Platform, empowering businesses to more easily identify, access, and derive insights from organizational data to make informed business decisions.

"We continue to invest and innovate in order to empower enterprises in the shift to the next phase of their digital transformations," said Amit Walia, CEO of Informatica. "GreenBay Technologies is instrumental in delivering on our vision of Data 4.0, by strengthening our ability to deliver AI and machine learning in a cloud-first, cloud-native environment. This acquisition gives us a competitive advantage that will further enable our customers to unleash the power of data to increase productivity with enhanced intelligence and automation."

Core to the GreenBay acquisition are three distinct and advanced capabilities in entity matching, schema matching, and metadata knowledge graphs that will be integrated across Informatica's product portfolio. These technologies will accelerate Informatica's roadmap across Master Data Management, Data Integration, Data Catalog, Data Quality, Data Governance, and Data Privacy.

GreenBay Technologies' AI and machine learning capabilities will be embedded in the CLAIRE engine for a more complete and accurate, 360-degree view and understanding of business, with innovative matching techniques of master data of customers, products, suppliers, and other domains. With the acquisition, GreenBay Technologies will accelerate Informatica's vision for self-integrating systems that automatically infer and link target schemas to source data, enhance capabilities to infer data lineage and relationships, auto-generate and apply data quality rules based on concept schema matching, and increase accuracy of identifying sensitive data across the enterprise data landscape.

GreenBay Technologies was co-founded by Dr. AnHai Doan, University of Wisconsin Madison's Vilas Distinguished Achievement Professor, together with his Ph.D. students, Yash Govind and Derek Paulsen. Dr. Doan oversees multiple data management research projects at the University of Wisconsin's Department of Computer Science and is the co-author of "Principles of Data Integration," a leading textbook in the field, and was among the first to apply machine learning to data integration in 2001. Doan's pioneering work in the area of data integration has received multiple awards, including the prestigious ACM Doctoral Dissertation Award and the Alfred P. Sloan Research Fellowship. Dr. Doan and Informatica have a long history collaborating in the use of AI and machine learning in data management. In 2019, Informatica became the sole investor in GreenBay Technologies, which also has ties to the University of Wisconsin (UW) at Madison and the Wisconsin Alumni Research Foundation (WARF), one of the first and most successful technology transfer offices in the nation focused on advancing transformative discoveries to the marketplace.

"What started as a collaborative project with Informatica's R&D will now help thousands of Informatica customers better manage and utilize their data and solve complex problems at the pace of digital transformation," said Dr. Doan. "GreenBay Technologies will provide Informatica customers with AI and ML innovations for more complete 360 views of the business, self-integrating systems, and more automated data quality and governance tasks."

The GreenBay acquisition is an important part of Informatica's collaboration with academic and research institutions globally to further its vision of AI-powered data management including most recently in Europe with The ADAPT Research Center, a world leader in Natural Language Processing (NLP), in Dublin.

About Informatica

Informatica is the only proven Enterprise Cloud Data Management leader that accelerates data-driven digital transformation. Informatica enables companies to fuel innovation, become more agile, and realize new growth opportunities, resulting in intelligent market disruptions. Over the last 25 years, Informatica has helped more than 9,000 customers unleash the power of data. For more information, call +1 650-385-5000 (1-800-653-3871 in the U.S.), or visit http://www.informatica.com. Connect with Informatica on LinkedIn, Twitter, and Facebook.

Informatica and CLAIRE are trademarks or registered trademarks of Informatica in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

The information provided herein is subject to change without notice. In addition, the development, release, and timing of any product or functionality described today remain at the sole discretion of Informatica and should not be relied upon in making a purchasing decision, nor as a representation, warranty, or commitment to deliver specific products or functionality in the future.

See the original post here:
Informatica Acquires GreenBay Technologies to Advance AI and Machine Learning Capabilities - thepress.net

How AI & Machine Learning is Infiltrating the Fintech Industry? – Customer Think

Fintech is a buzzword in the modern world, which essentially means financial technology. It uses technology to offer improved financial services and solutions.

How are AI and machine learning making inroads across industries, including fintech? It's an important question in the business world globally.

The use of artificial intelligence (AI) and machine learning (ML) is evolving in the finance market, owing to their exceptional benefits like more efficient processes, better financial analysis and customer engagement.

According to a prediction by Autonomous Research, AI technologies will allow financial institutions to reduce their operational costs by 22% by 2030. AI and ML are truly efficient tools in the financial sector. In this blog, we are going to discuss how they actually help fintech and what benefits these technologies can bring to the industry.

The implementation of AI and ML in the financial landscape has been transforming the industry. As fintech is a developing market, it requires industry-specific solutions to meet its goals. AI tools and machine learning can offer something great here.

Are you eager to know the impact of AI and ML on fintech? These disruptive technologies are not only effective in improving accuracy but also speed up the entire financial process by applying various proven methodologies.

AI-based financial solutions are focused on the crucial needs of the modern financial sector, such as better customer experience, cost-effectiveness, real-time data integration, and enhanced security. Adoption of AI and its allied applications enables the industry to create a better, more engaging financial environment for its customers.

Use of AI and ML has facilitated financial and banking operations. With the help of such smart developments, fintech companies are delivering tailored products and services as per the needs of the evolving market.

According to a study by research group Forrester, around 50% of financial services and insurance companies already use AI globally. And the number is expected to grow with newer technology advancements.

You may be wondering why AI and ML are becoming more important in fintech. In this section, we explain how these technologies are infiltrating the industry.

The need for better, safer, and customized solutions is rising with expectations of customers. Automation has helped the fintech industry to provide better customer service and experience.

Customer-facing systems such as AI interfaces and Chatbots can offer useful advice while reducing the cost of staffing. Moreover, AI can automate the back office process and make it seamless.

Automation can greatly help Fintech firms to save time as well as money. Using AI and ML, the industry has ample opportunities for reducing human errors and improving customer support.

Finance, insurance and banking firms can leverage AI tools to make better decisions. Management decisions become data-driven, which gives leadership a consistent, evidence-based way to act.

Machine learning effectively analyzes the data and surfaces the outcomes that help decision-makers cut costs. It also empowers organizations to solve specific problems effectively.

Technologies are meant to deliver convenience and improved speed. But, along with these benefits, there is also an increase in online fraud. Keeping this in mind, Fintech companies and financial institutions are investing in AI and machine learning to defeat fraudulent transactions.

AI and machine learning solutions are strong enough to react in real time and can analyze more data quickly. Organizations can efficiently find patterns and recognize fraudulent activity using different machine learning models. A fintech software development company can help build secure financial software and apps using these technologies.
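
To make the pattern-finding idea concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The features, threshold, and data are synthetic and hypothetical; production fraud systems are far more elaborate.

```python
# Flag anomalous transactions against a baseline of normal activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount, hour of day, seconds since the account's last transaction.
normal = np.column_stack([
    rng.lognormal(3.0, 0.5, 10_000),   # typical purchase amounts
    rng.integers(8, 22, 10_000),       # daytime activity
    rng.exponential(3_600, 10_000),    # spacing between payments
])

detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(normal)

candidate = np.array([[5_000.0, 3, 5.0]])  # large amount, 3 a.m., rapid-fire
if detector.predict(candidate)[0] == -1:   # -1 marks an outlier
    print("flag transaction for manual review")
```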

With AI and ML, huge amounts of data can be analyzed and optimized for better applications. Hence fintech is the right industry for AI and machine learning innovations to have a great future.

Owing to their potential benefits, automation and machine learning are increasingly used in the fintech industry. Smart wallets, for example, learn from and monitor users' behaviour and activities so that appropriate information can be provided about their expenses.

Fintech firms are working with development and technology leaders to bring in new concepts that are effective and personalized. Artificial intelligence, machine learning, and allied technologies are playing a vital role in helping financial organizations improve skills and customer satisfaction while reducing costs.

In the developing world, it is crucial for fintech companies to categorize clients by analyzing data and allied patterns. AI tools show excellent capabilities in automating the process of profiling clients based on their risk profiles. This profiling work helps experts give product recommendations to customers in an appropriate and automated way.

Predictive analytics is another competitive advantage of using AI tools in the financial sector. It helps improve sales, optimize resource use, and enhance operational efficiency.

With machine learning algorithms, businesses can effectively gather and analyze huge data sets to make faster and more accurate predictions of future trends in the financial market. Accordingly, they can offer specific solutions for customers.
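
As a toy version of such trend prediction, the sketch below fits a linear model to two years of synthetic monthly transaction volumes and extrapolates the next quarter; the numbers are invented for illustration.

```python
# Minimal trend-forecasting sketch on synthetic monthly volumes.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(24).reshape(-1, 1)  # two years of history
volume = (1_000 + 25 * months.ravel()
          + np.random.default_rng(2).normal(0, 40, 24))  # noisy upward trend

model = LinearRegression().fit(months, volume)
future = np.arange(24, 27).reshape(-1, 1)  # the next quarter
print(model.predict(future).round())       # projected volumes
```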

As the market continues to demand easier and faster transactions, emerging technologies, such as artificial intelligence and machine learning, will remain crucial for the Fintech sector.

Innovations based on AI and ML are empowering the fintech industry significantly. As a result, financial institutions are now offering better financial services to their customers.

Leading financial and banking firms globally are using the convenient features of artificial intelligence to make business more stable and streamlined.

View post:
How AI & Machine Learning is Infiltrating the Fintech Industry? - Customer Think

Machine Learning takes Robotics to the Next Level of Development – Analytics Insight

In the mid-twentieth century, when the computer and its applications were starting to change the world, sociologist David Riesman had something stuck in his mind. He wondered what people would do once machine automation took effect and humans had no compulsion to do daily physical chores or strain their brains to come up with solutions. He was excited to see what people would do with all the free time.

More than half a century later, when the world has exactly what Riesman wondered about, humans are still working full time. Work alleviated by industrious machines such as robotic systems has only freed humans to create more elaborate new tasks to labour over. Contrary to all the predictions of the previous century, machines gave humans more time to work, not to relax.

Today we imagine robots taking over human society and doing all the work themselves, physical and intellectual labour alike, without human assistance, because they are well programmed and can adapt to any environment and make accurate decisions on their own. People of the previous century dreamed similarly, during the era of the space race, that robots would take over all physical work. But today, robots are used for their intelligence more than for their physical assistance. Humans can only teach robots and make them follow instructions up to a point. Where humans fall short, machine learning makes its way in to discipline robotics.

Machine learning is one of the most advanced and innovative technological fields today, and it is strongly influencing robotics. Machine learning helps robots function with developed applications and deep vision.

According to a recent survey published by the Evans Data Corporation Global Development, machine learning and robotics were at the top of developers' priorities for 2016: 56.4% of participants build robotic apps, and 24.7% of them indicate the use of machine learning in their projects.

Machine learning requires enormous caches of data to be taught to the robot for it to learn well. The procedure involves algorithms and physical machines that aid the robots in the learning process.

Deep Learning educates the purpose of the robot

Deep learning has been part of the machine learning field for more than 30 years. But it was only recently recognised and brought into continuous use, when deep neural network algorithms and hardware advancements started showing high potential. Deep learning is made possible by computational capacity and the required datasets, which are ultimately the most powerful assets of machine learning.

The process of teaching robots through machine learning requires engineers and scientists to decide how the AI learns. Domain experts take the next role, advising on how robots need to function and operate within the scope of the task; these experts range from logistics specialists to security consultants. Deep learning focuses on the sector in which a robot needs to be specialised from the outset.

Feeding robots with planning and learning

AI robots, through machine learning, acquire two important capabilities: planning and learning. Planning is the physical side of teaching a robot; it works out the pace at which the robot must move every joint to complete a task. For example, a robot grabbing an object is a planning problem.

Meanwhile, learning involves different inputs and reactions according to data gathered in a dynamic environment. The learning process takes place through physical demonstrations in which movements are trained, through simulation in 3D artificial environments, and by feeding in video and data of a person or another robot performing the task the robot is meant to master. The simulation produces training data: labelled or annotated datasets that an AI algorithm can use to recognise and learn from.
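
To make the planning/learning distinction concrete, here is a minimal sketch of the planning side, assuming a toy arm whose motion is planned as straight-line interpolation in joint space; the joint angles are hypothetical.

```python
# Joint-space planning sketch: move an arm from rest to a grasp pose.
import numpy as np

rest = np.array([0.0, 0.0, 0.0])    # shoulder, elbow, wrist (radians)
grasp = np.array([0.6, -1.1, 0.4])  # hypothetical pose that reaches the object

def plan(start, goal, steps=50):
    """Return a trajectory of joint angles from start to goal."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return start + t * (goal - start)  # straight line in joint space

trajectory = plan(rest, grasp)
print("planned", trajectory.shape[0], "waypoints")
# On real hardware, each waypoint would be streamed to the motor controllers.
```

The learning side, by contrast, would replace the hand-written `plan` function with behaviour inferred from demonstrations or simulation data.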

Educating and training with accurate data

The process of educating a robot needs accuracy and abundance. Inaccurate or corrupt data brings nothing but chaos and will lead the robot to draw wrong conclusions. For example, if the database is focused on green apples and you input a picture of a blueberry muffin, the system will still output a green apple. This kind of data disruption is a major threat. Insufficient training data will likewise prevent the robot from reaching the full potential it was designed for.

Reaping the maximum of physical help

Machine learning pushes robots to do physical work at their best. Robots of this kind are already used in industry for various purposes; for example, unmanned vehicles are stealing the spotlight at construction sites.

It is not just the construction sector that is reaping help from machine learning. The medical industry makes use of it by deploying computer vision models to recognise tumours in MRI and CT scans. Through further training, an AI robot could become capable of assisting in life-saving surgeries and other medical procedures through its machine learning input.

With the emergence of robots in society, training data, machine learning and artificial intelligence (AI) are playing a critical role in bringing them into service. Tech companies involved in creating and training robots should spend some time sensitising people to how robots can help humanity. If things go well and the field produces advanced robots that are well trained, well built and well purposed, Riesman's dream of humans having leisure time could come true.

The rest is here:
Machine Learning takes Robotics to the Next Level of Development - Analytics Insight

Too many AI researchers think real-world problems are not relevant – MIT Technology Review

Any researcher who's focused on applying machine learning to real-world problems has likely received a response like this one: "The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community."

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I've seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I've heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or, in the case of deep learning, a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word application seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Their authors only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled "Machine Learning that Matters" (pdf), by NASA computer scientist Kiri Wagstaff: "Much of current machine learning research has lost its connection to problems of import to the larger world of science and society." The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

Marginalizing applications research has real consequences. Benchmark data sets, such as ImageNet or COCO, have been key to advancing machine learning. They enable algorithms to train and be compared on the same data. However, these data sets contain biases that can get built into the resulting models.

More than half of the images in ImageNet (pdf) come from the US and Great Britain, for example. That imbalance leads systems to inaccurately classify images in categories that differ by geography (pdf). Popular face data sets, such as the AT&T Database of Faces, contain primarily light-skinned male subjects, which leads to systems that struggle to recognize dark-skinned and female faces.
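
One simple way to surface this kind of bias is to report a model's accuracy per demographic group rather than only in aggregate. The sketch below does this on synthetic labels and groups; it is illustrative only and uses no real face data.

```python
# Per-group accuracy audit on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
groups = rng.choice(["group_a", "group_b"], size=2_000, p=[0.85, 0.15])
y_true = rng.integers(0, 2, size=2_000)
# Simulate a model that errs more often on the under-represented group:
errs = np.where(groups == "group_b",
                rng.random(2_000) < 0.30, rng.random(2_000) < 0.05)
y_pred = np.where(errs, 1 - y_true, y_true)

for g in ("group_a", "group_b"):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: n={mask.sum()}, accuracy={acc:.2f}")  # the gap reveals the bias
```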

When studies on real-world applications of machine learning are excluded from the mainstream, it's difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.

One reason applications research is minimized might be that others in machine learning think this work consists of simply applying methods that already exist. In reality, though, adapting machine-learning tools to specific real-world problems takes significant algorithmic and engineering work. Machine-learning researchers who fail to realize this and expect tools to work off the shelf often wind up creating ineffective models. Either they evaluate a model's performance using metrics that don't translate to real-world impact, or they choose the wrong target altogether.

For example, most studies applying deep learning to echocardiogram analysis try to surpass a physician's ability to predict disease. But predicting normal heart function (pdf) would actually save cardiologists more time by identifying patients who do not need their expertise. Many studies applying machine learning to viticulture aim to optimize grape yields (pdf), but winemakers "want the right levels of sugar and acid, not just lots of big watery berries," says Drake Whitcraft of Whitcraft Winery in California.

Another reason applications research should matter to mainstream machine learning is that the field's benchmark data sets are woefully out of touch with reality.

New machine-learning models are measured against large, curated data sets that lack noise and have well-defined, explicitly labeled categories (cat, dog, bird). Deep learning does well for these problems because it assumes a largely stable world (pdf).

But in the real world, these categories are constantly changing over time or according to geographic and cultural context. Unfortunately, the response has not been to develop new methods that address the difficulties of real-world data; rather, there's been a push for applications researchers to create their own benchmark data sets.

The goal of these efforts is essentially to squeeze real-world problems into the paradigm that other machine-learning researchers use to measure performance. But the domain-specific data sets are likely to be no better than existing versions at representing real-world scenarios. The results could do more harm than good. People who might have been helped by these researchers' work will become disillusioned by technologies that perform poorly when it matters most.

Because of the field's misguided priorities, people who are trying to solve the world's biggest challenges are not benefiting as much as they could from AI's very real promise. While researchers try to outdo one another on contrived benchmarks, one in every nine people in the world is starving. Earth is warming and sea level is rising at an alarming rate.

As neuroscientist and AI thought leader Gary Marcus once wrote (pdf): "AI's greatest contributions to society could and should ultimately come in domains like automated scientific discovery, leading among other things towards vastly more sophisticated versions of medicine than are currently possible. But to get there we need to make sure that the field as a whole doesn't first get stuck in a local minimum."

For the world to benefit from machine learning, the community must again ask itself, as Wagstaff once put it: What is the field's objective function? If the answer is to have a positive impact in the world, we must change the way we think about applications.

Hannah Kerner is an assistant research professor at the University of Maryland in College Park. She researches machine learning methods for remote sensing applications in agricultural monitoring and food security as part of the NASA Harvest program.

See the article here:
Too many AI researchers think real-world problems are not relevant - MIT Technology Review

MLops: The rise of machine learning operations – Reseller News

As hard as it is for data scientists to tag data and develop accurate machine learning models, managing models in production can be even more daunting.

Recognising model drift, retraining models with updating data sets, improving performance, and maintaining the underlying technology platforms are all important data science practices. Without these disciplines, models can produce erroneous results that significantly impact business.

Developing production-ready models is no easy feat. According to one machine learning study, 55 per cent of companies had not deployed models into production, and 40 per cent or more require more than 30 days to deploy one model. Success brings new challenges, and 41 per cent of respondents acknowledge the difficulty of versioning machine learning models and reproducibility.

The lesson here is that new obstacles emerge once machine learning models are deployed to production and used in business processes.

Model management and operations were once challenges for the more advanced data science teams. Now tasks include monitoring production machine learning models for drift, automating the retraining of models, alerting when the drift is significant, and recognising when models require upgrades.

As more organisations invest in machine learning, there is a greater need to build awareness around model management and operations.

The good news is platforms and libraries such as open source MLFlow and DVC, and commercial tools from Alteryx, Databricks, Dataiku, SAS, DataRobot, ModelOp, and others are making model management and operations easier for data science teams. The public cloud providers are also sharing practices such as implementing MLops with Azure Machine Learning.

There are several similarities between model management and devops. Many refer to model management and operations as MLops and define it as the culture, practices, and technologies required to develop and maintain machine learning models.

Understanding model management and operations

To better understand model management and operations, consider the union of software development practices with scientific methods.

As a software developer, you know that completing a version of an application and deploying it to production isn't trivial. But an even greater challenge begins once the application reaches production: end-users expect regular enhancements, and the underlying infrastructure, platforms, and libraries require patching and maintenance.

Now let's shift to the scientific world, where questions lead to multiple hypotheses and repetitive experimentation. You learned in science class to maintain a log of these experiments and track the journey of tweaking different variables from one experiment to the next. Experimentation leads to improved results, and documenting the journey helps convince peers that you've explored all the variables and that results are reproducible.

Data scientists experimenting with machine learning models must incorporate disciplines from both software development and scientific research.

Machine learning models are software code developed in languages such as Python and R, constructed with TensorFlow, PyTorch, or other machine learning libraries, run on platforms such as Apache Spark, and deployed to cloud infrastructure. The development and support of machine learning models require significant experimentation and optimisation, and data scientists must prove the accuracy of their models.

Like software development, machine learning models need ongoing maintenance and enhancements. Some of that comes from maintaining the code, libraries, platforms, and infrastructure, but data scientists must also be concerned about model drift. In simple terms, model drift occurs as new data becomes available, and the predictions, clusters, segmentations, and recommendations provided by machine learning models deviate from expected outcomes.
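
One simple, widely used way to operationalise drift detection is to compare the distribution of model scores seen in production against the distribution recorded at validation time. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic scores; it is a generic illustration, not any vendor's method.

```python
# Minimal drift check: compare validation-time scores with recent production scores.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
validation_scores = rng.normal(0.70, 0.10, 5_000)  # scores at deployment
production_scores = rng.normal(0.55, 0.12, 5_000)  # scores this week

stat, p_value = ks_2samp(validation_scores, production_scores)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.3f}); schedule retraining")
else:
    print("score distribution looks stable")
```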

Successful model management starts with developing optimal models

I spoke with Alan Jacobson, chief data and analytics officer at Alteryx, about how organisations succeed and scale machine learning model development.

"To simplify model development, the first challenge for most data scientists is ensuring strong problem formulation," he said. "Many complex business problems can be solved with very simple analytics, but this first requires structuring the problem in a way that data and analytics can help answer the question. Even when complex models are leveraged, the most difficult part of the process is typically structuring the data and ensuring the right inputs are being used at the right quality levels."

I agree with Jacobson. Too many data and technology implementations start with poor or no problem statements and with inadequate time, tools, and subject matter expertise to ensure adequate data quality. Organisations must first start with asking smart questions about big data, investing in dataops, and then using agile methodologies in data science to iterate toward solutions.

Monitoring machine learning models for model drift

Getting a precise problem definition is critical for ongoing management and monitoring of models in production.

Jacobson went on to explain: "Monitoring models is an important process, but doing it right takes a strong understanding of the goals and potential adverse effects that warrant watching. While most discuss monitoring model performance and change over time, what's more important and challenging in this space is the analysis of unintended consequences."

One easy way to understand model drift and unintended consequences is to consider the impact of Covid-19 on machine learning models developed with training data from before the pandemic.

Machine learning models based on human behaviours, natural language processing, consumer demand models, or fraud patterns have all been affected by changing behaviours during the pandemic that are messing with AI models.

Technology providers are releasing new MLops capabilities as more organisations extract value from and mature their data science programs. For example, SAS introduced a feature contribution index that helps data scientists evaluate models without a target variable. Cloudera recently announced an ML monitoring service that captures technical performance metrics and tracks model predictions.

MLops also addresses automation and collaboration

In between developing a machine learning model and monitoring it in production are additional tools, processes, collaborations, and capabilities that enable data science practices to scale. Some of the automation and infrastructure practices are analogous to devops and include infrastructure as code and CI/CD (continuous integration/continuous deployment) for machine learning models.

Others include developer capabilities such as versioning models with their underlying training data and searching the model repository.
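
As a concrete illustration of such versioning, here is a minimal sketch using the open source MLflow mentioned earlier to log a run's parameters, a metric, a pointer to the training data, and the model artifact. The dataset path and values are hypothetical.

```python
# Track one training run so the model and its training data are versioned together.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1_000, random_state=0)
with mlflow.start_run(run_name="rf-baseline"):
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("training_data", "s3://example-bucket/train.csv")  # hypothetical path
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stored in the searchable model repository
```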

The more interesting aspects of MLops bring scientific methodology and collaboration to data science teams. For example, DataRobot enables a champion-challenger model that can run multiple experimental models in parallel to challenge the production version's accuracy.
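
In generic terms (this is not DataRobot's implementation), a champion-challenger comparison can be as simple as scoring both models on the same recent labelled traffic, as in the hypothetical sketch below.

```python
# Champion-challenger sketch: compare two models on the same recent data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, random_state=0)
X_train, X_recent, y_train, y_recent = train_test_split(
    X, y, test_size=0.25, random_state=0)  # "recent" stands in for live traffic

champion = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
challenger = GradientBoostingClassifier().fit(X_train, y_train)

champ = accuracy_score(y_recent, champion.predict(X_recent))
chall = accuracy_score(y_recent, challenger.predict(X_recent))
print(f"champion {champ:.3f} vs challenger {chall:.3f}")
if chall > champ:
    print("candidate for promotion, pending review")
```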

SAS wants to help data scientists improve speed to market and data quality. Alteryx recently introduced Analytics Hub to help collaboration and sharing between data science teams.

All this shows that managing and scaling machine learning requires a lot more discipline and practice than simply asking a data scientist to code and test a random forest, k-means, or convolutional neural network in Python.

Read the original post:
MLops: The rise of machine learning operations - Reseller News

Machine Learning Practices And The Art of Research Management – Analytics India Magazine

Allegro AI offers the first true end-to-end ML / DL product life-cycle management solution with a focus on deep learning applied to unstructured data.

Machine learning projects involve an iterative and recursive R&D process of data gathering, data annotation, research, QA, deployment, additional data gathering from deployed units, and back again. The effectiveness of a machine learning product depends on how intact the synergies are between the data, the model, and the various teams across the organisation.

In this informative session at CVDC 2020, a two-day event organised by ADaSci, Dan Malowany of Allegro AI presented the attendees with the best practices to adopt during the lifecycle of an ML product, from inception to production.

Dan Malowany is currently the head of deep learning research at allegro.ai. His Ph.D. research at the Laboratory of Autonomous Robotics (LAR) was focused on integrating mechanisms of the human visual system with deep convolutional neural networks. His research interests include computer vision, convolutional neural networks, reinforcement learning, the visual cortex and robotics.

Dan spoke about the features required to boost productivity in the different R&D stages, which formed the focus of the talk.

Dan, who worked for 15 years at the Directorate for Defense Research & Development and led various R&D programs, briefed the attendees on the complexities involved in developing deep learning applications. He shed light on the unattractive and often overlooked aspects of research, and explained the trade-offs between effort and accuracy through the concept of diminishing returns from increased inputs.

When your model is only as good as your data, the role of data management becomes crucial. Organisations are often in pursuit of achieving better results with less data. Practices such as mixing and matching data sets with detailed control and creating optimised synthetic data come in handy here.

Underlining the importance of data and experiment management, Dan advised the attendees to track the various versions of their data and treat data as a hyperparameter. He also highlighted the risk factors involved in improper data management, taking the example of developing a deep learning solution for diagnosing diabetic retinopathy, and followed this with an overview of the benefits of resource management.

Unstructured data management is only a part of the solution. There are other challenges, which Allegro AI claims to solve. In this talk Dan introduced the audience to their customised solutions.

Towards the end of the talk, Dan gave a glimpse of the various tools integrated with allegro.ai's services. Allegro AI's products are market proven: the company has partnered with leading global brands such as Intel, NVIDIA, NetApp, IBM and Microsoft, and is backed by world-class firms including household-name strategic investors Samsung, Bosch and Hyundai.

Allegro AI helps companies develop, deploy and manage machine and deep learning solutions. The company's products are based on Allegro Trains, an open source ML & DL experiment manager and ML-Ops package. Here are a few of its features:

Unstructured Data Management

Resource Management & ML-Ops

Know more here.

Stay tuned to AIM for more updates on CVDC2020.

See the original post:
Machine Learning Practices And The Art of Research Management - Analytics India Magazine

Mphasis Partners With Ashoka University to Create ‘Mphasis Laboratory for Machine Learning and Computational Thinking’ – AiThority

Mphasis, an Information Technology solutions provider specialising in cloud and cognitive services, is coming together with Ashoka University to set up a laboratory for machine learning and computational thinking, through a grant of INR 10 crore from Mphasis F1 Foundation, the CSR arm of the company. The Mphasis Laboratory for Machine Learning and Computational Thinking will apply ML and design thinking to produce world-class papers and compelling proof-of-concepts of systems and prototypes with the potential for large societal impact.

The laboratory will be the setting for cutting-edge research and a novel educational initiative focused on bringing thoroughly researched, pedagogy-based learning modules to Indian students. Through this laboratory, Mphasis and Ashoka University will work to translate research activity into educational modules focused on the construction of entire systems, allowing students to understand and experientially recreate each project. This approach to education is aimed at creating a more engaging and widely accessible mode of learning.

Recommended AI News: AppTek Ranks First in Multiple Tracks of the 2020 Evaluation Campaign of the IWSLT

"Mphasis believes that in order to fully embrace the digital learning paradigm, one needs to champion accessibility and invest in quality education in mainstream academic spaces. Through this partnership, we hope to encourage students across disciplines and socio-economic backgrounds to learn and flourish. As Ashoka University also has a strong focus on diverse liberal arts disciplines, we hope to find avenues to expand some of Mphasis' efforts towards Design (CX Design and Design Thinking) through this collaboration and eventually tap into the talent pool from Ashoka," said Nitin Rakesh, Chief Executive Officer, Mphasis.

Being ready to welcome students into the world of virtual learning is not enough; Mphasis and Ashoka seek to enable an innovative pedagogy based on a problem-solving approach to learning about AI, ML, Design Thinking and System Design. Through this grant, Mphasis and Ashoka will establish avenues for knowledge exchange in the areas of core machine learning, information curation, accessibility for persons with disabilities, and health & medicine. They seek to encourage a hands-on learning approach in areas such as core machine learning and information curation, which form the foundation of solution-driven design. They also seek to address the accessibility barrier by placing all intellectual property produced in the laboratory in the public domain, which will benefit millions of students across the country.

Recommended AI News: Identity Automation Announces New CEO

"We stand at the threshold of a discontinuity brought about by an increased ability to sense and produce enormous amounts of data and to create extremely large clusters driven by parallel runtimes. These developments have enabled ML and other data-driven approaches to become the paradigm of choice for complex problem solving. There is now a considerable opportunity to improve life at large based on these capabilities," said Prof. Ravi Kothari, HOD, Computer Science at Ashoka University.

"With that as our over-arching goal, we proposed the creation of a Laboratory for Machine Learning and Computational Thinking and found heartening support in Mphasis," said Ashish Dhawan, Founder & Chairman, Board of Trustees, Ashoka University.

While universities the world over have taken great strides to bring quality education to digital platforms, higher educational institutions in India have begun to address questions surrounding accessibility in a post-COVID setting. The collaboration between Mphasis and Ashoka is pioneering in its effort to establish a centre of excellence for collaborative and human-centred design that aims to fuel data-driven solutions for real-life challenges and address key areas of reform at the larger community level.

Read the original:
Mphasis Partners With Ashoka University to Create 'Mphasis Laboratory for Machine Learning and Computational Thinking' - AiThority

Benefits Of AI And Machine Learning | Expert Panel | Security News – SecurityInformed

The real possibility of advancing intelligence through deep learning and other AI-driven technology applied to video is that, in the long term, we're not going to be looking at the video until after something has happened. Gathering this high level of intelligence through video has the potential to be automated to the point that security operators will no longer be required to make the decisions needed for a response. Instead, the intelligence-driven next steps will be communicated automatically to various stakeholders, from on-site guards to local police and fire departments. When security leaders access the video that corresponds to an incident, it will be because they want to see the incident for themselves. And isn't that automation, the ability to streamline response, and the instantaneous response the goal of an overall, data-rich surveillance strategy? For almost any enterprise, the answer is yes.
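The flow the panel describes, where a model flags an event and the response is routed automatically, can be sketched in a few lines. Everything below is hypothetical: the event types, the stakeholders, and the hard-coded detection are invented, and a production system would sit behind a real deep-learning video pipeline.

```python
# A minimal sketch, with invented names, of the automated-response flow
# described above: a video model emits an event, and the system routes
# it straight to stakeholders instead of waiting for an operator.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    kind: str         # e.g. "intrusion" or "smoke"
    camera_id: str
    confidence: float

Handler = Callable[[Event], None]

def route(event: Event, subscribers: Dict[str, List[Handler]]) -> None:
    """Notify every stakeholder subscribed to this kind of event."""
    for notify in subscribers.get(event.kind, []):
        notify(event)

# Hypothetical stakeholders: an on-site guard and the fire department.
subscribers = {
    "intrusion": [lambda e: print(f"guard: check camera {e.camera_id}")],
    "smoke":     [lambda e: print(f"fire dept: alarm at {e.camera_id}")],
}

# In a real deployment the event would come from a deep-learning video
# model; here a hard-coded detection stands in for it.
route(Event(kind="smoke", camera_id="lobby-3", confidence=0.97), subscribers)
```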

See the article here:
Benefits Of AI And Machine Learning | Expert Panel | Security News - SecurityInformed

Machine Learning as a Service (MLaaS) Market Size: Opportunities, Current Trends And Industry Analysis by 2028 | Microsoft, IBM Corporation,…

Market Scenario of the Machine Learning as a Service (MLaaS) Market:

The most recent Machine Learning as a Service (MLaaS) market research study covers the current size of the worldwide MLaaS market. It presents a point-by-point analysis based on exhaustive research into market dynamics such as market size, growth scenarios, potential opportunities, the operational landscape, and trend analysis. The report centres on the status of the MLaaS business, presenting volume and value, key markets, product types, consumers, regions, and key players.

Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-50032?utm_source=TDC/komal

The prominent players covered in this report are Microsoft, IBM Corporation, Amazon Web Services, Google, BigML, FICO, Hewlett-Packard Enterprise Development, AT&T, Fuzzy.ai, Yottamine Analytics, Ersatz Labs, Inc., and Sift Science Inc.

The market is segmented by type (Special Services and Management Services), by organization size (SMEs and Large Enterprises), by application (Marketing & Advertising, Fraud Detection & Risk Analytics, Predictive Maintenance, Augmented Reality, Network Analytics, and Automated Traffic Management), and by end user (BFSI, IT & Telecom, Automobile, Healthcare, Defense, Retail, Media & Entertainment, and Communication).

Geographical segments are North America, Europe, Asia Pacific, Middle East & Africa, and South America.

A 360-degree outline of the competitive scenario of the Global Machine Learning as a Service (MLaaS) Market is presented by Quince Market Insights, together with a large body of data on recent product and technological developments in the market.

It also offers a wide-ranging analysis of the impact of these advancements on the market's future growth. The research report studies the market in detail, explaining the key facets that are expected to have a measurable influence on its development over the forecast period.

Get ToC for the overview of the premium report @ https://www.quincemarketinsights.com/request-toc-50032?utm_source=TDC/komal

These developments are anticipated to drive the Global Machine Learning as a Service (MLaaS) Market over the forecast period. The research report covers the market landscape and its growth prospects in the near future. After studying key companies, the report focuses on the new entrants contributing to the growth of the market. Most companies in the Global MLaaS Market are currently adopting new technological trends.

Finally, the researchers shed light on different ways to identify the strengths, weaknesses, opportunities, and threats affecting the growth of the Global Machine Learning as a Service (MLaaS) Market. The feasibility of new projects is also assessed in the report.

Make an Enquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-50032?utm_source=TDC/komal

About Us:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publishers and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact Us:

Quince Market Insights

Ajay D. (Knowledge Partner)

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com

Read more here:
Machine Learning as a Service (MLaaS) Market Size: Opportunities, Current Trends And Industry Analysis by 2028 | Microsoft, IBM Corporation,...

How AI can help payers navigate a coming wave of delayed and deferred care – FierceHealthcare

Insurers have seen healthcare use plummet since the onset of the COVID-19 pandemic.

But experts are concerned about a wave of deferred care that could hit as patients start to return to physicians and hospitals, putting insurers on the hook for an unexpected surge of healthcare spending.

Artificial intelligence and machine learning could lend insurers a hand.

"We are using the AI approaches to try to project future cost bubbles," said Colt Courtright, chief data and analytics officer at Premera Blue Cross, during a session at Fierce AI Week on Wednesday.

He noted that people are not going in to get even routine cancer screenings.

"If people have delays in diagnostics and delays in medical care, how is that going to play out in the future when we think about those individuals and the need for clinical programs and the cost, and how do we manage that?" he said.

Insurers have started to incorporate AI and machine learning in several facets of their business, such as claims management and customer service, but they are also beginning to explore how AI can be used to predict healthcare costs and outcomes.
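The cost-prediction use case can be illustrated with a toy model. The sketch below is purely hypothetical: the features, data, and dollar figures are invented, and a real payer model would be trained on actual claims history rather than synthetic numbers.

```python
# A minimal, hypothetical sketch of the cost-prediction use case the
# article describes. Features, data, and dollar amounts are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Columns: age, months since last screening, ER visits, chronic conditions
X = rng.uniform([20, 0, 0, 0], [80, 24, 5, 4], size=(500, 4))
# Toy target: deferred screenings and chronic conditions drive cost up.
y = 200 * X[:, 1] + 1500 * X[:, 3] + rng.normal(0, 500, 500)

model = GradientBoostingRegressor().fit(X, y)
member = [[55, 18, 1, 2]]  # e.g., 18 months since the last screening
print("predicted annual cost: $%.0f" % model.predict(member)[0])
```

A member who has deferred screenings for a year and a half scores a higher predicted cost, which is exactly the "future cost bubble" signal Courtright describes insurers trying to surface.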

In some ways, the pandemic has accelerated the use of AI and digital technologies in general.

"If we can predict, forecast and personalize care virtually, then why not do that?" said Rajeev Ronanki, senior vice president and chief digital officer for Anthem, during the session.

The pandemic has led to a boom in telemedicine, as the Trump administration has increased flexibility for getting Medicare payments for telehealth and patients have been scared to go to hospitals and physician offices.

But Ronanki said that AI can help not just with predicting healthcare costs but also with fixing supply chains wracked by the pandemic.

He noted that the global manufacturing supply chain is extremely optimized, especially with just-in-time ordering that doesn't require businesses to hold a large amount of inventory.

"But that method doesn't really work during a pandemic when there is a vast imbalance in supply and demand for personal protective equipment," said Ronanki.

"When you connect all those dots, AI can then be used to configure supply and demand better in anticipation of issues like this," he said.
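As a rough illustration of that anticipatory idea, the sketch below (all numbers invented) forecasts demand with simple exponential smoothing and orders ahead of a surge, in contrast to pure just-in-time replenishment.

```python
# A minimal sketch, with invented numbers, of anticipatory ordering:
# forecast demand from recent history and order ahead of a surge,
# instead of relying on pure just-in-time replenishment.
def smoothed_forecast(history, alpha=0.5):
    """One-step-ahead demand forecast via exponential smoothing."""
    forecast = history[0]
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

def order_quantity(history, on_hand, safety_factor=1.5):
    """Order enough to cover forecast demand plus a safety buffer."""
    need = smoothed_forecast(history) * safety_factor
    return max(0, round(need - on_hand))

# Weekly PPE demand spiking during an outbreak (hypothetical units).
demand_history = [100, 110, 120, 400, 900]
print("units to order:", order_quantity(demand_history, on_hand=150))
```

Because the smoothed forecast reacts to the spike in the last two weeks, the suggested order grows well beyond on-hand stock, whereas a just-in-time rule tuned to the earlier 100-unit weeks would leave the buyer short.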

View original post here:
How AI can help payers navigate a coming wave of delayed and deferred care - FierceHealthcare