The tensions between explainable AI and good public policy – Brookings Institution

Democratic governments and agencies around the world are increasingly relying on artificial intelligence. Police departments in the United States, United Kingdom, and elsewhere have begun to use facial recognition technology to identify potential suspects. Judges and courts have started to rely on machine learning to guide sentencing decisions. In the U.K., one in three local authorities is said to be using algorithms or machine learning (ML) tools to make decisions about issues such as welfare benefit claims. These government uses of AI are widespread enough to prompt the question: Is this the age of government by algorithm?

Many critics have expressed concerns about the rapidly expanding use of automated decision-making in sensitive areas of policy such as criminal justice and welfare. The most often voiced concern is the issue of bias: when machine learning systems are trained on biased data sets, they will inevitably embed in their models the data's underlying social inequalities. The data science and AI communities are now highly sensitive to data bias issues, and as a result have started to focus far more intensely on the ethics of AI. Similarly, individual governments and international organizations have published statements of principle intended to govern AI use.

A common principle of AI ethics is explainability. The risk of producing AI that reinforces societal biases has prompted calls for greater transparency about algorithmic or machine learning decision processes, and for ways to understand and audit how an AI agent arrives at its decisions or classifications. As the use of AI systems proliferates, being able to explain how a given model or system works will be vital, especially for those used by governments or public sector agencies.

Yet explainability alone will not be a panacea. Although transparency about decision-making processes is essential to democracy, it is a mistake to think this represents an easy solution to the dilemmas algorithmic decision-making will present to our societies.

There are two reasons why. First, with machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better. Unfortunately, for complex situations with many interacting influences, which is true of many key areas of policy, machine learning will often be more useful the more of a black box it is. As a result, holding such systems accountable will almost always be a matter of post hoc monitoring and evaluation. If it turns out that a given machine learning algorithm's decisions are significantly biased, for example, then something about the system or (more likely) the data it is trained on needs to change. Yet even post hoc auditing is easier said than done. In practice, there is surprisingly little systematic monitoring of policy outcomes at all, even though there is no shortage of guidance about how to do it.

The second reason is due to an even more significant challenge. The aim of many policies is often not made explicit, typically because the policy emerged as a compromise between people pursuing different goals. These necessary compromises present a challenge when algorithms are tasked with implementing policy decisions. A compromise in public policy is not always a bad thing; it allows decision makers to resolve conflicts as well as avoid hard questions about the exact outcomes desired. Yet this is a major problem for algorithms, as they need clear goals to function. An emphasis on greater model explainability will never be able to resolve this challenge.

Consider the recent use of an algorithm to produce U.K. high school grades in the absence of examinations during the pandemic, which provides a remarkable example of just how badly algorithms can function in the absence of well-defined goals. British teachers had submitted their assessments of individual pupils' likely grades and ranked their pupils within each subject and class. The algorithm significantly downgraded many thousands of these assessed results, particularly in state schools in low-income areas. Star pupils with conditional university places consequently failed to attain the level they needed, causing much heartbreak, not to mention pandemonium in the centralized system for allocating students to universities.

After a few days of uproar, the U.K. government abandoned the results, instead awarding everyone the grades their teachers had predicted. When the algorithm was finally published, it turned out to have placed most weight on matching the distribution of grades the same school had received in previous years, penalizing the best pupils at typically poorly performing schools. However, small classes were omitted as having too few observations, which meant affluent private schools with small class sizes escaped the downgrading.

Of course, the policy intention was never to increase educational inequality, but to prevent grade inflation. This aim had not been stated publicly beforehand, or statisticians might have warned of the unintended consequences. The objectives of no grade inflation, school by school, and of individual fairness were fundamentally in conflict. Injustice to some pupils, those who had worked hardest to overcome unfavorable circumstances, was inevitable.

For government agencies and offices that increasingly rely on AI, the core problem is that machine learning algorithms need to be given a precisely specified objective. Yet in the messy world of human decision-making and politics, it is often possible and even desirable to avoid spelling out conflicting aims. By balancing competing interests, compromise is essential to the healthy functioning of democracies.

This is true even in the case of what might at first glance seem a more straightforward example, such as keeping criminals who are likely to reoffend behind bars rather than granting them bail or parole. An algorithm using past data to find patterns will, given the historically higher likelihood that people from low-income or minority communities will have been arrested or imprisoned, predict that similar people are more likely to offend in future. Perhaps judges can stay alert for this data bias and override the algorithm when sentencing particular individuals.

But there is still an ambiguity about what would count as a good outcome. Take bail decisions. About a third of the U.S. prison population is awaiting trial. Judges make decisions every day about who will await trial in jail and who will be bailed, but an algorithm can make a far more accurate prediction than a human about who will commit an offense if they are bailed. According to one model, if bail decisions were made by algorithm, the prison population in the United States would be 40% smaller, with the same recidivism rate as when the decisions are made by humans. Such a system would reduce prison populations, an apparent improvement on current levels of mass incarceration. But given that people of color make up the great majority of the U.S. prison population, the algorithm may also recommend that a higher proportion of people from minority groups be denied bail, which seems to perpetuate unfairness.

Some scholars have argued that exposing such trade-offs is a good thing. Algorithms or ML systems can then be set more specific aims, for instance, to predict recidivism subject to a rule requiring that equal proportions of different groups get bail, and still do better than humans. What's more, this would enforce transparency about the ultimate objectives.
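The kind of constrained objective described above can be made concrete. The sketch below uses invented risk scores and an invented 60% release rate (not any real bail system) to show one way of enforcing equal bail rates across two groups by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores (higher = predicted riskier) for two groups.
scores_a = rng.uniform(0, 1, 1000)
scores_b = rng.uniform(0, 1, 1000)

release_rate = 0.6  # policy choice: bail 60% of each group

# Release anyone scoring below their own group's 60th percentile,
# so the bail rate is equal across groups by construction.
thr_a = np.quantile(scores_a, release_rate)
thr_b = np.quantile(scores_b, release_rate)

bailed_a = (scores_a < thr_a).mean()
bailed_b = (scores_b < thr_b).mean()
```

The rule guarantees equal bail rates, but whether that is the right fairness criterion, and at what release rate, is exactly the objective-setting question the article argues algorithms force into the open.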

But this is not a technical problem about how to write computer code. Perhaps greater transparency about objectives could eventually be healthy for our democracies, but it would certainly be uncomfortable. Compromises work by politely ignoring inconvenient contradictions. Should government assistance for businesses hit by the pandemic go to those with the most employees or to those most likely to repay? There is no need to answer this question about ultimate aims in order to set specific criteria for an emergency loan scheme. But to automate the decision requires specifying an objective: save jobs, maximize repayments, or perhaps weight each equally. Similarly, people might disagree about whether the aim of the justice system is retribution or rehabilitation and yet agree on sentencing guidelines.

Dilemmas about objectives do not crop up in many areas of automated decisions or predictions, where the interests of those affected and those running the algorithm are aligned. Both the bank and its customers want to prevent fraud; both the doctor and her patient want an accurate diagnosis or radiology result. However, in most areas of public policy there are multiple overlapping and sometimes competing interests.

There is often a trust deficit too, particularly in criminal justice and policing, or in welfare policies which bring the power of the state into people's family lives. Even many law-abiding citizens in some communities do not trust the police and judiciary to have their best interests at heart. It is naive to believe that algorithmically enforced transparency about objectives will resolve political conflicts in situations like these. The first step, before deploying machines to make decisions, is not to insist on algorithmic explainability and transparency, but to restore the trustworthiness of institutions themselves. Algorithmic decision-making can sometimes assist good government but can never make up for its absence.

Diane Coyle is professor of public policy and co-director of the Bennett Institute at the University of Cambridge.

Finance Sector Benefits from Machine Learning Development and AI – Legal Reader

Banking and finance rely on experts, but the new expert on the scene is the AI/ML combination, able to do far more, do it faster, and do it accurately.

Making the right decisions and grabbing opportunities in the fast-moving world of finance can make a difference to your bottom line. This is where artificial intelligence and machine learning make a tangible difference. Engage machine learning development services in your finance segment and life will not be the same. A MarketsandMarkets study shows that artificial intelligence in the financial sector will grow to over $7,300 million by 2022.

Data

The simple reason you need a machine learning development company to help you make better decisions with the help of AI/ML is data. Data flows in torrents from diverse sources and contains precious nuggets of information. It can be the basis of understanding customer behavior, and it can help you gain predictive capabilities. Data analysis with ML can also help identify patterns that could indicate attempted fraud, so you can save your reputation and money by tackling it in time.

The key

The key is to normalize huge sets of data and derive information in real time according to specifiable parameters. Machine learning algorithms can help you train the system to carry out fast analysis and deliver results based on models created for the purpose by a machine learning development company. As it ages, the system actually becomes smarter, because it learns as it goes along.
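As a minimal illustration of the normalization step (toy numbers and plain NumPy, not any particular vendor's tooling), standardizing each feature keeps large raw values such as transaction amounts from drowning out the other columns:

```python
import numpy as np

# Toy transaction features per row: amount ($), hour of day, merchant count.
X = np.array([[120.0,  14.0,  3.0],
              [15.5,    2.0,  1.0],
              [9800.0,  3.0, 40.0],
              [48.0,   11.0,  2.0]])

# Standardize: zero mean, unit variance per column, so every feature
# contributes on a comparable scale to downstream ML models.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
```

After this step, a distance- or gradient-based model sees the hour of day and the dollar amount on equal footing.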

To achieve the same result manually using standard IT solutions, you would need to employ a team of IT specialists, and even then it is doubtful whether you could get outputs in time to take decisive action.

Fraud prevention

This is one case where prevention is better than cure. A typical bank may have hundreds of thousands of customers carrying out any number of different transactions. All such data is under the watchful eye of the ML-imbued system, which is quick to detect anomalies. ML has, in fact, been known to cause misunderstandings: a customer unfamiliar with credit card operations repeatedly fumbled a transaction and raised a false alarm. Still, it is better to be safe than sorry than to fight fires after the event.
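A minimal sketch of the anomaly-flagging idea, with simulated spending data and a crude threshold rule standing in for a real ML model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated card spend for one customer: 200 routine transactions
# around $50, plus one wildly out-of-pattern $5,000 charge.
amounts = np.append(rng.normal(50, 10, 200), 5000.0)

# Crude stand-in for an ML detector: flag anything more than five
# standard deviations from the customer's historical spend.
mu = amounts[:-1].mean()
sigma = amounts[:-1].std()
flags = np.abs(amounts - mu) > 5 * sigma

n_alerts = int(flags.sum())
```

Production systems would instead use learned models (isolation forests, autoencoders) over many features per transaction, and, as noted above, will still raise the occasional false alarm.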

Stock trading

Day trading went algorithmic quite a few years back, helping brokers profit by getting the system to make profitable trades automatically. Apart from day trading, there are derivatives, forex, commodities and binary options, where specific ML models can help you, as a trader or a broker, anticipate price movements. This is one area where price is influenced not just by demand and supply but also by political factors, climate, company results and unforeseen calamities. ML keeps track of all of these and integrates them into a predictive capability that keeps you ahead of the game.

Investment decisions

Likewise, investments in other areas, such as bonds, mutual funds and real estate, need to be based on smart analysis of the present and future while factoring in external influences. No one, for example, foresaw the COVID-19 devastation that froze economies and dried up sources of funds, with an impact on investments, especially in real estate. A machine learning-based system, however, would keep track of developments and alert you in advance so that you can be prepared. Then there are more mundane tasks in the finance sector where ML helps. Portfolio managers always walk a tightrope and rely on experts who can make poor decisions that affect clients' capital. Tap into the power of ML to stay on top and grow the wealth of wealthy clients. Their recommendations will get you more clients, making the investment in ML solutions more than worthwhile. It could be the best investment you make.

Automation

Banks, private lenders, institutions and insurance companies routinely carry out repetitive and mundane tasks like attending to inquiries, processing forms and handling transactions. This involves heavy manpower usage, leading to high costs. Your employees work under a deluge of such tasks and cannot do anything productive. Switch to ML technologies to automate such repetitive tasks. You will have two benefits: lower manpower costs, and automatic insight into developing patterns.

The second one alone is worth the investment. In the normal course of things, you would have to devote considerable energy to identifying developing patterns, whereas the ML solution presents trends based on which you can modify services, design offers, or address customer pain points and ensure loyalty.

Risk mitigation

Smart operators are always gaming the system, for example by finding ways to improve their credit score and obtain credit despite being ineligible. Such operators would pass the normal scanning techniques of banks. With ML assessing a loan application, however, the system delves deeper to find all relevant information, collate it and analyze it to give you a true picture. Non-performing assets cause immense losses to banks, and this is one area where machine learning solutions put in place by expert machine learning development services prove immensely valuable.

Google’s Vision for the Future of Bank Marketing, AI, Data and Brand – The Financial Brand

No financial marketer questions the tectonic shift digital media has wrought on marketing and advertising. Yet even the most ardent digital marketing proponent might be startled by the prediction that 100% of advertising will be online and automated by 2025. Startled, and perhaps more than a bit skeptical. Although the pandemic has changed the situation to varying degrees, many financial marketers continue to find value in TV, radio, print and outdoor media, and in human input into what appears there.

The 100% figure is a little less startling, however, when you consider that about 55% of U.S. advertising was already online as of 2019, according to Nicolas Darveau-Garneau, Chief Evangelist at Google. The marketing executive, who is in touch regularly with the search giant's biggest advertisers, also notes that the 100% consists of two components: First, about 65% of the ads in 2025 will be online ads. Second, the other 35% will also be digital, but not online.

"Whether you're buying a billboard or you're buying television, it will be a lot more like buying YouTube," he says. "Machine learning algorithms are going to automate most advertising in the next five years." The time that bank and credit union marketers spend today optimizing media, selecting keywords and placing the right targeting on banner ads will be done by machines more and more, says Darveau-Garneau.

Machine learning is doubling in power every four to six months, he points out. Even as that rate begins to slow, there will still be a multiple-thousand-X improvement in machine learning power within the next ten years.

That kind of dramatic change prompts two big questions from CMOs the Google exec speaks with:

In answering the first question, Darveau-Garneau, who spoke during a WPromote virtual presentation, explores three key points:

As he says, all three need to happen for institutions to be able to compete. The effort to accomplish that, which is difficult, also takes care of the question about job security: there will be plenty of marketing jobs, just different ones, as Darveau-Garneau expands upon below.

Before he joined Google about nine years ago, Darveau-Garneau was steeped in performance marketing, essentially the modern digital marketing milieu of data and metrics, with everything measurable. Yet he believes that to compete more effectively in the rapidly automating marketing world, many CMOs need to shift their thinking about performance marketing. They should create a marketing strategy that, simply put, "makes you as much money as possible, trying to squeeze every ounce of profit you can," Darveau-Garneau states. "That's the most important KPI," he emphasizes.

While that advice may seem self-evident, the Google marketer says few advertisers he works with are trying to make as much money as possible. Instead, many are trying to achieve the highest ROI.

And that is not the same thing.

"Maximizing cash flow is very different from maximizing ROI," Darveau-Garneau states. "The best advice I can give you in your performance marketing strategy is to build a dashboard that motivates your marketing teams to maximize profitability, as opposed to efficiency."

Don't fall in love with your ROAS
Google tools today cannot automatically maximize a financial institution's profitability, but Darveau-Garneau says they can produce maximum revenues for a given return on ad spend (ROAS). So a bank CMO can incorporate maximum profitability as a criterion for finding the right ROAS. But the Google exec advises care in selecting the right ROAS, whether that is five-to-one, seven-to-one or another number.

"Don't fall in love with your ROAS," Darveau-Garneau states. "Test various numbers up and down to see which one makes you the most money." Once you know what the right ROAS is for your business, he adds, make sure you get enough budget to cover full demand.
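The "test various numbers" advice can be sketched with a toy diminishing-returns model (the revenue curve and margin below are invented for illustration, not Google data): the budget with the highest ROAS is not the budget with the highest profit.

```python
import numpy as np

def revenue(spend):
    # Toy diminishing-returns response curve: each extra ad dollar
    # brings in less revenue than the previous one.
    return 200.0 * np.sqrt(spend)

margin = 0.3  # assumed gross margin on ad-driven revenue

spends = np.linspace(100, 20000, 500)
roas = revenue(spends) / spends              # return on ad spend
profit = margin * revenue(spends) - spends   # money actually made

best_roas_spend = spends[np.argmax(roas)]
best_profit_spend = spends[np.argmax(profit)]
# ROAS peaks at the smallest budget; profit peaks at a much larger one
# (analytically at spend = (0.3 * 200 / 2)**2 * 4 / 4 = 900 for this curve).
```

Sweeping the target, as Darveau-Garneau suggests, is exactly this search: find the spend level where profit, not the ratio, is maximized.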

Why customer lifetime value makes so much sense
Managing a company based on customer lifetime value is the future of business, Darveau-Garneau firmly believes. He ran four marketing (CX?) startups before joining Google. In hindsight, he says, he should have narrowed his customer database at these firms by building marketing based on customer lifetime value (CLV). This requires determining who a company's top customers are and then acquiring more people like that.

While financial inclusion is a major theme in banking today, banks and credit union marketers can benefit from a CLV focus in terms of outreach and messaging for loans, savings, investments and many other products.

"The best advice I can give you," says Darveau-Garneau, "is don't try to forecast customer lifetime value perfectly. Just do it approximately: quintiles or deciles." One example of how to use CLV as part of efforts to personalize marketing is to A/B test landing pages to see which one converts better for your high-CLV customers compared with average customers.
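The quintile approach amounts to a few lines of pandas. The CLV numbers below are simulated; in practice any rough per-customer estimate would stand in for them:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Simulated customer lifetime value estimates for 1,000 customers.
clv = pd.Series(rng.lognormal(mean=6.0, sigma=1.0, size=1000))

# Bucket into quintiles: 1 = lowest-value fifth, 5 = highest.
quintile = pd.qcut(clv, q=5, labels=[1, 2, 3, 4, 5])

# The top bucket is the segment to profile, target, and find
# look-alikes for in acquisition campaigns.
top_customers = clv[quintile == 5]
```

Rough buckets like these are enough to run the high-CLV-versus-average comparisons the text describes, without a precise CLV forecast per customer.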

Don't worry, marketing jobs aren't going away
While automation will increasingly handle things like selecting brand placements, Darveau-Garneau maintains that marketing work will shift to things such as building CLV models, segmenting customers in clever ways, optimizing creative and having the right data structure and data sets to feed into the machine learning algorithms.

"I actually think there will be more people doing marketing five years from now than there are now," he states, "because it's going to be easier in some ways, but much more complex in other ways."

You could describe Nicolas Darveau-Garneau as a reformed performance marketer. For much of his career, he never did any brand marketing. As he describes it, performance marketers always have this tension about brand marketing, because they like to measure things accurately to be sure they're not wasting money. He's changed his tune now, to the point where "Build a strong brand" is the second of his three key strategies for marketers to be ready for the future.

Darveau-Garneau points to the fintech Credit Karma as a great example of a company combining performance marketing with great brand marketing. There is an extraordinary amount of value created by building a strong brand, he insists. This includes having consumers go directly to your site, or searching specifically for your brand on Google, or generating higher conversions.

These advantages are harder to measure than the clicks, leads or sales that result from pay-for-performance advertising, but they can be measured over time. Darveau-Garneau counsels patience to those skeptical of brand marketing's benefits. It takes three months to a year, he says, to see the impact of a consideration or awareness campaign.

To financial marketers who still need convincing, he recommends starting small and trying out a branding campaign in one state (or possibly one part of a state) and tracking how business does there over six months. This doesn't require a big investment in a hardcore attribution model. If successful, it can then be expanded.

"Brand marketing is becoming a lot more like performance marketing," the Google exec states. Brand marketing should be optimized in real time and held accountable, he adds, but given some time to work.

Also, to the point raised earlier: as machine learning makes performance marketing easier, it diminishes the competitive advantage it confers. That makes building a strong brand that much more important, Darveau-Garneau emphasizes. Ideally, financial institution marketers who can combine the skill sets of both disciplines will be in a good position, he believes.

Bank and credit union marketers can be doing great performance marketing and great brand marketing, "but if you're sending these clicks to a site that doesn't perform very well, it's going to be hard to compete," stresses Darveau-Garneau. A simple example is having a fast mobile site. He cites data from Chinese ecommerce giant Alibaba, in which an already good conversion rate jumped 76% when the company built a much faster mobile site.

Friction is the enemy of great digital experience, which in turn robs marketing of much of its power. The Google executive counsels CMOs to remove anything that creates significant friction: remove one field from a form, for example, or add Google Pay or Apple Pay to your app. Get the ball rolling so your marketing and digital banking teams start looking for things to remove to streamline the customer experience.

Don't get hung up on mega-projects that are huge investments and take forever, like breaking down data silos and merging them all into one vast data lake, Darveau-Garneau advises. Such projects should be undertaken over the long term, but think about small projects in the short term.

"I've seen a lot of marketers trying to get things perfect from the beginning, as opposed to peeling the onion and just getting better every day," Darveau-Garneau observes.

With the surge in ecommerce unleashed by the pandemic's arrival, financial marketers may be wondering whether omnichannel marketing even makes sense any more, versus concentrating solely on digital.

While acknowledging the difficulty of forecasting what will happen to in-person commerce (and in-person banking), Darveau-Garneau firmly believes that whatever new normal arises, people will once again venture into retail facilities, so having an omnichannel strategy makes a lot of sense.

Financial marketers should be sure to include in-branch and other channel data beyond website and mobile data in what they share with the machine learning application they use. In the case of Google, Darveau-Garneau advises not to think of the company as driving just your online business. We can help you drive your store business as well. The company now has tools to integrate data, revenue and margin, for example, from physical locations into its smart bidding algorithms.

Importantly, Darveau-Garneau says Google has found that for many of its customers, including those in banking, consumers who buy both online and in-store are often much better customers than those who don't.

Quantum startup CEO suggests we are only five years away from a quantum desktop computer – TechCrunch

Today at TechCrunch Disrupt 2020, leaders from three quantum computing startups joined TechCrunch editor Frederic Lardinois to discuss the future of the technology. IonQ CEO and president Peter Chapman suggested we could be as little as five years away from a desktop quantum computer, but not everyone agreed on that optimistic timeline.

"I think within the next several years, five years or so, you'll start to see [desktop quantum machines]. Our goal is to get to a rack-mounted quantum computer," Chapman said.

But that seemed a tad optimistic to Alan Baratz, CEO of D-Wave Systems. He says that the superconducting technology his company is building requires a special kind of rather large quantum refrigeration unit called a dilution fridge, and that unit makes a five-year goal of a desktop quantum PC highly unlikely.

Itamar Sivan, CEO of Quantum Machines, also believes there are many steps to go before we see that kind of technology, and many hurdles to overcome along the way.

"This challenge is not within a specific, singular problem about finding the right material or solving some very specific equation, or anything. It's really a challenge which is multidisciplinary to be solved here," Sivan said.

Chapman also sees a day when we could have edge quantum machines, for instance on a military plane that couldn't access quantum machines from the cloud efficiently.

"You know, you can't rely on a system which is sitting in a cloud. So it needs to be on the plane itself. If you're going to apply quantum to military applications, then you're going to need edge-deployed quantum computers," he said.

One thing worth mentioning is that IonQ's approach to quantum is very different from D-Wave's and Quantum Machines'.

IonQ relies on technology pioneered in atomic clocks for its form of quantum computing. Quantum Machines doesn't build quantum processors. Instead, it builds the hardware and software layer to control these machines, which are reaching a point where that can't be done with classical computers anymore.

D-Wave, on the other hand, uses a concept called quantum annealing, which allows it to create thousands of qubits, but at the cost of higher error rates.

As the technology develops further in the coming decades, these companies believe they are offering value by giving customers a starting point into this powerful form of computing, which when harnessed will change the way we think of computing in a classical sense. But Sivan says there are many steps to get there.

"This is a huge challenge that would also require focused and highly specialized teams that specialize in each layer of the quantum computing stack," he said. One way to get there is to partner broadly on these fundamental problems, and to work with the cloud companies to bring quantum computing, however they choose to build it today, to a wider audience.

"In this regard, I think that this year we've seen some very interesting partnerships form which are essential for this to happen. We've seen companies like IonQ and D-Wave, and others, partnering with cloud providers who deliver their own quantum computers through other companies' cloud services," Sivan said. And he said his company would be announcing some partnerships of its own in the coming weeks.

The ultimate goal of all three companies is to eventually build a universal quantum computer, one that provides true quantum power. "We can and should continue marching toward universal quantum to get to the point where we can do things that just can't be done classically," Baratz said. But he and the others recognize we are still in the very early stages of reaching that end game.

Are We Close To Realising A Quantum Computer? Yes And No, Quantum Style – Swarajya

Scientists have been hard at work to get a new kind of computer going for about a couple of decades. This new variety is not a simple upgrade over what you and I use every day. It is different. They call it a quantum computer.

The name doesn't leave much to the imagination. It is a machine based on the central tenets of the most successful theory of physics yet devised: quantum mechanics. And since it is based on such a powerful theory, it promises to be so advanced that a conventional computer, the one we know and recognise, cannot keep up with it.

Think of the complex real-world problems that are hard to solve, and it's likely that quantum computers will someday throw up answers to them. Examples include simulating complex molecules to design new materials, making better forecasts for weather, earthquakes or volcanoes, mapping out the reaches of the universe, and, yes, demystifying quantum mechanics itself.

"One of the major goals of quantum computers is to simulate a quantum system. It is probably the reason why quantum computation is becoming a major reality," says Dr Arindam Ghosh, professor at the Department of Physics, Indian Institute of Science.

Given that the quantum computer is full of promise, and work on it has been underway for decades, it's fair to ask: do we have one yet?

"This is a million-dollar question, and there is no simple answer to it," says Dr Rajamani Vijayaraghavan, the head of the Quantum Measurement and Control Laboratory at the Tata Institute of Fundamental Research (TIFR). Depending on how you view it, we already have a quantum computer, or we will have one in the future if the aim is to have one that is practical or commercial in nature.

We have it and don't. That sounds about quantum.

In the United States, Google has been setting new benchmarks in quantum computing.

Last year, in October, it declared quantum supremacy, a demonstration of a quantum computer's superiority over its classical counterpart. Google's Sycamore processor took 200 seconds to make a calculation that, the company claims, would have taken 10,000 years on the world's most powerful supercomputer.

This accomplishment came with conditions attached. IBM, whose supercomputer Summit (the worlds fastest) came second-best to Sycamore, contested the 10,000-year claim and said that the calculation would have instead taken two and a half days with a tweak to how the supercomputer approached the task.

Some experts suggested that the nature of the task, generating random numbers in a quantum way, was not particularly suited to the classical machine. Besides, Googles quantum processor didnt dabble in a real-world application.

Yet, Google was on to something. For even the harsh critic, it provided a glimpse of the spectacular processing power of a quantum computer and whats possible down the road.

Google did one better recently. They simulated a chemical reaction on their quantum computer the rearrangement of hydrogen atoms around nitrogen atoms in a diazene molecule (nitrogen hydride or N2H2).

The reaction was a simple one, but it opened the doors to simulating more complex molecules in the future an eager expectation from a quantum computer.

But how do we get there? That would require scaling up the system. More precisely, the number of qubits in the machine would have to increase.

Short for quantum bits, qubits are the basic building blocks of quantum computers. They are the counterparts of the classical binary bits, zero and one, but with an important difference: while a classical bit assumes a state of zero or one, a qubit can accommodate both zero and one at the same time, a quantum-mechanical principle called superposition.

Similarly, qubits can be entangled. That is when two qubits in superposition are bound in such a way that the state of one dictates the state of the other. It is what Albert Einstein in his lifetime described, and dismissed, as "spooky action at a distance."
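For readers who want to see the machinery, both ideas can be demonstrated with a tiny statevector simulation. The sketch below is plain Python, not real quantum hardware: it applies a Hadamard gate to put one qubit in superposition, then a CNOT gate to entangle it with a second, producing the Bell state whose two measurement outcomes always agree.

```python
import math
import random

# State of two qubits as 4 complex amplitudes for |00>, |01>, |10>, |11>,
# where index = (qubit0 bit) * 2 + (qubit1 bit). Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

# Hadamard on qubit 0: mixes amplitudes whose qubit-0 bit differs,
# putting that qubit in an equal superposition of 0 and 1.
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
         h * (state[0] - state[2]), h * (state[1] - state[3])]

# CNOT (control = qubit 0, target = qubit 1): flips the target
# only when the control is 1, i.e. swaps the |10> and |11> amplitudes.
state[2], state[3] = state[3], state[2]

# Result: the Bell state (|00> + |11>)/sqrt(2). Only "both zero" and
# "both one" have nonzero probability.
probs = [abs(a) ** 2 for a in state]

def measure():
    """Sample one joint measurement outcome from the probabilities."""
    r, total = random.random(), 0.0
    for outcome, p in enumerate(probs):
        total += p
        if r < total:
            return outcome >> 1, outcome & 1
    return 1, 1

samples = [measure() for _ in range(1000)]
# Entanglement in action: the two bits are perfectly correlated.
assert all(a == b for a, b in samples)
```

The point of the toy model is only to show why qubits are more than fancy bits: two entangled qubits are described by one joint list of amplitudes, not two independent ones.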

Qubits in these counterintuitive states are what allow a quantum computer to work its magic.

Presently, the largest qubit count, 72, belongs to a Google device. The Sycamore processor, the Google chip behind the simulated chemical reaction, has a 53-qubit configuration. IBM has 53 qubits too, and Intel has 49. Some of the academic labs working with quantum computing technology, such as the one at Harvard, have about 40-50 qubits. In China, researchers say they are on course to develop a 60-qubit quantum computing system within this year.

The grouping is evident. The convergence is, more or less, around 50-60 qubits. That puts us in an interesting place. "About 50 qubits can be considered the break-even point, the one where the classical computer struggles to keep up with its quantum counterpart," says Dr Vijayaraghavan.

It is generally acknowledged that once qubit counts rise to about 100, the classical computer gets left behind entirely. That stage is not far away. According to Dr Ghosh of IISc, the rate of qubit increase today is faster than the pace of development of electronics in its early days.

"Over the next couple of years, we can get to 100-200 qubits," Dr Vijayaraghavan says.

A few years after that, we could possibly reach 300 qubits. For a perspective on how high that is, this is what Harvard Quantum Initiative co-director Mikhail Lukin has said about such a machine: "If you had a system of 300 qubits, you could store and process more bits of information than the number of particles in the universe."
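That claim is easy to sanity-check with back-of-the-envelope arithmetic: an n-qubit state is described by 2^n amplitudes, and a common rough estimate puts the number of particles in the observable universe at around 10^80.

```python
# An n-qubit quantum state is described by 2**n complex amplitudes.
amplitudes_300 = 2 ** 300

# A widely quoted rough estimate for the number of particles
# in the observable universe is on the order of 10**80.
particles_estimate = 10 ** 80

print(amplitudes_300 > particles_estimate)  # True: 2**300 dwarfs 10**80
print(len(str(amplitudes_300)))             # 2**300 has 91 decimal digits
```

In other words, 2^300 is roughly 10^90, about ten billion times the particle estimate, which is why even a 300-qubit machine could not be simulated amplitude-by-amplitude on any conceivable classical computer.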

Indian labs are working with far fewer qubits. There is some catching up to do. Typically, India is slow off the blocks in pursuing frontier research, but the good news is that the pace has been picking up over the years, especially in the quantum area.

At TIFR, researchers have developed a unique three-qubit "trimon" quantum processor. Three qubits might seem small compared with the examples cited earlier, but together they pack a punch. "We have shown that for certain types of algorithms, our three-qubit processor does better than the IBM machine. It turns out that some gate operations are more efficient on our system than the IBM one," says Dr Vijayaraghavan.

The special ingredient of the trimon processor is three well-connected qubits rather than three individual qubits, a subtle but important difference.

Dr Vijayaraghavan plans to build more of these trimon quantum processors going forward, hoping that the advantages of a single trimon system spill over onto larger machines.

TIFR is simultaneously developing a conventional seven-qubit transmon (as opposed to trimon) system. It is expected to be ready in about one and a half years.

About a thousand kilometres south, at IISc, two labs under the Department of Instrumentation and Applied Physics are developing quantum processors too, with allied research underway in the Departments of Computer Science and Automation, and Physics, as well as the Centre for Nano Science and Engineering.

IISc plans to develop an eight-qubit superconducting processor within three years.

"Once we have the know-how to build a working eight-qubit processor, scaling it up to tens of qubits in the future is easier, as it is then a matter of engineering progression," says Dr Ghosh, who is associated with the Quantum Materials and Devices Group at IISc.

It is not hard to imagine India catching up with the more advanced players in the quantum field this decade. The key is not to think in terms of India building the biggest or the best machine; it is not necessary to have the most qubits. Small scientific breakthroughs that have the power to move the quantum dial decisively forward can come from any lab in India.

Zooming out to a global point of view, the trajectory of quantum computing is hazy beyond a few years. We have been talking about qubits in the hundreds, but, to have commercial relevance, a quantum computer needs lakhs (hundreds of thousands) of qubits in its armoury. That is the challenge, and a mighty big one.

It isn't even the case that simply piling up qubits will do the job. As the number of qubits in a system goes up, they must remain stable, highly connected, and error-free. Qubits cannot hang on to their quantum states amid environmental noise such as heat or stray atoms or molecules; that is why quantum computers are operated at temperatures in the range of a few millikelvin to a kelvin. The slightest disturbance can knock the qubits out of their quantum states of superposition and entanglement, leaving them to operate as classical bits.

If you are trying to simulate a quantum system, that's no good.

For that reason, even if the qubits are few, quantum computation can work well if the qubits are highly connected and error-free.

Companies like Honeywell and IBM are, therefore, looking beyond the number of qubits and instead eyeing a parameter called quantum volume.

Honeywell claimed earlier this year that it had the world's highest-performing quantum computer on the basis of quantum volume, even though the machine had just six qubits.

Dr Ghosh says quantum volume is indeed an important metric. "The number of qubits alone is not the benchmark. You do need enough of them to do meaningful computation, but you need to look at quantum volume, which measures the length and complexity of quantum circuits. The higher the quantum volume, the higher the potential for solving real-world problems."
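To make the metric concrete: in IBM's formulation, quantum volume is 2^n, where n is the size of the largest "square" circuit (n qubits wide, n gate layers deep) the machine can run reliably. The sketch below is a deliberate simplification, collapsing the reliability criterion into a single assumed `max_reliable_depth` parameter, but it captures why a clean six-qubit machine can beat a noisy fifty-qubit one.

```python
def quantum_volume(n_qubits: int, max_reliable_depth: int) -> int:
    """Simplified IBM-style quantum volume: 2**n for the largest square
    circuit (n qubits, n layers) the machine can execute reliably.
    max_reliable_depth stands in for the full statistical test."""
    n = min(n_qubits, max_reliable_depth)
    return 2 ** n

# A low-error 6-qubit machine sustaining depth >= 6 reaches QV 64,
# while a noisier 53-qubit device limited to depth 4 manages only QV 16.
print(quantum_volume(6, 6))    # 64
print(quantum_volume(53, 4))   # 16
```

This is why Honeywell could claim a performance lead with six qubits: depth and fidelity count for as much as width.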

It comes down to error correction. Dr Vijayaraghavan says none of the big quantum machines in the US today uses error-correction technology. "If that can be demonstrated over the next five years, it would count as a real breakthrough," he says.

Guarding the system against faults, or "errors", is the focus of researchers now as they look to scale up the qubits in a system. Developing a system with hundreds of thousands of qubits without correcting for errors would cancel out the benefits of a quantum computer.

As with any research in a frontier area, progress will have to be accompanied by scientific breakthroughs across several different fields, from software to physics to materials science and engineering.

In light of that, collaboration between academia and industry is going to play a major role going forward. Playing to their respective strengths, academic labs can supply the core expertise necessary to get a quantum computer going, while industry can provide the engineering muscle to build the intricate hardware. Both are important parts of the quantum computing puzzle. At the end of the day, the quantum part of a quantum computer is tiny; most of the machine is high-end electronics, and industry can support that.

It is useful to recall at this point that even our conventional computers took decades to develop, from the first transistor in 1947 to the first microprocessor in 1971. The computers we use today would be unrecognisable to people in the 1970s. In the same way, what quantum computing will look like, say, 20 years down the line is unknown to us today.

However, governments around the world, including India's, are putting their weight behind the development of quantum technology. It is easy to see why. Hopefully, this decade can be the springboard that launches quantum computing higher than ever before. All signs point to it.

Excerpt from:
Are We Close To Realising A Quantum Computer? Yes And No, Quantum Style - Swarajya

How quantum computing could drive the future auto industry – TechHQ

Quantum Computing (QC) has been gaining huge momentum in the last few years. Recent breakthroughs in affordable technology have seen conversations shift from the theoretical to practical use cases.

As early as 2018, IBM drew attention across the tech world with the creation of its Q System One quantum computer, while D-Wave Technologies went on to announce a QC chip with 5,000 qubits, more than doubling its own previous 2,000-qubit record.

While quantum-computing applications may still be five to ten years down the road, a recent report by McKinsey shows that the automotive and transportation sectors have been quick to capitalize on QC's potential, and have successfully showcased how effective the technology can be with several pilots.

Several OEMs (original equipment manufacturers) and tier-one suppliers are actively discovering how the technology can benefit the industry by resolving existing issues related to route optimization, fuel-cell optimization, and material durability.

Just last year, Volkswagen partnered with D-Wave to demonstrate an efficient traffic-management system that optimized the travel routes of nine public-transit buses during the 2019 Web Summit in Lisbon.

Elsewhere, significant investments have already been made, with German supplier Bosch acquiring a stake in Massachusetts-based quantum start-up Zapata Computing, contributing to a US$21 million Series A investment.

BMW, Daimler, and Volkswagen have announced that they are actively pursuing QC research, including quantum simulation for material sciences, aiming to improve the efficiency, safety, and durability of batteries and fuel cells.


The potential for QC in the automotive sector could translate into billions of dollars in value as OEMs and automotive stakeholders home in on the market's niches and develop a clear QC strategy.

As things stand, automotive is set to be one of the primary value pools for QC, with an expected impact on the industry of up to US$3 billion by 2030, thanks to the technology's potential for solving complex optimization problems, such as processing vast amounts of data to accelerate learning in autonomous-vehicle-navigation algorithms.

QC will later have a positive effect on vehicle routing and route optimization as well as material and process research, and will help improve the security of connected driving and accelerate research into electric vehicles (EVs).

Supply routes involving several modes of transport could be optimized using algorithms developed through QC, while other applications will improve energy storage and generative design. QC could also help suppliers improve or refine kinetic properties of materials for lightweight structures and adhesives, as well as develop efficient cooling systems.

QC will be utilized by automakers during vehicle design to produce improvements such as minimizing drag and improving fuel efficiency. What's more, QC has the ability to perform advanced simulations in areas such as vehicle crash behavior and cabin soundproofing, as well as to train algorithms used in the development of autonomous-driving software. QC's potential to reduce computing times from several weeks to a few seconds means that OEMs could ensure car-to-car communications in real time, every time.

Shared mobility players such as Lyft and Uber also have the potential to use QC to optimize vehicle routing, while improving fleet efficiency and availability. Alternatively, QC can help service providers simulate complex economic scenarios to predict how demand varies by geography.

Within the next five years, the automotive industry will continue to focus on product development and R&D.

QC isn't likely to replace existing high-performance computing (HPC); instead, it will rely heavily on hybrid schemes in which conventional HPC helps refine problem-solving. A computational problem, for example finding the most efficient option among billions of possible combinations, would initially be iterated on a quantum computer to get an approximate answer, with the remainder handled by an HPC system to round off the assessment in that subset of the solution space.
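That hybrid division of labour can be sketched in a few lines of code. The toy example below is illustrative only: `quantum_sample` is a stand-in for a real quantum sampler (here mocked with random draws), proposing coarse candidate routes for a tiny routing problem, after which a classical stage refines the best candidate locally, as an HPC system would.

```python
import itertools
import random

# Toy routing problem: visit 6 cities, minimise total tour distance.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(6)]

def tour_length(order):
    """Total length of a closed tour over the cities in the given order."""
    return sum(((cities[a][0] - cities[b][0]) ** 2 +
                (cities[a][1] - cities[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:] + order[:1]))

def quantum_sample(k):
    """Stand-in for the quantum stage: a real hybrid scheme would draw
    candidates from quantum hardware; here we just draw random tours."""
    return [random.sample(range(6), 6) for _ in range(k)]

# Stage 1 ("quantum"): take the best of the sampler's coarse candidates.
best = min(quantum_sample(50), key=tour_length)

# Stage 2 (classical refinement): greedy 2-swap local search.
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(6), 2):
        trial = best[:]
        trial[i], trial[j] = trial[j], trial[i]
        if tour_length(trial) < tour_length(best):
            best, improved = trial, True

print(best, round(tour_length(best), 3))
```

The structure, a cheap approximate global stage feeding an exact local stage, is the essence of the hybrid schemes the article describes, whatever hardware eventually fills the first role.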

The pathway for QC is still uncertain despite its potential. Investing in QC is a heavy commitment but will almost certainly put companies ahead of competitors further down the line once it has become more mainstream in use.

Automotive players will need to determine exactly where they fit in the value chain, while building solid partnerships and valuable intellectual property.

The next five to ten years will see players prioritizing application development and building focused capabilities, while making first pilots and prototypes operational. Ten years and beyond will see businesses take full advantage of their technological edge through QC and expand their core capabilities.

As QC continues to make breakthroughs, the automotive sector is set to drive the technology to the next level.

Read more:
How quantum computing could drive the future auto industry - TechHQ

Putting the Quantum in Security – Optics & Photonics News

Grégoire Ribordy [Image: Courtesy of ID Quantique]

On the second day of OSA's Quantum 2.0 conference, the focus shifted from quantum computing to other aspects of quantum technology, particularly quantum communications and quantum sensing. On that note, Grégoire Ribordy, the founder of the Switzerland-based quantum crypto firm ID Quantique, looked at how quantum technologies are being employed to meet the long-term challenges in data security posed by quantum computing itself.

ID Quantique has a long pedigree in quantum technology; the company has been in business since 2001. "In retrospect, we were really crazy to start a company in quantum technology in 2001," Ribordy said. "It was way too early." But the firm forged ahead and has now developed a suite of applications in the data-security space.

Ribordy stressed that, especially over the past few months, it has become increasingly clear that digital security, and protecting digital information against hacking, is extremely important. Classical cryptography assembles a set of techniques for hiding information from unauthorized users, which Ribordy compared to building a castle around the data.

The problem, however, is that once quantum computers become a reality, one application for them will be to crack the cryptography systems currently in use. When that happens, said Ribordy, the walls we have today won't be able to protect the data anymore. The best cryptography techniques for avoiding that baleful outcome, he suggested, are those that themselves rely on quantum technology, and that can provide robust protection while still allowing the convenience of the prevailing classical private-key encryption systems.

[Image: Grégoire Ribordy/OSA Quantum 2.0 Conference]

Just how much one should worry about all of this now, when quantum computers powerful enough to do this sort of cracking still lie years in the future, depends, according to Ribordy, on three factors. One, which he labeled x, is how long you need current data to stay encrypted: perhaps only a short time for some kinds of records, decades for others. The second, y, is the time it will take to retool the current infrastructure into something quantum-safe. And the third, z, is how long it will actually take for large-scale, encryption-breaking quantum computers to be built.

If x and/or y are longer than z, he suggested, we have a problem, and there's a lot of debate today surrounding just what the values of these variables are. One of ID Quantique's services is to take clients through a quantum risk assessment that attempts to suss out how long they need to protect their data, and what the implications are for their cryptography approach.
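The three-factor test is simple enough to write down. The sketch below follows the talk's "x and/or y longer than z" phrasing; note that a stricter and widely cited form, due to cryptographer Michele Mosca, flags risk whenever x + y > z. The example figures are hypothetical.

```python
def quantum_at_risk(x_secrecy_years: float,
                    y_migration_years: float,
                    z_quantum_years: float) -> bool:
    """Ribordy's three-factor test: data that must stay secret for x
    years, on infrastructure taking y years to become quantum-safe,
    is at risk if either horizon outlasts the z years until
    encryption-breaking quantum computers arrive."""
    return (x_secrecy_years > z_quantum_years or
            y_migration_years > z_quantum_years)

# Records that must stay confidential for 30 years are already exposed
# if capable quantum computers are an assumed 15 years away...
print(quantum_at_risk(30, 5, 15))   # True
# ...while short-lived data on an agile stack may not be.
print(quantum_at_risk(1, 2, 15))    # False
```

The debate Ribordy mentions is precisely over the inputs: nobody disputes the inequality, only the values of x, y and especially z.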

Ribordy cited three key components of effective long-term quantum encryption. One, and perhaps the oldest, is quantum random number generation (QRNG) to build security keys, whether classical or quantum. A second is something that Ribordy called crypto-agility. ("You don't hard-code cryptography," he explained. "Instead, you want to upgrade it whenever a new advance comes.") And the third component is quantum key distribution (QKD), a technique still under active development but already being deployed in some cases.

On the first component, Ribordy noted that ID Quantique has been active in QRNG since 2014, when the idea arose of using mobile-phone camera sensors as a source of quantum random numbers. These arrays of pixels, he said, can provide both large rates of raw entropy (an obvious necessity for true randomness) and an industry-compatible interface. He walked the audience through the company's efforts to create a low-cost (CMOS-based), low-power, security-compliant chip for QRNG, beginning with early experiments using a Nokia phone and moving through the required efforts at miniaturization, engineering for stability and consistency, and avoiding such pitfalls as correlations between different camera pixels, which would degrade the randomness of the output.
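The core trick, distilling uniform bits from raw and possibly biased sensor noise, can be illustrated with the classic von Neumann extractor. This is a deliberately simplified stand-in: ID Quantique's actual post-processing is more sophisticated, and the "sensor" below is simulated.

```python
import random

def von_neumann_extract(raw_bits):
    """Debias a stream of independent but biased bits: read pairs,
    emit the first bit of '01' or '10', discard '00' and '11'.
    The output is unbiased whatever the input bias, at the cost
    of throughput."""
    return [a for a, b in zip(raw_bits[::2], raw_bits[1::2]) if a != b]

# Simulated noisy pixel source, heavily biased towards 1.
random.seed(1)
raw = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]

clean = von_neumann_extract(raw)
bias = sum(clean) / len(clean)
print(round(bias, 2))  # close to 0.50 despite the 0.80 input bias
```

Real QRNG chips face the further problem the talk mentions, correlations between neighbouring pixels, which this extractor alone does not fix; it assumes independent samples.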

The result, Ribordy said, is a QRNG chip that has recently been added to a new Samsung mobile phone, appropriately named the Galaxy A71 Quantum, that is now available in the Republic of Korea. And the chip is not just window dressing: a Korean software company partnered with Samsung to create apps for payment services, cryptocurrency services and other features that rely on random numbers, and that use the ID Quantique chip to get high-quality instances of them.

Grégoire Ribordy, at the OSA Quantum 2.0 conference.

"We think this is very important," said Ribordy, "because it shows that quantum technologies can be industrialized and integrated into applications."

In terms of such industrialization, another security application, quantum key distribution (QKD), is not as advanced as QRNG, according to Ribordy, but he argued that the experience of QRNG bodes well for QKD's commercialization path. One issue for QKD is the short distance over which such secure links can run in fiber before quantum bit error rates become too high, though Ribordy pointed to a recent paper in Nature Photonics in which practical QKD was demonstrated across a fiber link of 307 km.
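The basic idea behind QKD is easiest to see in the well-known BB84 protocol. The sketch below simulates it with no channel noise and no eavesdropper; a real fiber link, where photon loss and errors grow with distance, is exactly where the range limits above come from.

```python
import random

random.seed(42)
n = 2000

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
# and sends one photon per bit, polarised accordingly.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each photon in his own random basis. When his basis matches
# Alice's he recovers her bit; when it differs, he gets a coin flip.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never the bits themselves) and keep
# only the rounds where the bases matched: the "sifted" key.
key_a = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
         if ab == bb]
key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
         if ab == bb]

assert key_a == key_b  # a shared secret key, roughly n/2 bits long
print(len(key_a))
```

The quantum guarantee, absent from this classical simulation, is that an eavesdropper measuring the photons in the wrong basis would disturb them and reveal herself through a raised error rate.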

Ribordy noted a number of areas of particular activity in the QKD sphere. One active area of interest, for example, is developing network topologies that play especially well with QKD. ID Quantique is also working with SK Telecom in the Republic of Korea on how QKD can be integrated into the optical networks underlying next-generation 5G wireless. In these circumstances, the proverbial last mile, operating at radio frequencies, can only be secured with traditional cryptography, but using QKD on the optical part of the communication chain will make the network as a whole more secure.

A number of other projects are in the works as well, Ribordy said, including a European project, Open QKD, the goal of which is to prepare the next generation of QKD deployment in Europe. And large-scale deployment projects are afoot in China as well.

The presence of these diverging global efforts prompted a question in the Q&A session that followed Ribordy's talk: just how open are these QKD markets? Ribordy noted that in the near term they are closing down: "Since quantum is a new industry, every country or region would like to be a player." The Chinese QKD ecosystem, he suggested, is completely cut off ("there is kind of a Galapagos effect"), and Europe also is starting to become a more closed ecosystem in the QKD arena. Ribordy views this as a sign of market immaturity, however, and believes things will become more open again in the future with efforts toward certification and standardization.

See the original post:
Putting the Quantum in Security - Optics & Photonics News

The Hyperion-insideHPC Interviews: ORNL Distinguished Scientist Travis Humble on Coupling Classical and Quantum Computing – insideHPC

Oak Ridge National Lab's Travis Humble has worked at the headwaters of quantum computing research for years. In this interview, he talks about his particular areas of interest, including the integration of quantum computing with classical HPC systems. "We've already recognized that we can accelerate solving scientific applications using quantum computers," he says. "These demonstrations are just early examples of how we expect quantum computers can take us to the most challenging problems for scientific discovery."

In This Update. From the HPC User Forum Steering Committee

By Steve Conway and Thomas Gerard

After the global pandemic forced Hyperion Research to cancel the April 2020 HPC User Forum planned for Princeton, New Jersey, we decided to reach out to the HPC community in another way: by publishing a series of interviews with leading members of the worldwide HPC community. Our hope is that these seasoned leaders' perspectives on HPC's past, present and future will be interesting and beneficial to others. To conduct the interviews, Hyperion Research engaged insideHPC Media. We welcome comments and questions addressed to Steve Conway, sconway@hyperionres.com, or Earl Joseph, ejoseph@hyperionres.com.

This interview is with Travis Humble, Deputy Director at the Department of Energy's Quantum Science Center, a Distinguished Scientist at Oak Ridge National Laboratory, and director of the lab's Quantum Computing Institute. Travis is leading the development of new quantum technologies and infrastructure to impact the DOE mission of scientific discovery through quantum computing. He is editor-in-chief of ACM Transactions on Quantum Computing, associate editor for Quantum Information Processing, and co-chair of the IEEE Quantum Initiative. Travis also holds a joint faculty appointment with the University of Tennessee Bredesen Center for Interdisciplinary Research and Graduate Education, where he works with students on developing energy-efficient computing solutions. He received his doctorate in theoretical chemistry from the University of Oregon before joining ORNL in 2005.

The HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users. More than 75 HPC User Forum meetings have been held in the Americas, Europe and the Asia-Pacific region since the organization's founding.

Doug Black: Hi, everyone. I'm Doug Black. I'm editor-in-chief at insideHPC, and today we are talking with Dr. Travis Humble. He is a distinguished scientist at Oak Ridge National Lab, where he is director of the lab's Quantum Computing Institute. Dr. Humble, welcome. Thanks for joining us today.

Travis Humble: Thanks for having me on, Doug.

Black: Travis, tell us, if you would, about the area of quantum computing that you're working in and the research you're doing that you're most excited about, that has what you would regard as the greatest potential.

Humble: Quantum computing is a really exciting area, so it's really hard to narrow it down to just one example. This is the intersection of quantum information (quantum mechanics) with computer science.

We've already recognized that we can accelerate solving scientific applications using quantum computers. At Oak Ridge, for example, we have already demonstrated examples in chemistry, material science and high-energy physics, where we can use quantum computers to solve problems in those areas. These demonstrations are just early examples of how we expect quantum computers can take us to the most challenging problems for scientific discovery.

My own research is actually focused on how we could integrate quantum computers with high-performance computing systems. Of course, we are adopting an accelerator model at Oak Ridge, where we are thinking about using quantum processors to offload the most challenging computational tasks. Now, this seems like an obvious approach; the best of both worlds. But the truth is that there are a lot of challenges in bringing those two systems together.

Black: It sounds like sort of a hybrid approach, almost a CPU/GPU, only we're talking about systems writ large. Tell us about DOE's and Oak Ridge's overall quantum strategy and how the Quantum Computing Institute works with vendors and academic institutions on quantum technology development.

Humble: The Oak Ridge National Laboratory has played an important role within the DOEs national laboratory system, a leading role in both research and infrastructure. In 2018, the President announced the National Quantum Initiative, which is intended to accelerate the development of quantum science and technology in the United States. Oak Ridge has taken the lead in the development of research, especially software applications and hardware, for how quantum computing can address scientific discovery.

At the same time, we've helped DOE establish a quantum computing user program, something we call QCUP. This is administered through the Oak Ridge Leadership Computing Facility, and it looks for the best of the best in terms of approaches to how quantum computers could be used for scientific discovery. We provide access through the user program in order for users to test and evaluate how quantum computers might be used to solve problems in basic energy science, nuclear physics, and other areas.

Black: Okay, great. So how far would you say we are from practical quantum computing and from what is referred to as quantum advantage, where quantum systems can run workloads faster than conventional or classical supercomputers?

Humble: This is such a great question. Quantum advantage, of course, is the idea that a quantum computer would be able to outperform any other conventional computing system on the planet. Very early in this fiscal year, back in October, there was an announcement from Google where they actually demonstrated an example of quantum advantage using their quantum computing hardware processor. Oak Ridge was part of that announcement, because we used our Summit supercomputer system as the baseline to compare that calculation.

But here's the rub: the Google demonstration was primarily a diagnostic check that their processor was behaving as expected, and the Summit supercomputer actually couldn't keep up with that type of diagnostic check. But when we look at the practical applications of quantum computing, still focusing on problems in chemistry, material science and other scientific disciplines, we appear to still be a few years away from demonstrating a quantum advantage for those applications. This is one of the hottest topics in the field at the moment, though. Once somebody can demonstrate that, we expect to see a great breakthrough in how quantum computers can be used in these practical areas.

Black: Okay. So, how did you become involved in quantum in the first place? Tell us a little bit about your background in technology.

Humble: I started early on studying quantum mechanics through chemistry. My focus, early on in research, was on theoretical chemistry and understanding how molecules behave quantum mechanically. One of the great ironies of my career has turned out to be that quantum computers present significant opportunities to solve chemistry problems using quantum mechanics.

So I got involved in quantum computing relatively early. Certainly, the last 15 years or so have been a roller coaster ride, mainly going uphill in terms of developing quantum computers and looking at the question of how they can intersect with high-performance computing. Being at Oak Ridge, that's just a natural question for me to come across. I work every day with people who are using some of the world's fastest supercomputers to solve the same types of problems that we think quantum computers would be best at. So for me, the intersection between those two areas just seems like a natural path to go down.

Black: I see. Are there any other topics related to all this that you'd like to add?

Humble: I think that quantum computing has a certain mystique around it. It's an exciting area and it relies on a field of physics that many people don't yet know about, but I certainly anticipate that in the future that's going to change. This is a topic that is probably going to impact everyone's lives. Maybe it's 10 years from now, maybe it's 20 years, but it's certainly something that I think we should start preparing for in the long term, and Oak Ridge is really happy to be one of the places helping to lead that change.

Black: Thanks so much for your time. It was great to talk with you.

Humble: Thanks so much, Doug. It was great to talk with you, too.

Read more from the original source:
The Hyperion-insideHPC Interviews: ORNL Distinguished Scientist Travis Humble on Coupling Classical and Quantum Computing - insideHPC

Spin-Based Quantum Computing Breakthrough: Physicists Achieve Tunable Spin Wave Excitation – SciTechDaily

Magnon excitation. Credit: Daria Sokol/MIPT Press Office

Physicists from MIPT and the Russian Quantum Center, joined by colleagues from Saratov State University and Michigan Technological University, have demonstrated new methods forcontrolling spin waves in nanostructured bismuth iron garnet films via short laser pulses. Presented inNano Letters, the solution has potential for applications in energy-efficient information transfer and spin-based quantum computing.

Aparticles spin is its intrinsic angular momentum, which always has a direction. Inmagnetized materials, the spins all point in one direction. A local disruption of this magnetic order is accompanied by the propagation of spin waves, whose quanta are known as magnons.

Unlike the electrical current, spin wave propagation does not involve a transfer of matter. Asaresult, using magnons rather than electrons to transmit information leads to much smaller thermal losses. Data can be encoded in the phase or amplitude of a spin wave and processed via wave interference or nonlinear effects.

Simple logical components based on magnons are already available as sample devices. However, one of the challenges of implementing this new technology is the need to control certain spin wave parameters. Inmany regards, exciting magnons optically is more convenient than by other means, with one of the advantages presented in the recent paper in Nano Letters.

The researchers excited spin waves in a nanostructured bismuth iron garnet. Even without nanopatterning, that material has unique optomagnetic properties. It is characterized by low magnetic attenuation, allowing magnons topropagate over large distances even at room temperature. It is also highly optically transparent in the near infrared range and has a high Verdet constant.

The film used in the study had an elaborate structure: a smooth lower layer with a one-dimensional grating formed on top, with a 450-nanometer period (Fig. 1). This geometry enables the excitation of magnons with a very specific spin distribution, which is not possible for an unmodified film.

To excite magnetization precession, the team used linearly polarized pump laser pulses, whose characteristics affected spin dynamics and the type of spin waves generated. Importantly, wave excitation resulted from optomagnetic rather than thermal effects.

Schematic representation of spin wave excitation by optical pulses. The laser pump pulse generates magnons by locally disrupting the ordering of spins shown as violet arrows in bismuth iron garnet (BiIG). A probe pulse is then used to recover information about the excited magnons. GGG denotes gadolinium gallium garnet, which serves as the substrate. Credit: Alexander Chernov et al./Nano Letters

The researchers relied on 250-femtosecond probe pulses to track the state of the sample and extract spin wave characteristics. A probe pulse can be directed to any point on the sample with a desired delay relative to the pump pulse. This yields information about the magnetization dynamics at a given point, which can be processed to determine the spin wave's spectral frequency, type, and other parameters.

Unlike previously available methods, the new approach enables control of the generated wave by varying several parameters of the laser pulse that excites it. In addition, the geometry of the nanostructured film allows the excitation center to be localized in a spot about 10 nanometers in size. The nanopattern also makes it possible to generate multiple distinct types of spin waves. The angle of incidence, wavelength, and polarization of the laser pulses enable resonant excitation of the sample's waveguide modes, which are determined by the nanostructure's characteristics, so the type of spin wave excited can be controlled. Each of these optical excitation characteristics can be varied independently to produce the desired effect.

"Nanophotonics opens up new possibilities in the area of ultrafast magnetism," said the study's co-author, Alexander Chernov, who heads the Magnetic Heterostructures and Spintronics Lab at MIPT. "The creation of practical applications will depend on being able to go beyond the submicrometer scale, increasing operation speed and the capacity for multitasking. We have shown a way to overcome these limitations by nanostructuring a magnetic material. We have successfully localized light in a spot a few tens of nanometers across and effectively excited standing spin waves of various orders. This type of spin wave enables devices operating at high frequencies, up to the terahertz range."

The paper experimentally demonstrates improved launch efficiency and the ability to control spin dynamics under optical excitation by short laser pulses in a specially designed nanopatterned film of bismuth iron garnet. It opens up new prospects for magnetic data processing and quantum computing based on coherent spin oscillations.

Reference: "All-Dielectric Nanophotonics Enables Tunable Excitation of the Exchange Spin Waves" by Alexander I. Chernov, Mikhail A. Kozhaev, Daria O. Ignatyeva, Evgeniy N. Beginin, Alexandr V. Sadovnikov, Andrey A. Voronov, Dolendra Karki, Miguel Levy and Vladimir I. Belotelov, 9 June 2020, Nano Letters. DOI: 10.1021/acs.nanolett.0c01528

The study was supported by the Russian Ministry of Science and Higher Education.

Study: How the automotive industry will benefit from quantum computing – eeNews Europe

After companies such as IBM, with its Q System One, and D-Wave Technologies made headlines in recent years with supposedly usable quantum computers, various companies in the automotive value chain have taken a closer look at this technology; the promises made by manufacturers were too seductive to ignore. According to those pledges, quantum computers are ideal for solving certain problems that the best scientists have long puzzled over, such as route optimisation, fuel cell optimisation, and the durability of materials.

According to the McKinsey study, some of these early users have already achieved a certain success. Volkswagen, for example, has teamed up with D-Wave to develop a traffic management system that optimises the routes of buses in urban traffic. The automotive supplier Bosch has invested $21 million in the start-up company Zapata Computing (Cambridge, Massachusetts).

However, reluctance still far outweighs commitment to this innovative computing technology, write the authors of the McKinsey study. The novelty of the technology and the still very narrow market have so far discouraged many companies from engaging intensively in quantum computing. It will take another five to ten years before the technology becomes established for the long term. By then, quantum computing must have overcome several hurdles: quantum supremacy must be achieved; the practical benefit must be proven beyond doubt; application software must be available to solve concrete problems; and, above all, a quantum Turing machine must be available. The latter means a universally applicable quantum architecture with quantum memory and conventional main memory (RAM). Such a machine, as described by the experts at McKinsey, will be able to work with the number of qubits required by users and execute arbitrary algorithms. It will be available in one to two decades, the study says.

Quantum Computer Developed by Chinese Physicists May Achieve Quantum Supremacy One Million Times Greater than Sycamore from Google – Crowdfund Insider

The team at Quantum Resistant Ledger (QRL), a post-quantum value store and decentralized communication layer, notes that there's recently been what appears to be another quantum leap in the field of quantum computing.

The QRL team states:

"The team at @ustc potentially achieved quantum supremacy one million times greater than the record currently held by Sycamore. Things can advance quickly and unexpectedly, and $QRL is ready for it, today."

As reported by the SCMP, China now claims that it has taken a quantum leap with a machine that may be able to process information a million times faster than Google's latest quantum computer, called Sycamore.

Physicist Pan Jianwei and his research team may have achieved quantum supremacy; however, the result still requires further verification. Jianwei's team is backed by the Chinese government.

He works as a physicist at the University of Science and Technology of China, where he announced, during a September 5, 2020, lecture at Westlake University in Hangzhou, that a newly developed machine achieved quantum supremacy one million times greater than what Google's Sycamore can process.

Sycamore was able to complete certain calculations in approximately 200 seconds, as demonstrated in 2019. The same calculation would take around 10,000 years to complete on even the world's fastest classical computer.

However, Jianwei has pointed out that the results are still preliminary at this point. He also mentioned that there's no 100% guarantee until further verification.

Fintech service providers and distributed ledger technology (DLT)-based application developers have been working on solutions that could be quantum-resistant, meaning that they're creating software that could resist potential attacks from quantum computers (if and when they become available).

As covered in August 2020, major South Korean bank Daegu Bank and SK Telecom, the nation's largest wireless carrier, are working on 5G-enabled quantum cryptography technology.

Spanish financial giant BBVA continues to work on quantum computing data projects. It's also collaborating with German fintech Rubean to trial a contactless payments solution. In July 2020, BBVA shared the results of quantum computing proofs of concept for improving currency arbitrage and portfolio management.

Also in July, Standard Chartered announced a new collaboration with Universities Space Research Association on quantum computing.

Ethereum (ETH), the world's largest blockchain-based platform for building decentralized applications, might not even have quantum resistance on its roadmap, according to the QRL team.

Quantum computers might completely shatter the current internet security systems protecting Bitcoin (BTC), digital payments, and IoT devices, according to a May 2020 report.

Trending News: Quantum Computing Market Overview and Forecast Report 2020-2026 – Top players: D-Wave Systems, 1QB Information Technologies, QxBranch…

The latest Quantum Computing market report estimates the opportunities and current market scenario, providing insights and updates about the corresponding segments involved in the global Quantum Computing market for the forecast period of 2020-2026. The report provides a detailed assessment of key market dynamics and comprehensive information about the structure of the Quantum Computing industry. This market study contains exclusive insights into how the global Quantum Computing market is predicted to grow during the forecast period.

The primary objective of the Quantum Computing market report is to provide insights regarding opportunities in the market that are supporting the transformation of global businesses associated with Quantum Computing. This report also provides an estimation of the Quantum Computing market size and corresponding revenue forecasts carried out in terms of US$. It also offers actionable insights based on future trends in the Quantum Computing market. Furthermore, new and emerging players in the global Quantum Computing market can make use of the information presented in the study for effective business decisions, which will provide momentum to their businesses as well as to the global Quantum Computing market.

Get Exclusive Sample copy on Quantum Computing Market is available at https://inforgrowth.com/sample-request/2396295/quantum-computing-market

The study is relevant for manufacturers, suppliers, distributors, and investors in the Quantum Computing market. All stakeholders in the Quantum Computing market, as well as industry experts, researchers, journalists, and business analysts, can draw on the information and data represented in the report.

Quantum Computing Market 2020-2026: Segmentation

The Quantum Computing market report covers major market players like

Quantum Computing Market is segmented as below:

By Product Type:

Breakup by Application:

Get Chance of 20% Extra Discount, If your Company is Listed in Above Key Players List;https://inforgrowth.com/discount/2396295/quantum-computing-market

Impact of COVID-19: The Quantum Computing market report analyses the impact of coronavirus (COVID-19) on the Quantum Computing industry. Since the COVID-19 outbreak in December 2019, the disease has spread to more than 180 countries around the globe, with the World Health Organization declaring it a public health emergency. The global impacts of the coronavirus disease 2019 (COVID-19) are already starting to be felt and will significantly affect the Quantum Computing market in 2020.

The outbreak of COVID-19 has affected many aspects of life: flight cancellations, travel bans and quarantines, restaurant closures, restrictions on indoor events, emergencies declared in many countries, massive slowing of the supply chain, stock market unpredictability, falling business confidence, growing panic among the population, and uncertainty about the future.

COVID-19 can affect the global economy in three main ways: by directly affecting production and demand, by creating supply chain and market disruption, and by its financial impact on firms and financial markets.

Get the Sample ToC and understand the COVID19 impact and be smart in redefining business strategies. https://inforgrowth.com/CovidImpact-Request/2396295/quantum-computing-market

Global Quantum Computing Market Report Answers the Following Queries:

To know about the global trends impacting the future of market research, contact at: https://inforgrowth.com/enquiry/2396295/quantum-computing-market

Key Questions Answered in this Report:

What is the market size of the Quantum Computing industry? This report covers the historical market size of the industry (2013-2019) and forecasts for 2020 and the next five years. Market size includes the total revenues of companies.

What is the outlook for the Quantum Computing industry? This report has over a dozen market forecasts (2020 and the next five years) for the industry, including total sales, the number of companies, attractive investment opportunities, operating expenses, and others.

What industry analysis/data exists for the Quantum Computing industry? This report covers key segments and sub-segments; key drivers, restraints, opportunities, and challenges in the market; and how they are expected to impact the Quantum Computing industry. Take a look at the table of contents below to see the scope of analysis and data on the industry.

How many companies are in the Quantum Computing industry? This report analyzes the historical and forecasted number of companies and locations in the industry, and breaks them down by company size over time. The report also ranks companies against their competitors with respect to revenue, profit, operational efficiency, cost competitiveness, and market capitalization.

What are the financial metrics for the industry? This report covers many financial metrics for the industry, including profitability, the market value chain, and key trends impacting every node, with reference to company growth, revenue, return on sales, etc.

What are the most important benchmarks for the Quantum Computing industry? Some of the most important benchmarks for the industry include sales growth, productivity (revenue), operating expense breakdown, span of control, and organizational make-up, all of which you'll find in this market report.

FOR ALL YOUR RESEARCH NEEDS, REACH OUT TO US AT: Address: 6400 Village Pkwy, Suite 104, Dublin, CA 94568, USA. Contact Name: Rohan S. Email: [emailprotected]. Phone: US: +1-909-329-2808; UK: +44 (203) 743 1898

Assistant Professor in Computer Science job with Indiana University | 286449 – The Chronicle of Higher Education

The Luddy School of Informatics, Computing, and Engineering at Indiana University (IU) Bloomington invites applications for a tenure-track assistant professor position in Computer Science to begin in Fall 2021. We are particularly interested in candidates with research interests in formal models of computation, algorithms, information theory, and machine learning with connection to quantum computation, quantum simulation, or quantum information science. The successful candidate will also be a Quantum Computing and Information Science Faculty Fellow, supported in part for the first three years by an NSF-funded program that aims to grow academic research capacity in the computing and information science fields to support advances in quantum computing and/or communication over the long term. For additional information about the NSF award, please visit: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1955027&HistoricalAwards=false. The position allows the faculty member to collaborate actively with colleagues from a variety of outside disciplines, including the departments of physics, chemistry, mathematics, and intelligent systems engineering, under the umbrella of the Indiana University funded "quantum science and engineering center" (IU-QSEc). We seek candidates prepared to contribute to our commitment to diversity and inclusion in higher education, especially those with experience in teaching or working with diverse student populations. Duties will include research, teaching multi-level courses both online and in person, participating in course design and assessment, and service to the School. Applicants should have a demonstrable potential for excellence in research and teaching and a PhD in Computer Science or a related field expected before August 2021. Candidates should review application requirements, learn more about the Luddy School, and apply online at: https://indiana.peopleadmin.com/postings/9841. For full consideration, submit the online application by December 1, 2020.
Applications will be considered until the positions are filled. Questions may be sent to sabry@indiana.edu. Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment without regard to age, ethnicity, color, race, religion, sex, sexual orientation, gender identity or expression, genetic information, marital status, national origin, disability status, or protected veteran status.

How to Set Up Your Clinical Space Efficiently: A Complete Guide

Did you know that the look of your clinic can help determine the connections you have with your patients? A healthy, comfortable clinical space is one where patients can feel relaxed and at ease. You can learn how to positively influence everyone who moves through the clinic with quick, affordable design methods that smooth workstreams, enhance patient safety, and improve team and patient connections. For more plans and inexpensive designs for smooth workstreams, you can turn to contractor guides. They are easy to follow and offer thorough guidance on successful building, the power of a positive clinical space, the doctor-patient relationship, and how to create an eco-friendly environment for patients.

As a doctor, you work with individuals to treat present problems and prevent future disorders. Preventive care can begin as soon as patients enter the clinic. What do they see and experience? How do your personnel navigate the same space? How do patients move through the space during their visit, and how do they feel as they find their way? Contractor guides can help answer these questions.

Addressing these questions with accessible design choices can reduce patient stress and enrich team culture. A free online module from the AMA's STEPS Forward collection demonstrates how.

Use these five steps to make the most of your clinic's space:

Develop teamwork stations that enhance interactions

Well-designed workstations and components can improve efficiency and strengthen team culture.

  • Place exam rooms near the team's work space to reduce the distance traveled between tasks and improve exam-room visibility.

  • Create opportunities for the team to interact and nurture a more collegial environment. Glass walls make it possible for teammates to see each other while preserving privacy and reducing noise.

Place furnishing to encourage patient engagement.

The arrangement, designs, and types of desks, exam tables, and chairs can work together to promote successful interactions and eye contact.

  • When a patient can sit in a chair to consult with the doctor, rather than spending the full visit on the exam table, they are more likely to feel positive about the visit.
  • Mount monitors on the walls on a movable arm, or use laptops, so that your team is free to change position to face the patient.

Add positive distractions to alleviate patient anxiety.

Patients read your practice's environment to collect clues about the standard of care they'll receive. It can affect their faith in the clinic and their overall experience.

  • Sitting in a waiting room can be stressful: patients may feel anxious or frustrated based on the waiting-room experience. Positive distractions, such as window views of natural settings, divert focus from worries and create a constructive mood.
  • Video can ease the stress of waiting. Adding art depicting landscapes with high visual depth, healthy foliage, and warm weather, or positive interactions between people, reduces stress.

Reconfigure rooms to feel spacious and welcoming

There is no need to tear down walls or build new rooms to make a space appear far roomier. Simple rearranging can make a small space feel open and more comfortable.

 

  • Brighten up a consultation space with added lighting, or soften harsh overhead light. Place exam tables at an angle to free up wall area for more seating.
  • Using light, warm-colored paint on the walls can amplify the positive effect of the art you chose in the third step.

Connect with patients while incorporating technology

Typing in or reviewing a patient's electronic health record can take away from personal interaction and frustrate patients and physicians alike.

  • Make eye contact and share the screen with your patients to positively influence their engagement and adherence to treatment plans. Semicircular desks and large monitors can help you maintain face-to-face contact while involving patients in their own information.
  • Try implementing a team documentation approach, where a nurse, physician assistant, or scribe helps with record-keeping, allowing the doctor to provide more comprehensive care to patients.

Physicians talk about changing their practice space.

"We had been inconveniencing our patients and generating unneeded work," explained Morris Gagliardi, MD, associate clinical director of Gouverneur Health in New York.

"Thinking about better wayfinding for patients, and grouping services together at the practice, revealed great opportunities for us to deliver an even better, patient-centered experience," he explained.

Vermont family doctor Michael Toedt, MD, has found that adjoining team rooms with open work spaces function best. He chose not to have any private offices in their new center.

"As a doctor, I'm not running around to locate staff members. I'm able to coordinate care," Dr. Toedt explained. "I don't need to worry about the patient not following up with a behavioral health professional or dietitian, because we provide a warm handoff in real time."

This artist used machine learning to create realistic portraits of Roman emperors – The World

Some people have spent their quarantine downtime baking sourdough bread. Others experiment with tie-dye. But others, namely Toronto-based artist Daniel Voshart, have created painstaking portraits of all 54 Roman emperors of the Principate period, which spanned from 27 BC to 285 AD.

The portraits help people visualize what the Roman emperors would have looked like when they were alive.

Included are Voshart's best artistic guesses at the faces of emperors Augustus, Nero, Caligula, Marcus Aurelius, and Claudius, among others. They don't look particularly heroic or epic; rather, they look like regular people, with craggy foreheads, receding hairlines, and bags under their eyes.

To make the portraits, Voshart used a design software called Artbreeder, which relies on a kind of artificial intelligence called generative adversarial networks (GANs).

Voshart starts by feeding the GANs hundreds of images of the emperors collected from ancient sculpted busts, coins, and statues. Then he gets a composite image, which he tweaks in Photoshop. To choose characteristics such as hair color and eye color, Voshart researches the emperors' backgrounds and lineages.

"It was a bit of a challenge," he says. "About a quarter of the project was doing research, trying to figure out if there's something written about their appearance."

He also needed to find good images to feed the GANs.

"Another quarter of the research was finding the bust, finding when it was carved, because a lot of these busts are recarvings or carved hundreds of years later," he says.

In a statement posted on Medium, Voshart writes: "My goal was not to romanticize emperors or make them seem heroic. In choosing busts/sculptures, my approach was to favor the bust that was made when the emperor was alive. Otherwise, I favored the bust made with the greatest craftsmanship and where the emperor was stereotypically uglier, my pet theory being that artists were likely trying to flatter their subjects."

Related: Battle of the bums: Museums compete over best artistic behinds

Voshart is not a Rome expert. His background is in architecture and design, and by day he works in the art department of the TV show "Star Trek: Discovery," where he designs virtual reality walkthroughs of the sets before they're built.

But when the coronavirus pandemic hit, Voshart was furloughed. He used the extra time on his hands to learn how to use the Artbreeder software. The idea for the Roman emperor project came from a Reddit thread where people were posting realistic-looking images they'd created on Artbreeder using photos of Roman busts. Voshart gave it a try and went into exacting detail with his research and design process, doing multiple iterations of the images.

Voshart says he made some mistakes along the way. For example, Voshart initially based his portrait of Caligula, a notoriously sadistic emperor, on a beautifully preserved bust in the Metropolitan Museum of Art. But the bust was too perfect-looking, Voshart says.

"Multiple people told me he was disfigured, and another bust was more accurate," he says.

So, for the second iteration of the portrait, Voshart favored a different bust where one eye was lower than the other.

"People have been telling me my first depiction of Caligula was hot," he says. "Now, no one's telling me that."

Voshart says people who see his portraits on Twitter and Reddit often approach them like they'd approach Tinder profiles.

"I get maybe a few too many comments, like such-and-such is hot. But a lot of these emperors are such awful people!" Voshart says.

Voshart keeps a list on his computer of all the funny comparisons people have made to present-day celebrities and public figures.

"I've heard Nero looks like a football player. Augustus looks like Daniel Craig ... my early depiction of Marcus Aurelius looks like the Dude from 'The Big Lebowski.'"

But the No. 1 comment? "Augustus looks like Putin."

Related: UNESCO says scammers are using its logo to defraud art collectors

No one knows for sure whether Augustus actually looked like Vladimir Putin in real life. Voshart says his portraits are speculative.

"It's definitely an artistic interpretation," he says. "I'm sure if you time-traveled, you'd be very angry at me."

Demonstration Of What-If Tool For Machine Learning Model Investigation – Analytics India Magazine

The machine learning era has reached the stage of interpretability, where developing models and making predictions is simply not enough anymore. To make a powerful impact and get good results from the data, it is important to investigate and probe the dataset and the models. A good model investigation involves digging deep into the model to find insights and inconsistencies. This task usually involves writing a lot of custom functions, but tools like the What-If Tool make probing easy and save programmers time and effort.

In this article we will learn about:

The What-If Tool (WIT) is a visualization tool designed to interactively probe machine learning models. WIT allows users to understand machine learning models such as classifiers, regressors, and deep neural networks by providing methods to evaluate, analyse, and compare them. It is user-friendly and can be used easily not only by developers but also by researchers and non-programmers.

WIT was developed by Google under the People+AI research (PAIR) program. It is open-source and brings together researchers across Google to study and redesign the ways people interact with AI systems.

This tool provides multiple features and advantages for users to investigate the model.

Some of the features of using this are:

WIT can be used with a Google Colab notebook or Jupyter notebook. It can also be used with Tensorflow Board.

Let us take a sample dataset to understand the different features of WIT. I will choose the forest fire dataset, available for download on Kaggle. You can click here to download the dataset. The goal is to predict the area affected by forest fires given the temperature, month, amount of rain, etc.

I will implement this tool on Google Colaboratory. Before we load the dataset and perform the processing, we will first install WIT. To install this tool, use:

!pip install witwidget

Once we have split the data, we can convert the categorical columns month and day to numeric labels using a label encoder.
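A minimal sketch of this step, assuming the data has been loaded into a pandas DataFrame; the inline sample values below are made up for illustration, so the snippet is self-contained:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Tiny stand-in for the Kaggle forest fire data (illustrative values only).
df = pd.DataFrame({
    "month": ["mar", "oct", "aug", "mar", "sep", "aug"],
    "day":   ["fri", "tue", "sat", "sun", "fri", "mon"],
    "temp":  [8.2, 18.0, 22.1, 11.4, 20.3, 24.8],
    "rain":  [0.0, 0.0, 0.2, 0.0, 0.0, 0.0],
    "area":  [0.0, 0.9, 4.6, 0.0, 12.2, 0.4],
})

X = df.drop(columns=["area"])
y = df["area"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)
X_train, X_test = X_train.copy(), X_test.copy()

# Encode the categorical columns as integer labels.
for col in ["month", "day"]:
    enc = LabelEncoder()
    enc.fit(df[col])  # fit on all observed categories
    X_train[col] = enc.transform(X_train[col])
    X_test[col] = enc.transform(X_test[col])
```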

Now we can build our model. I will use the sklearn ensemble module and implement a gradient boosting regression model.
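A sketch of the model-building step with sklearn's GradientBoostingRegressor; the stand-in data and the hyperparameter values here are illustrative assumptions, not the ones used in the article:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in training data shaped like the encoded forest fire features.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 30, size=(200, 4))            # e.g. temp, rain, month, day
y_train = 0.5 * X_train[:, 0] + rng.normal(0, 1, 200)  # toy "area" target

model = GradientBoostingRegressor(
    n_estimators=200,   # number of boosting stages
    max_depth=3,        # depth of each regression tree
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
)
model.fit(X_train, y_train)
```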

Now that the model is trained, we will write a prediction function, since the widget needs one.
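For regression, the widget expects a function that takes a list of examples and returns one number per example. A sketch of such a wrapper, with a throwaway model trained on made-up data so the snippet is self-contained:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Train a throwaway model so the wrapper below has something to call.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 4))
y = X[:, 0] + rng.normal(0, 0.1, 50)
model = GradientBoostingRegressor().fit(X, y)

def custom_predict(examples):
    # WIT hands this function a list of feature lists; return a plain
    # list of floats, one prediction per example.
    return model.predict(np.asarray(examples, dtype=float)).tolist()
```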

Next, we will write the code to call the widget.

This opens an interactive widget with two panels.

To the left, there is a panel for selecting multiple techniques to perform on the data and to the right is the data points.

As you can see on the right panel we have options to select features in the dataset along X-axis and Y-axis. I will set these values and check the graphs.

Here I have set FFMC along the X-axis and area as the target. Keep in mind that these points are displayed after the regression is performed.

Let us now explore each of the options provided to us.

You can select a random data point and highlight it. You can also change the values of the data point and observe how the predictions change dynamically and immediately.

As you can see, changing the values changes the predicted outcomes. You can change multiple values and experiment with the model behaviour.

Another way to understand the behaviour of a model is to use counterfactuals. Counterfactuals are slight changes to an input that cause the model to flip its decision.
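To make the idea concrete, here is a toy, hand-written example (not WIT code): a hypothetical threshold classifier and the small feature change that flips its decision.

```python
def classify(temp, rain):
    # A made-up rule: predict "high fire risk" (1) when it is hot and dry.
    return 1 if temp > 25 and rain < 0.5 else 0

original = {"temp": 27.0, "rain": 0.1}
assert classify(**original) == 1  # classified as high risk

# A slight change to one feature flips the decision: a counterfactual.
counterfactual = dict(original, rain=0.6)
assert classify(**counterfactual) == 0
```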

By clicking on the slide button shown below we can identify the counterfactual which gets highlighted in green.

This plot shows the effects that the features have on the trained machine learning model.

As shown below, we can see the inference of all the features with the target value.

This tab allows us to look at overall model performance. You can evaluate the model's performance with respect to one feature or several. There are multiple options available for analysing performance.

I have selected two features FFMC and temp against the area to understand performance using mean error.

If multiple trained models are loaded, their performance can be compared here.

The features tab is used to get statistics for each feature in the dataset. It displays the data in the form of histograms or quantile charts.

The tab also enables us to look into the distribution of values for each feature in the dataset.

It also highlights the features that are most non-uniform in comparison to the other features in the dataset.

Identifying non-uniformity is a good way to reduce bias in the model.

WIT is a very useful tool for analyzing model performance. The ability to inspect models in a simple, no-code environment is a great help, especially from a business perspective.

It also gives insight into factors beyond training, such as why and how the model was created and how well the dataset fits the model.


More:
Demonstration Of What-If Tool For Machine Learning Model Investigation - Analytics India Magazine

Machine Learning Chips Market Dynamics Analysis to Grow at Cagr with Major Companies and Forecast 2026 – The Scarlet

Machine Learning Chips Market 2018: Global Industry Insights by Global Players, Regional Segmentation, Growth, Applications, Major Drivers, Value and Foreseen till 2024

The recently published research report sheds light on critical aspects of the global Machine Learning Chips market, such as the vendor landscape, competitive strategies, and market drivers and challenges, along with a regional analysis. The report helps readers draw suitable conclusions and clearly understand the current and future scenario and trends of the global Machine Learning Chips market. The research study serves as a compilation of useful guidelines for players to understand and define their strategies more efficiently in order to stay ahead of their competitors. The report profiles leading companies of the global Machine Learning Chips market, along with emerging new ventures that are creating an impact on the global market with their latest innovations and technologies.

Request Sample Report @ https://www.marketresearchhub.com/enquiry.php?type=S&repid=2632983&source=atm

The recently published study includes information on the key segmentation of the global Machine Learning Chips market on the basis of type/product, application, and geography (country/region). Each of the segments included in the report is studied in relation to different factors such as market size, market share, value, growth rate, and other quantitative information.

The competitive analysis included in the global Machine Learning Chips market study allows readers to understand the differences between players and how they operate amongst themselves on a global scale. The research study gives deep insight into the current and future trends of the market, along with opportunities for new players in the process of entering the global Machine Learning Chips market. Market dynamics such as market drivers and market restraints are explained thoroughly in the most detailed and accessible manner possible. Companies can also find several recommendations to improve their business on a global scale.

Readers of the Machine Learning Chips Market report can also extract several key insights, such as the market size of various products and applications along with their market share and growth rate. The report also includes forecast data for the next five years, historical data for the past five years, and the market share of several key segments.

Make An Enquiry About This Report @ https://www.marketresearchhub.com/enquiry.php?type=E&repid=2632983&source=atm

Global Machine Learning Chips Market by Companies:

The company profile section of the report offers great insights such as market revenue and market share of global Machine Learning Chips market. Key companies listed in the report are:

Market Segment Analysis

The research report includes specific segments by Type and by Application. Each type provides information about production during the forecast period of 2015 to 2026. The application segment also provides consumption during the forecast period of 2015 to 2026. Understanding the segments helps in identifying the importance of the different factors that aid market growth.

Segment by Type: Neuromorphic Chip, Graphics Processing Unit (GPU) Chip, Flash Based Chip, Field Programmable Gate Array (FPGA) Chip, Other

Segment by Application: Robotics Industry, Consumer Electronics, Automotive, Healthcare, Other

Global Machine Learning Chips Market: Regional Analysis

The report offers an in-depth assessment of the growth and other aspects of the Machine Learning Chips market in important regions, including the U.S., Canada, Germany, France, the U.K., Italy, Russia, China, Japan, South Korea, Taiwan, Southeast Asia, Mexico, and Brazil. Key regions covered in the report are North America, Europe, Asia-Pacific, and Latin America. The report has been curated after observing and studying various factors that determine regional growth, such as the economic, environmental, social, technological, and political status of the particular region. Analysts have studied data on revenue, production, and manufacturers for each region. This section analyses region-wise revenue and volume for the forecast period of 2015 to 2026. These analyses will help the reader understand the potential worth of investment in a particular region.

Global Machine Learning Chips Market: Competitive Landscape

This section of the report identifies various key manufacturers in the market. It helps the reader understand the strategies and collaborations that players are focusing on to combat competition in the market. The comprehensive report provides a significant microscopic look at the market. The reader can identify the footprints of the manufacturers through the global revenue of manufacturers, the global prices of manufacturers, and production by manufacturers during the forecast period of 2015 to 2019.

The major players in the market include Wave Computing, Graphcore, Google Inc, Intel Corporation, IBM Corporation, Nvidia Corporation, Qualcomm, Taiwan Semiconductor Manufacturing, etc.

Global Machine Learning Chips Market by Geography:

You can Buy This Report from Here @ https://www.marketresearchhub.com/checkout?rep_id=2632983&licType=S&source=atm

Some of the Major Highlights of TOC covers in Machine Learning Chips Market Report:

Chapter 1: Methodology & Scope of Machine Learning Chips Market

Chapter 2: Executive Summary of Machine Learning Chips Market

Chapter 3: Machine Learning Chips Industry Insights

Chapter 4: Machine Learning Chips Market, By Region

Chapter 5: Company Profile

And Continue

See original here:
Machine Learning Chips Market Dynamics Analysis to Grow at Cagr with Major Companies and Forecast 2026 - The Scarlet

Machine Learning & Big Data Analytics Education Market Size is Thriving Worldwide 2020 | Growth and Profit Analysis, Forecast by 2027 – The Daily…

Fort Collins, Colorado: The Global Machine Learning & Big Data Analytics Education Market research report offers insightful information on the Global Machine Learning & Big Data Analytics Education market for the base year 2019 and the forecast period 2020 to 2027. Market value, market share, market size, and sales have been estimated based on product types, application prospects, and regional industry segmentation. Important industry segments have been analyzed for the global and regional markets.

The effects of the COVID-19 pandemic have been observed across all sectors of all industries. The economic landscape has changed dynamically due to the crisis, and a change in requirements and trends has also been observed. The report studies the impact of COVID-19 on the market and analyzes key changes in trends and growth patterns. It also includes an estimate of the current and future impact of COVID-19 on overall industry growth.

Get a sample of the report @ https://reportsglobe.com/download-sample/?rid=64357

The report has a complete analysis of the Global Machine Learning & Big Data Analytics Education Market on a global as well as regional level. The forecast has been presented in terms of value and price for the 8 year period from 2020 to 2027. The report provides an in-depth study of market drivers and restraints on a global level, and provides an impact analysis of these market drivers and restraints on the relationship of supply and demand for the Global Machine Learning & Big Data Analytics Education Market throughout the forecast period.

The report provides an in-depth analysis of the major market players along with their business overview, expansion plans, and strategies. The main actors examined in the report are:

The Global Machine Learning & Big Data Analytics Education Market Report offers a deeper understanding and a comprehensive overview of the Global Machine Learning & Big Data Analytics Education division. Porter's Five Forces Analysis and SWOT Analysis have been included in the report to provide insightful data on the competitive landscape. The study also covers the market analysis and provides an in-depth analysis of the application segment based on market size, growth rate, and trends.

Request a discount on the report @ https://reportsglobe.com/ask-for-discount/?rid=64357

The research report is an investigative study that provides a conclusive overview of the Global Machine Learning & Big Data Analytics Education business division through in-depth market segmentation into key applications, types, and regions. These segments are analyzed based on current, emerging and future trends. Regional segmentation provides current and demand estimates for the Global Machine Learning & Big Data Analytics Education industry in key regions in North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.

Global Machine Learning & Big Data Analytics Education Market Segmentation:

In market segmentation by type of Global Machine Learning & Big Data Analytics Education, the report covers-

In market segmentation by application of the Global Machine Learning & Big Data Analytics Education, the report covers the following uses-

Request customization of the report @ https://reportsglobe.com/need-customization/?rid=64357

Overview of the table of contents of the report:

To learn more about the report, visit @ https://reportsglobe.com/product/global-machine-learning-big-data-analytics-education-assessment/

Thank you for reading our report. To learn more about report details or for customization information, please contact us. Our team will ensure that the report is customized according to your requirements.

How Reports Globe is different than other Market Research Providers

The inception of Reports Globe has been backed by providing clients with a holistic view of market conditions and future possibilities/opportunities to reap maximum profits out of their businesses and assist in decision making. Our team of in-house analysts and consultants works tirelessly to understand your needs and suggest the best possible solutions to fulfill your research requirements.

Our team at Reports Globe follows a rigorous process of data validation, which allows us to publish reports from publishers with minimum or no deviations. Reports Globe collects, segregates, and publishes more than 500 reports annually that cater to products and services across numerous domains.

Contact us:

Mr. Mark Willams

Account Manager

US: +1-970-672-0390

Email:[emailprotected]

Web:reportsglobe.com

Follow this link:
Machine Learning & Big Data Analytics Education Market Size is Thriving Worldwide 2020 | Growth and Profit Analysis, Forecast by 2027 - The Daily...

Improving The Use Of Social Media For Disaster Management – Texas A&M University Today

The algorithm could be used to quickly identify social media posts related to a disaster.

Getty Images

There has been a significant increase in the use of social media to share updates, seek help and report emergencies during a disaster. Algorithms keeping track of social media posts that signal the occurrence of natural disasters must be swift so that relief operations can be mobilized immediately.

A team of researchers led by Ruihong Huang, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, has developed a novel weakly supervised approach that can train machine learning algorithms quickly to recognize tweets related to disasters.

"Because of the sudden nature of disasters, there's not much time available to build an event recognition system," Huang said. "Our goal is to be able to detect life-threatening events using individual social media messages and recognize similar events in the affected areas."

Text on social media platforms like Twitter can be categorized using standard algorithms called classifiers. A classifier separates data into labeled classes or categories, similar to how spam filters in email services scan incoming emails and classify them as either spam or not spam based on prior knowledge of spam messages.

Most classifiers are an integral part of machine learning algorithms that make predictions based on carefully labeled sets of data. In the past, machine learning algorithms have been used for event detection based on tweets or a burst of words within tweets. To ensure a reliable classifier for the machine learning algorithms, human annotators have to manually label large amounts of data instances one by one, which usually takes several days, sometimes even weeks or months.

The researchers also found that it is essentially impossible to find a keyword that does not have more than one meaning on social media, depending on the context of the tweet. For example, if the word "dead" is used as a keyword, it will pull in tweets on a variety of topics, such as a phone battery being dead or the television series "The Walking Dead."

"We have to be able to know which tweets that contain the predetermined keywords are relevant to the disaster and separate them from the tweets that contain the correct keywords but are not relevant," Huang said.

To build more reliable labeled datasets, the researchers first used an automatic clustering algorithm to sort the keyword-matched tweets into small groups. Next, a domain expert looked at the context of the tweets in each group to decide whether the group was relevant to the disaster. The labeled tweets were then used to train the classifier to recognize relevant tweets.
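The labeling pipeline described here can be sketched with off-the-shelf tools; the tweets, keyword, and cluster count below are invented for illustration and are not the researchers' actual system or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented keyword-matched tweets: two use the keyword 'dead' in
# unrelated senses, two refer to an actual disaster.
tweets = ["my phone battery is dead again",
          "watching the walking dead tonight",
          "two people dead after the flood downtown",
          "flood waters rising, one person reported dead"]

# Cluster the tweets so an expert can label whole groups at once
# instead of annotating every message individually.
X = TfidfVectorizer(stop_words='english').fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels.tolist())  # tweets in the same cluster share an id
```

Labeling a handful of clusters instead of thousands of tweets is what reduces annotation from roughly 50 person-hours to one or two.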

Using data gathered from the most impacted time periods for Hurricane Harvey and Hurricane Florence, the researchers found that their data labeling method and overall weakly-supervised system took one to two person-hours instead of the 50 person-hours that were required to go through thousands of carefully annotated tweets using the supervised approach.

Despite the classifier's overall good performance, the researchers also observed that the system still missed several tweets that were relevant but used a different vocabulary than the predetermined keywords.

"Users can be very creative when discussing a particular type of event using the predefined keywords, so the classifier would have to be able to handle those types of tweets," Huang said. "There's room to further improve the system's coverage."

In the future, the researchers will look to explore how to extract information about the users location so first responders will know exactly where to dispatch their resources.

Other contributors to this research include Wenlin Yao, a doctoral student supervised by Huang from the computer science and engineering department; Ali Mostafavi and Cheng Zhang from the Zachry Department of Civil and Environmental Engineering; and Shiva Saravanan, former intern of the Natural Language Processing Lab at Texas A&M.

The researchers described their findings in the proceedings of the Association for the Advancement of Artificial Intelligence's 34th Conference on Artificial Intelligence.

This work is supported by funds from the National Science Foundation.

Originally posted here:
Improving The Use Of Social Media For Disaster Management - Texas A&M University Today

Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and…

Aims

Prediction models for outcomes in atrial fibrillation (AF) are used to guide treatment. While regression models have been the analytic standard for prediction modelling, machine learning (ML) has been promoted as a potentially superior methodology. We compared the performance of ML and regression models in predicting outcomes in AF patients.

The Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF) and the Global Anticoagulant Registry in the FIELD (GARFIELD-AF) are population-based registries that together include 74,792 AF patients. Models were generated from potential predictors using stepwise logistic regression (STEP), random forests (RF), gradient boosting (GB), and two neural networks (NNs). Discriminatory power was highest for death [STEP area under the curve (AUC) = 0.80 in ORBIT-AF, 0.75 in GARFIELD-AF] and lowest for stroke in all models (STEP AUC = 0.67 in ORBIT-AF, 0.66 in GARFIELD-AF). The discriminatory power of the ML models was similar to or lower than that of the STEP models for most outcomes. The GB model had a nominally higher AUC than STEP for death in GARFIELD-AF (0.76 vs. 0.75), and the two performed similarly in ORBIT-AF. The multilayer NN had the lowest discriminatory power for all outcomes. The calibration of the STEP models was more closely aligned with the observed events for all outcomes. In the cross-registry models, the discriminatory power of the ML models was similar to or lower than that of the STEP models in most cases.

When developed from two large, community-based AF registries, ML techniques did not improve prediction modelling of death, major bleeding, or stroke.

Read the rest here:
Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and...