IonQ CEO Peter Chapman on how quantum computing will change the future of AI – VentureBeat

Businesses eager to embrace cutting-edge technology are exploring quantum computing, which depends on qubits to perform computations that would be much more difficult, or simply not feasible, on classical computers. The ultimate goals are quantum advantage, the inflection point when quantum computers begin to solve useful problems, and quantum supremacy, when a quantum computer can solve a problem that classical computers practically cannot. While those are a long way off (if they can even be achieved), the potential is massive. Applications include everything from cryptography and optimization to machine learning and materials science.

As quantum computing startup IonQ has described it, quantum computing is a marathon, not a sprint. We had the pleasure of interviewing IonQ CEO Peter Chapman last month to discuss a variety of topics. Among other questions, we asked Chapman about quantum computing's future impact on AI and ML.

The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human being can.

"AI in the Strong AI sense, that I have more of an opinion just because I have more experience in that personally," Chapman told VentureBeat. "And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It's actually one of the reasons I joined IonQ. It's because I think that does have some sort of application."

In a follow-up email, Chapman expanded on his thoughts. "For decades it was believed that the brain's computational capacity lay in the neuron as a minimal unit," he wrote. "Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer."

"However, since then, I believe we now know, the brain is not an electrical computer, but an electrochemical one," he added. "Sadly, today's computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore's law, they won't next year or even after a million years."

Chapman then quoted Richard Feynman, who famously said: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

"Similarly, it's likely Strong AI isn't classical; it's quantum mechanical as well," Chapman said.

One of IonQ's competitors, D-Wave, argues that quantum computing and machine learning are extremely well matched. Chapman is still on the fence.

"I haven't spent enough time to really understand it," he admitted. "There clearly is a lot of people who think that ML and quantum have an overlap. Certainly, if you think of 85% of all ML produces a decision tree. And the depth of that decision tree could easily be optimized with a quantum computer. Clearly there's lots of people that think that generation of the decision tree could be optimized with a quantum computer. Honestly, I don't know if that's the case or not. I think it's still a little early for machine learning, but there clearly is so many people that are working on it. It's hard to imagine it doesn't have application."

Again, in an email later, Chapman followed up. "ML has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Generally, Universal Quantum Computers excel at these kinds of problems."
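
Chapman's framing, learning as the minimization of a loss function over a training set, is easy to see in classical code. The tiny sketch below is purely illustrative (it is not an IonQ example): it fits a line to four data points by gradient descent on the mean squared error; a quantum optimizer would target the same kind of objective.

training_set = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # (x, y) examples

def loss(w, b):
    # Mean squared error of the current parameters over the training set.
    return sum((w * x + b - y) ** 2 for x, y in training_set) / len(training_set)

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of the loss with respect to w and b, averaged over the examples.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in training_set) / len(training_set)
    grad_b = sum(2 * (w * x + b - y) for x, y in training_set) / len(training_set)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted w={w:.2f}, b={b:.2f}, loss={loss(w, b):.4f}")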

Chapman listed three improvements in ML that quantum computing will likely allow:

Whether it's Strong AI or ML, IonQ isn't particularly interested in pursuing either itself. The company leaves that part to its customers and future partners.

"There's so much to be done in quantum," Chapman said. "From education at one end all the way to the quantum computer itself. I think some of our competitors have taken on lots of the entire problem set. We at IonQ are just focused on producing the world's best quantum computer for them. We think that's a large enough task for a little company like us to handle."

"So, for the moment we're kind of happy to let everyone else work on different problems," he added. "We just think producing the world's best quantum computer is a large enough task. We just don't have extra bandwidth or resources to put into working on machine learning algorithms. And luckily, there's lots of other companies that think that there's applications there. We'll partner with them in the sense that we'll provide the hardware that their algorithms will run on. But we're not in the ML business per se."


Archer to work alongside IBM in progressing quantum computing – ZDNet

Archer CEO Dr Mohammad Choucair and quantum technology manager Dr. Martin Fuechsle

Archer Materials has announced a new agreement with IBM which it hopes will advance quantum computing and progress work towards solutions for the greater adoption of the technology.

Joining the IBM Q Network, Archer will gain access to IBM's quantum computing expertise and resources, seeing the Sydney-based company use IBM's open-source software framework, Qiskit.


Archer is the first Australian company developing a quantum computing processor and hardware to join the IBM Q Network. The IBM Q Network provides access to the company's experts, developer tools, and cloud-based quantum systems through IBM Q Cloud.

"We are the first Australian company building a quantum chip to join into the global IBM Q Network as an ecosystem partner, a group of the very best organisations at the forefront of quantum computing." Archer CEO Dr Mohammad Choucair said.

"Ultimately, we want Australian businesses and consumers to be one of the first beneficiaries of this exciting technology, and now that we are collaborating with IBM, it greatly increases our chances of success".

Archer is advancing the commercial readiness of its 12CQ qubit processor chip technology towards a minimum viable product.

"We look forward to working with IBM and members of the network to address the most fundamental challenges to the wide-scale adoption of quantum computing, using our potentially complementary technologies as starting points," Choucair added.

In November, Archer said it was continuing to inch towards its goal of creating a room temperature quantum computer, announcing at the time it had assembled a three-qubit array.

The company said it had placed three isolated qubits on a silicon wafer, with metallic control electrodes used for measurement. Archer has previously told ZDNet it conducts measurements by doing magnetic field sweeps at microwave frequencies.

"The arrangement of the qubits was repeatable and reproducible, thereby allowing Archer to quickly build and test working prototypes of quantum information processing devices incorporating a number of qubits; individual qubits; or a combination of both, which is necessary to meet Archer's aim of building a chip for a practical quantum computer," the company said.

In August, the company said it had assembled its first room-temperature quantum bit.

Archer is building chip prototypes at the Research and Prototype Foundry out of the University of Sydney's AU$150 million Sydney Nanoscience Hub.

QUANTUM COMPUTING INC. : Entry into a Material Definitive Agreement, Creation of a Direct Financial Obligation or an Obligation under an Off-Balance…

Item 1.01 Entry into a Material Definitive Agreement.

On May 6, 2020, Quantum Computing Inc. (the "Company") executed an unsecured promissory note (the "Note") with BB&T/Truist Bank N.A. to evidence a loan to the Company in the amount of $218,371 (the "Loan") under the Paycheck Protection Program (the "PPP") established under the Coronavirus Aid, Relief, and Economic Security Act (the "CARES Act"), administered by the U.S. Small Business Administration (the "SBA").

In accordance with the requirements of the CARES Act, the Company expects to use the proceeds from the Loan exclusively for qualified expenses under the PPP, including payroll costs, mortgage interest, rent and utility costs. Interest will accrue on the outstanding balance of the Note at a rate of 1.00% per annum. The Company expects to apply for forgiveness of up to the entire amount of the Note. Notwithstanding the Company's eligibility to apply for forgiveness, no assurance can be given that the Company will obtain forgiveness of all or any portion of the amounts due under the Note. The amount of forgiveness under the Note is calculated in accordance with the requirements of the PPP, including the provisions of Section 1106 of the CARES Act, subject to limitations and ongoing rule-making by the SBA and the maintenance of employee and compensation levels.
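
As a rough illustration of the stated terms (assuming simple annual accrual; the Note's actual accrual and forgiveness mechanics are governed by the PPP rules), the interest amounts work out as follows:

principal = 218_371.00      # Loan amount stated in the filing
annual_rate = 0.01          # 1.00% per annum
interest_per_year = principal * annual_rate
print(f"Interest per year: ${interest_per_year:,.2f}")        # about $2,183.71
print(f"Interest per day:  ${interest_per_year / 365:,.2f}")  # about $5.98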

Subject to any forgiveness granted under the PPP, the Note is scheduled to mature two years from the date of first disbursement under the Note. The Note may be prepaid at any time prior to maturity with no prepayment penalties. The Note provides for customary events of default, including, among others, those relating to failure to make payments, bankruptcy, and significant changes in ownership. The occurrence of an event of default may result in the required immediate repayment of all amounts outstanding and/or filing suit and obtaining judgment against the Company. The Company's obligations under the Note are not secured by any collateral or personal guarantees.

Item 2.03 Creation of a Direct Financial Obligation or an Obligation under an Off-Balance Sheet Arrangement of a Registrant.

The discussion of the Loan set forth in Item 1.01 of this Current Report on Form 8-K is incorporated in this Item 2.03 by reference.

Item 9.01. Financial Statements and Exhibits.

Edgar Online, source Glimpses


Could quantum machine learning hold the key to treating COVID-19? – Tech Wire Asia

Sundar Pichai, CEO of Alphabet, with one of Google's quantum computers. Source: AFP PHOTO / GOOGLE/HANDOUT

Scientific researchers are hard at work around the planet, feverishly crunching data using the world's most powerful supercomputers in the hopes of a speedier breakthrough in finding a vaccine for the novel coronavirus.

Researchers at Penn State University think that they have hit upon a solution that could greatly accelerate the process of discovering a COVID-19 treatment, employing an innovative hybrid branch of research known as quantum machine learning.

When it comes to a computer science-driven approach to identifying a cure, most methodologies harness machine learning to screen different compounds one at a time to see if they might bond with the virus's main protease, or protein.

This process is arduous and time-consuming, despite the fact that the most powerful computers were actually condensing years (maybe decades) of drug testing into less than two years' time. "Discovering any new drug that can cure a disease is like finding a needle in a haystack," said lead researcher Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering at Penn State.

It is also incredibly expensive. Ghosh says the current pipeline for discovering new drugs can take between five and ten years from the concept stage to being released to the market, and could cost billions in the process.

"High-performance computing such as supercomputers and artificial intelligence (AI) can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates," he elaborated.

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly."

Quantum machine learning is an emerging field that combines elements of machine learning with quantum physics. Ghosh and his doctoral students had in the past developed a toolset for solving a specific set of problems known as combinatorial optimization problems, using quantum computing.

Drug discovery computation aligns with combinatorial optimization problems, allowing the researchers to tap the same toolset in the hopes of speeding up the process of discovering a cure, in a more cost-effective fashion.
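
The combinatorial optimization framing the researchers use can be made concrete with a toy example. The sketch below is a generic QUBO-style formulation solved by brute force, not the Penn State toolset: it selects a subset of hypothetical compounds that maximizes a made-up affinity score while penalizing pairs flagged as redundant; a quantum annealer or a QAOA circuit would target the same objective function.

from itertools import product

affinity = [0.9, 0.4, 0.7, 0.6]                  # score for selecting each compound
similarity_penalty = {(0, 2): 0.8, (1, 3): 0.5}  # pairs we prefer not to pick together

def energy(bits):
    # Lower energy means a better selection (QUBO convention).
    e = -sum(a * b for a, b in zip(affinity, bits))
    for (i, j), p in similarity_penalty.items():
        e += p * bits[i] * bits[j]
    return e

best = min(product([0, 1], repeat=len(affinity)), key=energy)
print("best selection:", best, "energy:", round(energy(best), 3))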

"Artificial intelligence for drug discovery is a very new area," Ghosh said. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit in resolving this grave challenge."

Joe Devanesan | @thecrystalcrown

Joe's interest in tech began when, as a child, he first saw footage of the Apollo space missions. He still holds out hope to either see the first man on the moon, or Jetsons-style flying cars in his lifetime.


Cover Corona Outbreak: Quantum Computing Market 2020 Demand, Leading Players, Emerging Technologies, Applications, Development History Segmentation by…

The Latest Research Report on Quantum Computing Market size | Industry Segment by Applications, by Type, Regional Outlook, Market Demand, Latest Trends, Quantum Computing Industry Share & Revenue by Manufacturers, Company Profiles, Growth Forecasts 2025. Analyzes current market size and upcoming 5 years growth of this industry.

According to the report, the Quantum Computing market is projected to register high demand during the forecast period, with increasing demand from major end-use industries and a growing inclination towards the use of renewable energy.

Click here to get sample of the premium report: https://industrystatsreport.com/Request/Sample?ResearchPostId=309&RequestType=Sample

A combined cycle power plant is a heat engine assembly that works in conjunction from the same heat source, converting it into mechanical energy that generally drives electrical generators in turn. The concept is that the operating fluid temperature in the system is still high enough after completing its cycle that a second subsequent heat engine extracts energy from the heat generated by the first engine.

With an emphasis on both organic and inorganic growth strategies, major companies have made several primary developments. These companies include:

o Hewlett Packard
o Alibaba Quantum Computing Laboratory
o Booz Allen Hamilton Inc.
o QxBranch
o SPARROW QUANTUM A/S
o SeeQC
o Quantum Circuits, Inc.
o Anyon Systems Inc
o Rigetti Computing
o Toshiba Research Europe Ltd.
o Others

Key Factors Impacting Market Growth:

o Increasing demand due to growing inclination towards the use of renewable energy.

o Strict government regulations directing various industries towards reducing their carbon footprint

o New developments in the clean energy sector, prompting companies to expand the horizon for CCGT market globally

Market Segmentation:

Reports include the following segmentation:

By Vertical: Aerospace & Defense, BFSI, Energy & Power, Healthcare, Information Technology & Telecommunication, Transportation, Others

By Technology: Superconducting loops technology, Trapped ion technology, Topological qubits technology

By Offering: Systems, Consulting Solutions

By Component: Hardware, Software, Services

By Industry: Defense, Banking & Finance, Energy & Power, Chemicals, Healthcare & Pharmaceuticals

By Application: Optimization, Machine Learning, Simulation, Others

By Region:
o North America: U.S., Canada, Mexico
o Europe: UK, France, Germany, Russia, Rest of Europe
o Asia-Pacific: China, South Korea, India, Japan, Rest of Asia-Pacific
o LAMEA: Latin America, Middle East, Africa

Request Customization of the premium report: https://industrystatsreport.com/Request/Sample?ResearchPostId=309&RequestType=Methodology

Customization:

We provide customization of the study to meet specific requirements:

o By Segment

o By Sub-segment

o By Region/Country

Regional segmentation and analysis to understand growth patterns:

The market has been segmented in major regions to understand the global development and demand patterns of this market.

The global outlook for the Quantum Computing market provides detailed information for regions including North America, Western Europe, Eastern Europe, Asia Pacific, the Middle East, and the Rest of the World. During the forecast period, North America and Western Europe are projected as the main regions for this sector. As developed regions, their energy & power sectors are important for the operations of different industries in the area.

This is one of the key factors regulating Quantum Computing market growth in those regions. Some of the major countries covered in this region include the USA, Germany, United Kingdom, France, Italy, Canada, etc.

During the forecast period, the Asia Pacific is expected to be one of the fastest-growing regions for the Quantum Computing market. Some of the fastest-growing economies and increasing energy & power demand to cater for high population & industries are expected to drive demand in this area. During the forecast period, China and India are expected to record large demand. During the forecast period, the Middle East which includes the UAE, Saudi Arabia, Iran, Qatar, and others promises high market potential. In terms of market demand during the forecast period, the rest of the world including South America and Africa are developing regions.

This report provides:

1) An overview of the global market for Quantum Computing market and related technologies.

2) Analysis of global market trends, with yearly estimates and compound annual growth rate (CAGR) projections.

3) Identification of new market opportunities and targeted consumer marketing strategies for global Quantum Computing market.

4) Analysis of R&D and demand for new technologies and new applications

5) Extensive company profiles of key players in the industry.

The researchers have studied the market in-depth and have developed important segments such as product type, application, and region. Each and every segment and its sub-segments are analyzed based on their market share, growth prospects, and CAGR. Each market segment offers in-depth, both qualitative and quantitative information on market outlook.

Full Access to the Report: https://industrystatsreport.com/ICT-and-Media/Quantum-Computing-Market/Summary

About Us:

We publish market research reports & business insights produced by highly qualified and experienced industry analysts. Our research reports are available in a wide range of industry verticals including aviation, food & beverage, healthcare, ICT, construction, chemicals and a lot more. Brand Essence Market Research reports are best suited for senior executives, business development managers, marketing managers, consultants, CEOs, CIOs, COOs, directors, governments, agencies, organizations and Ph.D. students.

Contact Us:

https://brandessenceresearch.biz/

Brandessence Market Research & Consulting Pvt ltd.,

Kemp House, 152-160 City Road, London EC1V 2NX

+44-2038074155

[emailprotected]


The pandemic and national security go hand-in-hand for Nebraska’s Ben Sasse – KETV Omaha

What Ben Sasse sees out of China from his seat on the Senate Intelligence Committee scares him, and he's convinced Americans aren't taking the threat seriously.

"China is the biggest long-term threat," the Nebraska Republican said during a KETV NewsWatch 7 interview from Capitol Hill. "There isn't enough urgency or agreement about that problem."

Over the past few years, the Chinese government has flexed its growing military and economic might with countries across the Pacific Ocean. It's made substantial investments in 5G technology, and one of its biggest tech manufacturers, Huawei, supplies those networks around the globe.

Huawei has drawn scrutiny from U.S. national security experts for its ties to the Chinese government.

Sasse explained 5G technology allows more advanced uses for artificial intelligence, and ultimately quantum computing.

Once deployed, effective quantum algorithms can enable machine learning. In the hands of an adversary, the development could allow computers to break codes with little effort, revealing U.S. intelligence assets.

"The Chinese communist party cannot beat us in the long-term tech race, and right now they are closing on us really fast," Sasse said.

In the video above, watch Sen. Ben Sasse, R-Neb., question President Trump's nominee for Director of National Intelligence on Chinese government initiatives during a Senate hearing May 5.

The national security implications also play out in pandemics, Sasse said, citing years of drills at the Pentagon.

"Most of those exercises said a pandemic would be the biggest problem," he said.

The pandemic finally arrived in the form of COVID-19, and the U.S. government was left scrambling to contain it.

Sasse says it's time to get serious about investing in health preparedness. The self-described "small government guy" wants more serious federal investment in vaccine accelerator programs and a "Shark Tank" for therapeutics.

"We need to have more red team, blue team, green team exercises inside the public health space, the vaccine development space," Sasse said.

While public health experts try to contain the virus, it has already wreaked havoc across the world's biggest economy.

As coronavirus closures crippled the U.S., Congress spent more than $3 trillion to rescue American businesses and the American people. More than 33 million Americans lost their jobs since the pandemic began.

"The average small business has about 16 days of cash on hand, and this thing has been going on for a couple of months," Sasse said. "So there's a lot more that needs to be done."

The American people would seem to agree.

Three quarters of Americans in swing states want sustained, direct payments during the coronavirus pandemic, according to a poll published Wednesday by CNBC.

But before he signs off on more relief, Sasse wants to see what's working and what's not.

"Congress and the executive branch have spent way too much of the next generation's money without knowing whether it's going to be effective," he said. "So we need to start evaluating what we've already started to do before people start advocating to spread more money out of helicopters."

Sasse also wants to see COVID-19 legal shields for health care workers and small businesses.

He told KETV NewsWatch 7 he's open to spending money on data-driven job re-training programs that can get Nebraskans back to work.

While those efforts are short-term efforts to rescue the economy, Sasse said the U.S. can't afford to forget the long-term challenges.

Investing in robust efforts to shore up global health preparedness is critical, he said. Especially when he considers the China threat.

"They want to dominate the globe from a national security standpoint," said Sasse. "And viruses are one of many tools they might consider using."


Physicists Criticize Stephen Wolfram’s ‘Theory of Everything’ – Scientific American

Stephen Wolfram blames himself for not changing the face of physics sooner.

"I do fault myself for not having done this 20 years ago," the physicist turned software entrepreneur says. "To be fair, I also fault some people in the physics community for trying to prevent it happening 20 years ago. They were successful." Back in 2002, after years of labor, Wolfram self-published A New Kind of Science, a 1,200-page magnum opus detailing the general idea that nature runs on ultrasimple computational rules. The book was an instant best seller and received glowing reviews: the New York Times called it "a first-class intellectual thrill." But Wolfram's arguments found few converts among scientists. Their work carried on, and he went back to running his software company Wolfram Research. And that is where things remained until last month, when, accompanied by breathless press coverage (and a 448-page preprint paper), Wolfram announced a possible path to the fundamental theory of physics based on his unconventional ideas. Once again, physicists are unconvinced, in no small part, they say, because existing theories do a better job than his model.

At its heart, Wolfram's new approach is a computational picture of the cosmos, one where the fundamental rules that the universe obeys resemble lines of computer code. This code acts on a graph, a network of points with connections between them, that grows and changes as the digital logic of the code clicks forward, one step at a time. According to Wolfram, this graph is the fundamental stuff of the universe. From the humble beginning of a small graph and a short set of rules, fabulously complex structures can rapidly appear. "Even when the underlying rules for a system are extremely simple, the behavior of the system as a whole can be essentially arbitrarily rich and complex," he wrote in a blog post summarizing the idea. "And this got me thinking: Could the universe work this way?" Wolfram and his collaborator Jonathan Gorard, a physics Ph.D. candidate at the University of Cambridge and a consultant at Wolfram Research, found that this kind of model could reproduce some of the aspects of quantum theory and Einstein's general theory of relativity, the two fundamental pillars of modern physics.
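
The flavor of such rule-based graph rewriting is easy to demonstrate, though the toy rule below is purely illustrative and is not Wolfram and Gorard's actual model. Starting from a single edge, one simple rewrite applied over and over makes the graph grow quickly:

def step(edges, next_node):
    # Replace every directed edge (a, b) with two edges routed through a new node.
    new_edges = []
    for a, b in edges:
        new_edges += [(a, next_node), (next_node, b)]
        next_node += 1
    return new_edges, next_node

edges, next_node = [(0, 1)], 2   # start from a single edge between two nodes
for generation in range(5):
    edges, next_node = step(edges, next_node)
    print(f"generation {generation + 1}: {len(edges)} edges")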

But the ability of Wolfram's model to incorporate currently accepted physics is not necessarily that impressive. "It's this sort of infinitely flexible philosophy where, regardless of what anyone said was true about physics, they could then assert, 'Oh, yeah, you could graft something like that onto our model,'" says Scott Aaronson, a quantum computer scientist at the University of Texas at Austin.

When asked about such criticisms, Gorard agrees, to a point. "We're just kind of fitting things," he says. "But we're only doing that so we can actually go and do a systematized search for specific rules that fit those of our universe."

Wolfram and Gorard have not yet found any computational rules meeting those requirements, however. And without those rules, they cannot make any definite, concrete new predictions that could be experimentally tested. Indeed, according to critics, Wolfram's model has yet to even reproduce the most basic quantitative predictions of conventional physics. "The experimental predictions of [quantum physics and general relativity] have been confirmed to many decimal places, in some cases, to a precision of one part in [10 billion]," says Daniel Harlow, a physicist at the Massachusetts Institute of Technology. "So far I see no indication that this could be done using the simple kinds of [computational rules] advocated by Wolfram. The successes he claims are, at best, qualitative." Further, even that qualitative success is limited: There are crucial features of modern physics missing from the model. And the parts of physics that it can qualitatively reproduce are mostly there because Wolfram and his colleagues put them in to begin with. This arrangement is akin to announcing, "If we suppose that a rabbit was coming out of the hat, then remarkably, this rabbit would be coming out of the hat," Aaronson says, and then "[going] on and on about how remarkable it is."

Unsurprisingly, Wolfram disagrees. He claims that his model has replicated most of fundamental physics already. "From an extremely simple model, we're able to reproduce special relativity, general relativity and the core results of quantum mechanics," he says, "which, of course, are what have led to so many precise quantitative predictions of physics over the past century."

Even Wolfram's critics acknowledge he is right about at least one thing: it is genuinely interesting that simple computational rules can lead to such complex phenomena. But, they hasten to add, that is hardly an original discovery. The idea goes back long before Wolfram, Harlow says. He cites the work of computing pioneers Alan Turing in the 1930s and John von Neumann in the 1950s, as well as that of mathematician John Conway in the early 1970s. (Conway, a professor at Princeton University, died of COVID-19 last month.) To the contrary, Wolfram insists that he was the first to discover that virtually boundless complexity could arise from simple rules, in the 1980s. "John von Neumann, he absolutely didn't see this," Wolfram says. "John Conway, same thing."

Born in London in 1959, Wolfram was a child prodigy who studied at Eton College and the University of Oxford before earning a Ph.D. in theoretical physics at the California Institute of Technology in 1979, at the age of 20. After his Ph.D., Caltech promptly hired Wolfram to work alongside his mentors, including physicist Richard Feynman. "I don't know of any others in this field that have the wide range of understanding of Dr. Wolfram," Feynman wrote in a letter recommending him for the first ever round of MacArthur "genius" grants in 1981. "He seems to have worked on everything and has some original or careful judgement on any topic." Wolfram won the grant, at age 21, making him among the youngest ever to receive the award, and became a faculty member at Caltech and then a long-term member at the Institute for Advanced Study in Princeton, N.J. While at the latter, he became interested in simple computational systems and then moved to the University of Illinois in 1986 to start a research center to study the emergence of complex phenomena. In 1987 he founded Wolfram Research, and shortly after he left academia altogether. The software company's flagship product, Mathematica, is a powerful and impressive piece of mathematics software that has sold millions of copies and is today nearly ubiquitous in physics and mathematics departments worldwide.

Then, in the 1990s, Wolfram decided to go back to scientific research, but without the support and input provided by a traditional research environment. By his own account, he sequestered himself for about a decade, putting together what would eventually become A New Kind of Science with the assistance of a small army of his employees.

Upon the release of the book, the media was ensorcelled by the romantic image of the heroic outsider returning from the wilderness to single-handedly change all of science. Wired dubbed Wolfram "the man who cracked the code to everything" on its cover. "Wolfram has earned some bragging rights," the New York Times proclaimed. "No one has contributed more seminally to this new way of thinking about the world." Yet then, as now, researchers largely ignored and derided his work. "There's a tradition of scientists approaching senility to come up with grand, improbable theories," the late physicist Freeman Dyson told Newsweek back in 2002. "Wolfram is unusual in that he's doing this in his 40s."

Wolfram's story is exactly the sort that many people want to hear, because it matches the familiar beats of dramatic tales from science history that they already know: the lone genius (usually white and male), laboring in obscurity and rejected by the establishment, emerges from isolation, triumphantly grasping a piece of the Truth. But that is rarely, if ever, how scientific discovery actually unfolds. There are examples from the history of science that superficially fit this image: Think of Albert Einstein toiling away on relativity as an obscure Swiss patent clerk at the turn of the 20th century. Or, for a more recent example, consider mathematician Andrew Wiles working in his attic for years to prove Fermat's last theorem before finally announcing his success in 1995. But portraying those discoveries as the work of a solo genius, romantic as it is, belies the real working process of science. Science is a group effort. Einstein was in close contact with researchers of his day, and Wiles's work followed a path laid out by other mathematicians just a few years before he got started. Both of them were active, regular participants in the wider scientific community. And even so, they remain exceptions to the rule. Most major scientific breakthroughs are far more collaborative: quantum physics, for example, was developed slowly over a quarter-century by dozens of physicists around the world.

"I think the popular notion that physicists are all in search of the eureka moment in which they will discover the theory of everything is an unfortunate one," says Katie Mack, a cosmologist at North Carolina State University. "We do want to find better, more complete theories. But the way we go about that is to test and refine our models, look for inconsistencies and incrementally work our way toward better, more complete models."

Most scientists would readily tell you that their discipline is, and always has been, a collaborative, communal process. Nobody can revolutionize a scientific field without first getting the critical appraisal and eventual validation of their peers. Today this requirement is performed through peer review, a process Wolfram's critics say he has circumvented with his announcement. "Certainly there's no reason that Wolfram and his colleagues should be able to bypass formal peer review," Mack says. "And they definitely have a much better chance of getting useful feedback from the physics community if they publish their results in a format we actually have the tools to deal with."

Mack is not alone in her concerns. "It's hard to expect physicists to comb through hundreds of pages of a new theory out of the blue, with no buildup in the form of papers, seminars and conference presentations," says Sean Carroll, a physicist at Caltech. "Personally, I feel it would be more effective to write short papers addressing specific problems with this kind of approach rather than proclaiming a breakthrough without much vetting."

So why did Wolfram announce his ideas this way? Why not go the traditional route? "I don't really believe in anonymous peer review," he says. "I think it's corrupt. It's all a giant story of somewhat corrupt gaming, I would say. I think it's sort of inevitable that happens with these very large systems. It's a pity."

So what are Wolfram's goals? He says he wants the attention and feedback of the physics community. But his unconventional approach, soliciting public comments on an exceedingly long paper, almost ensures it shall remain obscure. Wolfram says he wants physicists' respect. The ones consulted for this story said gaining it would require him to recognize and engage with the prior work of others in the scientific community.

And when provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. "I'm disappointed by the naivete of the questions that you're communicating," he grumbles. "I deserve better."


1QBit and Canadian health care providers team up to empower front-line clinicians with Health Canada’s first approved AI tool for radiology in the…

Health and technology providers have joined forces to deploy XrAI, a machine learning tool that acts as a co-pilot for clinicians to increase accuracy in detecting lung abnormalities associated with diseases such as COVID-19 infection, pneumonia, tuberculosis, and lung cancer.

VANCOUVER, May 7, 2020 /CNW/ - 1QBit, a global leader in advanced computing and software development, and its partners representing health authorities from East to West, have received funding from the Digital Technology Supercluster to accelerate the clinical deployment of XrAI, the first radiology AI (artificial intelligence) tool to be certified as a Class III Medical Device by Health Canada.

XrAI (pronounced "X-ray") is a machine learning, clinical-decision support tool that improves the accuracy and consistency of chest X-ray interpretation. This tool supports medical teams by identifying lung abnormalities on chest radiographs within the teams' existing clinical workflow, requiring little to no further training. Its analysis capabilities empower clinicians with this information so that they can more effectively manage patients with COVID-19 infections or other respiratory complications, such as SARS, pneumonia, and tuberculosis.

"As a physician, I recognize that trust is the currency with which health systems operate. So we designed XrAI to act as a trusted co-pilot that helps doctors and nurses on the front lines. The tool identifies a lung abnormality and displays this information in terms of a confidence level. This is intuitive to busy clinicians, as it reflects a familiar way in which a radiologist would share their opinion," said Dr. Deepak Kaura, Chief Medical Officer of 1QBit. "We were so impressed by how quickly the Saskatchewan Health Authority mobilized to conduct the clinical trial for XrAI, which had actually been planned for a later date. Equally impressive was Health Canada, whose team was detail oriented, responding diligently and acting effectively to grant us approval."

XrAI received certification as a Class III Medical Device by Health Canada last month, based on rigorous review and the results of a single-blind, randomized control clinical trial. 1QBit trained the algorithm on 250,000 cases taken from more than 500,000 anonymized radiograph images from Canadian health organizations, and open and subscription-based datasets. The data covered a broad spectrum of diseases, across geographically and demographically diverse populations, while the tool's features were designed with input from a broad cross section of physicians and other health care professionals.

"Many physicians recognize the value of machine learning applied to our field. However, we are not willing to sacrifice the scientific rigour upon which medicine and our profession has been built. XrAI is one of the first AI tools that I have seen that has been built and validated with a randomized control trial across multiple physician groups," said Dr. Paul Babyn, Physician Executive of the Saskatchewan Health Authority. "The trust that 1QBit's tool has garnered as a result of its rigorous approach is what I believe has led to such a prompt and positive response from the medical community."

The ability to get XrAI into the hands of clinicians is being accelerated by funding from the Digital Technology Supercluster through its COVID-19 Program. This award is contributing to the implementation costs for partnering health care authorities to deploy the software across their clinical systems, which span hospitals and clinics in British Columbia, Saskatchewan, and Ontario. Microsoft is also providing support for 1QBit as they implement XrAI with their partners.

"XrAI is yet another example of the Supercluster's 'all hands-on deck' approach to overcoming the challenges presented by COVID-19. By collaborating closely with health authorities, 1QBit has allowed us to expedite this critical technology to get into the hands of practitioners across the country and contribute to what we expect may be a turning point in the speed at which we identify abnormalities and treat those infected with COVID-19," said Sue Paish, CEO of the Digital Technology Supercluster.

Early on, 1QBit engaged health authorities, front-line health care workers, and technology providers to ensure the roll-out of its technology would be led by physicians. 1QBit's partners include the Saskatchewan Health Authority, the Fraser Health Authority, the First Nations Health Authority, Trillium Health Partners, the Vancouver Coastal Health Authority, the University of British Columbia's Faculty of Medicine as well as The Red Cross. Trans-national implementation of XrAI is now underway and will provide a comprehensive and inclusive elevation of care from West to East, including First Nations, the north, and rural communities, as well as urban centres.

1QBit is continuing to partner with new clinicians and health organizations interested in arming their teams with XrAI to enhance quality of care, and to improve the efficiency of health resources during the current COVID-19 pandemic and beyond.

About 1QBit:

1QBit is a global leader in advanced computing and software development. Founded in 2012, 1QBit builds hardware-agnostic software and partners with companies taking on computationally exhaustive problems in advanced materials, life sciences, energy, and finance. Trusted by Fortune 500 companies and top research institutions internationally, 1QBit is seen as an industry leader in quantum computing, machine learning, software development and hardware optimization. Headquartered in Vancouver, Canada, the company employs over 120 mathematicians, computer scientists, physicists, chemists, software developers, physicians, biomedical experts, and quantum computing specialists. 1QBit develops novel solutions to computational problems along the full stack of classical and quantum computing, from hardware innovations to commercial application development.

About Digital Technology Supercluster:

The Digital Technology Supercluster is led by global companies like Canfor, MDA, Microsoft, Telus, Teck Resources Limited, Mosaic Forest Management, LifeLabs, and Terramera, and tech industry leaders such as D-Wave Systems, Finger Food Advanced Technology Group, and LlamaZOO. Members also include BC's post-secondary institutions, including the Emily Carr University of Art + Design, the British Columbia Institute of Technology, the University of British Columbia, and Simon Fraser University. A full list of members can be found here.

About the COVID-19 Program:

The COVID-19 Program funds projects that contribute to improving the health and safety of Canadians, supporting Canada's ability to address issues created by the COVID-19 pandemic. In addition, these projects will build the expertise and capacity needed to address and anticipate issues that may arise in future health crises. More information can be found here.

SOURCE 1QBit

For further information: For media requests related to 1QBit and XrAI, please contact Amanda Downs at [emailprotected] or at +1 (778) 425-4434; For media requests related to the Digital Technology Supercluster, please contact Elysa Darling at [emailprotected] or at +1 (587) 890-9833.



A Discovery That Long Eluded Physicists: Superconductivity to the Edge – SciTechDaily

Researchers at Princeton have discovered superconducting currents traveling along the outer edges of a superconductor with topological properties, suggesting a route to topological superconductivity that could be useful in future quantum computers. The superconductivity is represented by the black center of the diagram indicating no resistance to the current flow. The jagged pattern indicates the oscillation of the superconductivity which varies with the strength of an applied magnetic field. Credit: Stephan Kim, Princeton University

Princeton researchers detect a supercurrent, a current flowing without energy loss, at the edge of a superconductor with a topological twist.

A discovery that long eluded physicists has been detected in a laboratory at Princeton. A team of physicists detected superconducting currents, the flow of electrons without wasting energy, along the exterior edge of a superconducting material. The finding was published May 1 in the journal Science.

The superconductor that the researchers studied is also a topological semi-metal, a material that comes with its own unusual electronic properties. The finding suggests ways to unlock a new era of topological superconductivity that could have value for quantum computing.

"To our knowledge, this is the first observation of an edge supercurrent in any superconductor," said Nai Phuan Ong, Princeton's Eugene Higgins Professor of Physics and the senior author on the study.

"Our motivating question was, what happens when the interior of the material is not an insulator but a superconductor?" Ong said. "What novel features arise when superconductivity occurs in a topological material?"

Although conventional superconductors already enjoy widespread usage in magnetic resonance imaging (MRI) and long-distance transmission lines, new types of superconductivity could unleash the ability to move beyond the limitations of our familiar technologies.

Researchers at Princeton and elsewhere have been exploring the connections between superconductivity and topological insulators, materials whose non-conformist electronic behaviors were the subject of the 2016 Nobel Prize in Physics for F. Duncan Haldane, Princeton's Sherman Fairchild University Professor of Physics.

Topological insulators are crystals that have an insulating interior and a conducting surface, like a brownie wrapped in tin foil. In conducting materials, electrons can hop from atom to atom, allowing electric current to flow. Insulators are materials in which the electrons are stuck and cannot move. Yet curiously, topological insulators allow the movement of electrons on their surface but not in their interior.

To explore superconductivity in topological materials, the researchers turned to a crystalline material called molybdenum ditelluride, which has topological properties and is also a superconductor once the temperature dips below a frigid 100 milliKelvin, which is -459 degrees Fahrenheit.

"Most of the experiments done so far have involved trying to inject superconductivity into topological materials by putting the one material in close proximity to the other," said Stephan Kim, a graduate student in electrical engineering, who conducted many of the experiments. "What is different about our measurement is we did not inject superconductivity and yet we were able to show the signatures of edge states."

The team first grew crystals in the laboratory and then cooled them down to a temperature where superconductivity occurs. They then applied a weak magnetic field while measuring the current flow through the crystal. They observed that a quantity called the critical current displays oscillations, which appear as a saw-tooth pattern, as the magnetic field is increased.

Both the height of the oscillations and the frequency of the oscillations fit with predictions of how these fluctuations arise from the quantum behavior of electrons confined to the edges of the materials.

"When we finished the data analysis for the first sample, I looked at my computer screen and could not believe my eyes, the oscillations we observed were just so beautiful and yet so mysterious," said Wudi Wang, who as first author led the study and earned his Ph.D. in physics from Princeton in 2019. "It's like a puzzle that started to reveal itself and is waiting to be solved. Later, as we collected more data from different samples, I was surprised at how perfectly the data fit together."

Researchers have long known that superconductivity arises when electrons, which normally move about randomly, bind into twos to form Cooper pairs, which in a sense dance to the same beat. "A rough analogy is a billion couples executing the same tightly scripted dance choreography," Ong said.

The script the electrons are following is called the superconductor's wave function, which may be regarded roughly as a ribbon stretched along the length of the superconducting wire, Ong said. A slight twist of the wave function compels all Cooper pairs in a long wire to move with the same velocity as a superfluid, in other words acting like a single collection rather than like individual particles, that flows without producing heating.

If there are no twists along the ribbon, Ong said, the Cooper pairs are stationary and no current flows. If the researchers expose the superconductor to a weak magnetic field, this adds an additional contribution to the twisting that the researchers call the magnetic flux, which, for very small particles such as electrons, follows the rules of quantum mechanics.

The researchers anticipated that these two contributors to the number of twists, the superfluid velocity and the magnetic flux, work together to maintain the number of twists as an exact integer, a whole number such as 2, 3 or 4 rather than a 3.2 or a 3.7. They predicted that as the magnetic flux increases smoothly, the superfluid velocity would increase in a saw-tooth pattern as the superfluid velocity adjusts to cancel the extra .2 or add .3 to get an exact number of twists.
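
One compact way to state this integer-twist constraint is the standard fluxoid quantization condition from superconductivity textbooks (given here as general background, not as the paper's own notation):

\[
  n\,\Phi_0 \;=\; \Phi \;+\; \frac{m^{*}}{e^{*}} \oint \mathbf{v}_s \cdot d\boldsymbol{\ell},
  \qquad \Phi_0 = \frac{h}{2e},
\]

where n is the integer number of twists (phase windings), \Phi is the applied magnetic flux, \Phi_0 is the superconducting flux quantum, m^{*} and e^{*} are the mass and charge of a Cooper pair, and the loop integral of the superfluid velocity \mathbf{v}_s represents the supercurrent contribution. As \Phi is ramped smoothly, \mathbf{v}_s must periodically jump to keep n an integer, which produces the saw-tooth oscillations seen in the critical current.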

The team measured the superfluid current as they varied the magnetic flux and found that indeed the saw-tooth pattern was visible.

In molybdenum ditelluride and other so-called Weyl semimetals, this Cooper-pairing of electrons in the bulk appears to induce a similar pairing on the edges.

The researchers noted that the reason why the edge supercurrent remains independent of the bulk supercurrent is currently not well understood. Ong compared the electrons moving collectively, also called condensates, to puddles of liquid.

"From classical expectations, one would expect two fluid puddles that are in direct contact to merge into one," Ong said. "Yet the experiment shows that the edge condensates remain distinct from that in the bulk of the crystal."

The research team speculates that the mechanism that keeps the two condensates from mixing is the topological protection inherited from the protected edge states in molybdenum ditelluride. The group hopes to apply the same experimental technique to search for edge supercurrents in other unconventional superconductors.

"There are probably scores of them out there," Ong said.

Reference: "Evidence for an edge supercurrent in the Weyl superconductor MoTe2" by Wudi Wang, Stephan Kim, Minhao Liu, F. A. Cevallos, Robert J. Cava and Nai Phuan Ong, 1 May 2020, Science. DOI: 10.1126/science.aaw9270

Funding: The research was supported by the U.S. Army Research Office (W911NF-16-1-0116). The dilution refrigerator experiments were supported by the U.S. Department of Energy (DE-SC0017863). N.P.O. and R.J.C. acknowledge support from the Gordon and Betty Moore Foundation's Emergent Phenomena in Quantum Systems Initiative through grants GBMF4539 (N.P.O.) and GBMF-4412 (R.J.C.). The growth and characterization of crystals were performed by F.A.C. and R.J.C., with support from the National Science Foundation (NSF MRSEC grant DMR 1420541).


Determined AI makes its machine learning infrastructure free and open source – TechCrunch

Machine learning has quickly gone from niche field to crucial component of innumerable software stacks, but that doesn't mean it's easy. The tools needed to create and manage it are enterprise-grade, and often enterprise-only, but Determined AI aims to make them more accessible than ever by open-sourcing its entire AI infrastructure product.

The company created its Determined Training Platform for developing AI in an organized, reliable way, the kind of thing that large companies have created (and kept) for themselves, the team explained when they raised an $11 million Series A last year.

"Machine learning is going to be a big part of how software is developed going forward. But in order for companies like Google and Amazon to be productive, they had to build all this software infrastructure," said CEO Evan Sparks. "One company we worked for had 70 people building their internal tools for AI. There just aren't that many companies on the planet that can withstand an effort like that."

At smaller companies, ML is being experimented with by small teams using tools intended for academic work and individual research. To scale that up to dozens of engineers developing a real product, there aren't a lot of options.

They're using things like TensorFlow and PyTorch, said Chief Scientist Ameet Talwalkar. A lot of the way that work is done is just conventions: How do the models get trained? Where do I write down which data is best? How do I transform data to a good format? All these are bread and butter tasks. There's tech to do it, but it's really the Wild West. And with the amount of work you have to do to get it set up, there's a reason big tech companies build out these internal infrastructures.

Determined AI, whose founders started out at UC Berkeley's AMPLab (home of Apache Spark), has been developing its platform for a few years, with feedback and validation from some paying customers. Now, they say, it's ready for its open source debut, with an Apache 2.0 license, of course.

We have confidence people can pick it up and use it on their own without a lot of hand-holding, said Sparks.

You can spin up your own self-hosted installation of the platform using local or cloud hardware, but the easiest way to go about it is probably the cloud-managed version, which automatically provisions resources from AWS or wherever you prefer and tears them down when they're no longer needed.

The hope is that the Determined AI platform becomes something of a base layer that lots of small companies can agree on, providing portability to results and standards so you're not starting from scratch at every company or project.

With machine learning development expected to expand by orders of magnitude in the coming years, even a small piece of the pie is worth claiming, but with luck, Determined AI may grow to be the new de facto standard for AI development in small and medium businesses.

You can check out the platform on GitHub or at Determined AI's developer site.

Read the original here:
Determined AI makes its machine learning infrastructure free and open source - TechCrunch

Industrial Asset Optimization: Connecting Machines Directly with Data Scientists – Machine Learning Times – machine learning & data science news -…

By: Terry Miller, Global Digital Strategy and Business Development, Siemens. For more from this author, attend his virtual presentation, Industrial Asset Optimization: Machine-to-Cloud/Edge Analytics, at Predictive Analytics World for Industry 4.0, May 31-June 4, 2020. For industrial firms to realize the benefits promised by embracing Industry 4.0, access to clean, quality asset data must improve. Most of a data scientist's work, in any vertical, involves cleaning and contextualizing data, or data prep. In the industrial segment, this remains true, and is considerably more challenging. Enterprise-wide data ingest platforms tend to yield inefficient, incomplete data for optimizing assets at the application layer. In order to improve this, firms should

Read more:
Industrial Asset Optimization: Connecting Machines Directly with Data Scientists - Machine Learning Times - machine learning & data science news -...

Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA

Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.

Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.

Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases ranging from generating marketplace forecasts and calculating ETAs to automating fraud detection.

Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises are already generating vast amounts of data, but the problem is how to harness and refine this data into predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity(1).

Data problems all too often cause last-mile delivery issues for machine learning projects, said Mike Del Balso, Tecton.ai co-founder and CEO. With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale.

Tecton.ai has assembled a world-class engineering team with deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do it at scale.

Tecton makes the delivery of machine learning data predictable for every company.

The ability to manage data and extract insights from it is catalyzing the next wave of business transformation, said Martin Casado, general partner at Andreessen Horowitz. The Tecton team has been on the forefront of this change, with a long history of machine learning/AI and data at Google, Facebook and Airbnb and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry.

The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day, said Matt Miller, partner at Sequoia. They started Tecton to democratize machine learning across the enterprise. We believe their platform for machine learning will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm, unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade.

Continue reading here:
Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company - insideBIGDATA

How To Verify The Memory Loss Of A Machine Learning Model – Analytics India Magazine

It is a known fact that deep learning models get better with diversity in the data they are fed. For instance, in a healthcare use case, data will be taken from several sources, such as patient records, medical histories, professionals' workflows and insurance providers, to ensure such diversity.

These data points, collected through various interactions with people, are fed into a machine learning model that sits remotely in a data haven, churning out predictions without pause.

However, consider a scenario where one of the providers ceases to offer data to the healthcare project and later requests to delete the provided information. In such a case, does the model remember or forget its learnings from this data?

To explore this, a team from the University of Edinburgh and the Alan Turing Institute considered the scenario in which a model is supposed to have forgotten some data and asked what can be done to verify that it has. In the process, they investigated the challenges and also offered solutions.

The authors of this work write that the initiative is the first of its kind, and that the only prior work that comes close is the Membership Inference Attack (MIA), which also served as an inspiration for this work.

To verify whether a model has forgotten specific data, the authors propose a Kolmogorov-Smirnov (K-S) distance-based method, which is used to infer whether the model was trained with the query dataset.
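
The paper states this procedure as a formal algorithm; as a rough, hypothetical sketch of the general idea only (shadow models compared with a two-sample K-S test; every name such as target_model or query_X is ours, and the published method differs in detail), it might look like this:

```python
# Hedged sketch of a K-S-based "has the model forgotten this data?" check.
# Illustrative only: the paper's actual algorithm differs in detail.
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression

def output_distribution(model, X):
    """Flatten the model's predicted class probabilities into one sample."""
    return model.predict_proba(X).ravel()

def ks_forgetting_check(target_model, query_X, query_y, calib_X, calib_y, test_X):
    # Shadow model trained on the query data (as if the provider's data were still used).
    shadow_query = LogisticRegression(max_iter=1000).fit(query_X, query_y)
    # Shadow model trained only on calibration data (as if the query data had been forgotten).
    shadow_calib = LogisticRegression(max_iter=1000).fit(calib_X, calib_y)

    target_out = output_distribution(target_model, test_X)
    d_query, _ = ks_2samp(target_out, output_distribution(shadow_query, test_X))
    d_calib, _ = ks_2samp(target_out, output_distribution(shadow_calib, test_X))

    # A smaller K-S distance to the query-trained shadow suggests the query data
    # still influences the target model; the reverse suggests it has been forgotten.
    return {"ks_to_query_shadow": d_query, "ks_to_calib_shadow": d_calib}
```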

Using this method, the researchers ran experiments on benchmark datasets such as MNIST, SVHN and CIFAR-10 to verify its effectiveness. The method was later also tested on the ACDC dataset, using the pathology detection component of that challenge.

The MNIST dataset contains 60,000 images of 10 digits with image size 28 × 28. Similar to MNIST, the SVHN dataset has over 600,000 digit images obtained from house numbers in Google Street View images; the image size of SVHN is 32 × 32. Since both datasets are for the task of digit recognition/classification, they were considered to belong to the same domain. CIFAR-10 is used as a dataset to validate the method; it has 60,000 images (size 32 × 32) of 10 object classes, including aeroplane, bird, etc. To train models with the same design, the images of all three datasets are preprocessed to grey-scale and rescaled to 28 × 28.

Using the K-S distance, the authors said, statistics about the output distribution of a target model can be obtained without knowing the weights of the model. Since the model's training data are unknown, new models, called shadow models, were trained with the query dataset and with another calibration dataset.

Then by comparing the K-S values, one can conclude if the training data contains information from the query dataset or not.

Experiments have been done before to check how much ownership one has over data on the internet. One such attempt was made by researchers at Stanford, who investigated the algorithmic principles behind efficient data deletion in machine learning.

They found that for many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. A trade-off between efficiency and privacy arises because algorithms that support efficient deletion need not be private, and algorithms that are private do not have to support efficient deletion.

The aforementioned experiments are an attempt to probe and raise new questions in the never-ending debate about the use of AI and privacy. The objective of these works is to investigate how much authority an individual has over specific data, while also helping expose the vulnerabilities a model may have if certain data is removed.

Check more about this work here.

See the original post here:
How To Verify The Memory Loss Of A Machine Learning Model - Analytics India Magazine

AI, machine learning and automation in cybersecurity: The time is now – GCN.com

INDUSTRY INSIGHT

The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)2, there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce will need to grow by roughly 145% just to meet the current global demand.

The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk list. Given this situation, chief information security officers who are looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.

The overall challenge landscape

Results of our survey, Making Tough Choices: How CISOs Manage Escalating Threats and Limited Resources, show that CISOs currently devote 36% of their budgets to response and 33% to prevention. However, as security needs change, many CISOs are looking to shift budget away from prevention without reducing its effectiveness. An optimal budget would reduce spending on prevention and increase spending on detection and response to 33% and 40% of the security budget, respectively. This shift would give security teams the speed and flexibility they need to react quickly in the face of a threat from cybercriminals who are outpacing agencies' defensive capabilities. When breaches are inevitable, it is important to stop as many as possible at the point of intrusion, but it is even more important to detect and respond to them before they can do serious damage.

One challenge to matching the speed of today's cyberattacks is that CISOs have limited personnel and budget resources. To overcome these obstacles and attain the detection and response speeds necessary for effective cybersecurity, CISOs must take advantage of AI, machine learning and automation. These technologies will help close gaps by correlating threat intelligence and coordinating responses at machine speed. Government agencies will be able to develop a self-defending security system capable of analyzing large volumes of data, detecting threats, reconfiguring devices and responding to threats without human intervention.

The unique challenges

Federal agencies deal with a number of challenges unique to the public sector, including the age and complexity of IT systems as well as the challenges of the government budget cycle. IT teams for government agencies aren't just protecting intellectual property or credit card numbers; they are also tasked with protecting citizens' sensitive data and national security secrets.

Charged with this duty but constrained by limited resources, IT leaders must weigh the risks of cyber threats against the daily demands of keeping networks up and running. This balancing act becomes more difficult as agencies migrate to the cloud, adopt internet-of-things devices and transition to software-defined networks that have no perimeter. These changes mean government networks are expanding their attack surface with no additional -- or even fewer -- defensive resources. It's part of the reason why the Verizon Data Breach Investigations Report found that government agencies were subjected to more security incidents and more breaches than any other sector last year.

To change that dynamic, the typical government set-up of siloed systems must be replaced with a unified platform that can provide wider and more granular network visibility and more rapid and automated response.

How AI and automation can help

The keys to making a unified platform work are AI and automation technologies. Because organizations cannot keep pace with the growing volume of threats through manual detection and response, they need to leverage AI/ML and automation to fill these gaps. AI-driven solutions can learn what normal behavior looks like in order to detect anomalous behavior. For instance, many employees typically access a specific kind of data or only log on at certain times. If an employee's account starts to show activity outside of these normal parameters, an AI/ML-based solution can detect these anomalies and can inspect or quarantine the affected device or user account until it is determined to be safe or mitigating action can be taken.
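
As a toy illustration of that idea (feature names, thresholds and data are entirely hypothetical, not any vendor's product), an unsupervised detector trained on baseline behaviour might look like this:

```python
# Illustrative sketch only: flagging anomalous account activity with an
# unsupervised model trained on "normal" sessions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline behaviour: logons clustered around working hours, modest data volumes (MB).
normal = np.column_stack([
    rng.normal(13, 2, 2000),    # hour of logon
    rng.normal(50, 15, 2000),   # data accessed per session, in MB
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New sessions to score: one typical, one at 3am pulling 900 MB.
sessions = np.array([[14.0, 55.0], [3.0, 900.0]])
labels = detector.predict(sessions)   # +1 = looks normal, -1 = anomalous

for s, lab in zip(sessions, labels):
    action = "quarantine for review" if lab == -1 else "allow"
    print(f"hour={s[0]:.0f}, MB={s[1]:.0f} -> {action}")
```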

If the device is infected with malware or is otherwise acting maliciously, that AI-based tool can also issue automated responses. Making these tactical tasks the responsibility of AI-driven solutions frees security teams to work on more strategic problems, develop threat intelligence or focus on more difficult tasks such as detecting unknown threats.

IT teams at government agencies that want to implement AI and automation must be sure the solution they choose can scale and operate at machine speeds to keep up with the growing complexity and speed of the threat. In selecting a solution, IT managers must take time to ensure solutions have been developed using AI best practices and training techniques and that they are powered by best-in-class threat intelligence, security research and analytics technology. Data should be collected from a variety of nodes -- both globally and within the local IT environment -- to glean the most accurate and actionable information for supporting a security strategy.

Time is of the essence

Government agencies are experiencing more cyberattacks than ever before, at a time when the nation is facing a 40% cybersecurity skills shortage. Time is of the essence in defending a network, but time is what under-resourced and over-tasked government IT teams typically lack. As attacks come more rapidly and adapt to the evolving IT environment and new vulnerabilities, AI/ML and automation are rapidly becoming necessities. Solutions built from the ground up with these technologies will help government CISOs counter and potentially get ahead of today's sophisticated attacks.

About the Author

Jim Richberg is a Fortinet field CISO focused on the U.S. public sector.

More here:
AI, machine learning and automation in cybersecurity: The time is now - GCN.com

Microsoft: This is how to protect your machine-learning applications – TechRepublic

Understanding failures and attacks can help us build safer AI applications.

Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.

The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.

It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.

Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.

Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.

Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.

Attacks on ML systems are failures of the underlying models. They are responding in unexpected, and possibly detrimental ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper talks about two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where there's an unsafe element in the ML model being used that appears correct but delivers bad outcomes.

By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.

The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.

What are these attacks? The first, perturbation attacks, modify queries to change the response to one the attackers desire. That's matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often include important intellectual property, and some attacks like model inversion aim to extract that data. Similarly, a membership inference attack will try to determine whether specific data was in the initial training set. Closely related is the concept of model stealing, using queries to extract the model.
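
To make the first of these concrete, here is a deliberately tiny, generic sketch of a perturbation (FGSM-style) attack against a toy linear classifier; it is our illustration, not code from the paper:

```python
# Minimal, illustrative FGSM-style perturbation against a toy linear classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained model": weights w and bias b, predicting P(class = 1).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.8, -0.3, 0.2])   # a legitimate input the model classifies correctly
y = 1.0                          # its true label

# Gradient of the cross-entropy loss with respect to the *input* x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step in the direction that increases the loss.
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)

print("original prediction :", sigmoid(w @ x + b))      # confident class 1
print("perturbed prediction:", sigmoid(w @ x_adv + b))   # pushed toward class 0
```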

Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.

While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.

It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly they provide an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, and controlling access to inputs and outputs, working with trusted applications and services.

Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.

If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.

Here is the original post:
Microsoft: This is how to protect your machine-learning applications - TechRepublic

Machine-learning is a boon, but it still needs a human hand – Business Day

Advances in computer power, machine-learning and predictive algorithms are creating paradigm shifts in many industries. For example, when an algorithm outperformed six radiologists in reading mammograms and accurately diagnosing breast cancer, this raised questions around the role of machine-learning in medicine and whether it will replace, or enhance, the work being done by doctors.

Similarly, when Google's AI software AlphaGo beat the world's top Go master in what is described as humankind's most complicated board game, The New York Times declared it isn't looking good for humanity when an algorithm can outperform a human in a highly complex task.

Both these examples point to narrow uses of artificial intelligence, specific types of machine-learning that are hugely effective. The medical example illustrates supervised learning, where a computer is programmed to solve a particular problem by looking for patterns. It is given labelled data sets, in this case X-rays with the diagnosis of presence or absence of breast cancer. When given a new X-ray, the computer applies an algorithm based on what it has learnt from all the previous X-rays to make a diagnosis. Unsupervised learning is a sort of self-optimisation where a computer has a set of rules, such as how to play Go, and through playing millions of games learns how to apply these rules and improve.
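
A minimal illustration of the supervised pattern described above, with synthetic stand-in data rather than real X-rays (everything here is illustrative):

```python
# Toy supervised-learning example in the spirit of the X-ray case: the model only
# ever sees (features, label) pairs; the "diagnosis" data below is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # y: 0 = absent, 1 = present
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # learn patterns from labelled cases
print("accuracy on unseen cases:", clf.score(X_test, y_test))       # apply them to a new "X-ray"
```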

What is machine-learning?

Machine-learning is a phenomenal tool. To fully harness its potential it is essential to understand what machine-learning is (and isn't) and to demystify some of the hype and the fear around what it can and can't be used for. We have anthropomorphised computers; we speak about them in terms of intelligence and learning. But in essence, a machine computes; it does not learn. Its algorithms are designed to mimic learning. In essence, these algorithms minimise the errors of a complicated function that maps inputs to outcomes, and we interpret that as solving a problem, but the machine doesn't know what problem it is solving or that it is playing a game. The intelligence rests with the humans who design the algorithms and configure them for specific tasks.

Now, more than ever, we need intelligent and well-educated people who can apply these techniques in the correct context and interpret the results. When an algorithm fails, the consequences can be catastrophic. An obvious example is a fatal accident caused by a self-driving car. We need to build in fault tolerance. Data integrity is also an important issue: what we put in is going to affect what we get out. Education is critical in making sure we get these elements right. And, of course, there are broader ethical issues to consider surrounding data collection, such as what data can be used, where it is sourced, and whether different data sets can be combined.

Machine-learning is particularly valuable in the financial sector. Many applications are already in use in banking, insurance and asset management. Financial institutions use pattern recognition successfully for fraud detection. It is also valuable for looking at trends in data sets and finding patterns that humans may not be able to identify directly, for example in profiling people who apply for credit. There are even robo-advisory applications for individual asset allocation. In financial modelling, machine-learning can be applied to pricing, calibration and hedging.

For example, valuing derivatives contracts depends on many complex factors and variables, such as interest rates, exchange rates and equity values, all of which fluctuate all the time. Financial mathematicians use models for this, but they are complicated and not easy to solve in closed form. We may be able to build and apply a model to one contract, but banks have hundreds of contracts, and risk management and regulatory frameworks need to be updated all the time. Machine-learning, specifically deep learning and neural nets, provides a powerful shortcut. We can use classical numerical methods to produce financial models and then use them as labelled data sets, as in the X-ray example. An algorithm can take this input to generate the output for multiple contracts.
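
A rough sketch of that workflow, using the Black-Scholes formula as a stand-in for the classical numerical method and an off-the-shelf neural network as the fast approximator (parameter ranges are arbitrary and purely illustrative):

```python
# Sketch: generate labelled prices with a classical model, then train a neural
# network to approximate it. Black-Scholes stands in for "classical numerical methods".
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def black_scholes_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(50, 150, n)        # spot price
K = rng.uniform(50, 150, n)        # strike
T = rng.uniform(0.1, 2.0, n)       # time to maturity (years)
r = rng.uniform(0.0, 0.05, n)      # risk-free rate
sigma = rng.uniform(0.1, 0.5, n)   # volatility

X = np.column_stack([S, K, T, r, sigma])
y = black_scholes_call(S, K, T, r, sigma)   # the "labels" from the classical model

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, y)   # the trained net can now price many contracts far faster than re-solving the model

print("example price (classical vs net):", y[0], net.predict(X[:1])[0])
```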

Industries and organisations that are pulling ahead are figuring out where to replace standard methods and complex, time-consuming computations with machine-learning. They are also using it for more complex modelling approaches, adding further variables that cannot usually be factored into standard methodologies. The most obvious benefit is speed: machines can compute millions of times faster than humans. These techniques also have the potential to be far more accurate and allow us to make better-informed decisions.

But the human element is critical. The accuracy of potentially life-changing outcomes will depend on how we identify where we use these techniques, how we build the algorithms, how we choose and manage data and, finally, in how we interpret and act upon the results.

Prof McWalter is an applied mathematician who lectures in computational finance at UCT's African Institute of Financial Markets and Risk Management. Prof Kienitz lectures at the University of Wuppertal and is an adjunct associate professor at UCT. His research interests include numerical methods in finance and machine-learning applied to financial problems and derivative instruments.

Read this article:
Machine-learning is a boon, but it still needs a human hand - Business Day

Rise in the demand for Machine Learning & AI skills in the post-COVID world – Times of India

The world has seen an unprecedented challenge and is battling this invisible enemy with all its might. The spread of the novel coronavirus has left global economies hanging by a thread, businesses impacted and most people locked down. But while the physical world has come to a drastic halt or slow-down, the digital world is blooming. And in addition to understanding the possibilities of home workspaces, companies are finally understanding the scope of Machine Learning and Artificial Intelligence. A trend that was already garnering attention in recent years, ML & AI have particularly taken centre stage as more and more brands realise the possibilities of these tools. According to a research report released in February, demand for data engineers was up 50% and demand for data scientists was up 32% in 2019 compared to the prior year. Not only is machine learning being used by researchers to tackle this global pandemic, but it is also being seen as an essential tool in building a post-COVID world.

This pandemic is being fought on the basis of numbers and data, which is a key reason behind the surge in people's interest in Machine Learning: it helps us collect, analyse and understand vast quantities of data. Combined with the power of Artificial Intelligence, Machine Learning can help with early understanding of problems and quick resolutions. In recent times, ML & AI have been used by doctors and medical personnel to track the virus, identify potential patients and even analyse possible cures. Even in the current economic crisis, jobs in data science and machine learning have been among the least affected. All these factors indicate that machine learning and artificial intelligence are here to stay, and this is the key reason that data science is an area you can particularly focus on during this lockdown.

The capabilities of Machine Learning and Data Sciences

One of the key reasons that a number of people have been able to shift to working from home without much hassle has to be the use of ML & AI by businesses. This shift has also motivated many businesses, both small-scale and large-scale, to re-evaluate their functioning. With companies already announcing plans to look at a more robust working mechanism, which involves less office space and more detailed and structured online working systems, the focus on Machine Learning is bound to increase considerably.

The Current Possibilities

The world of data science has come out stronger during this lockdown, and the interest in and importance given to the subject are on the rise. AI-powered mechanics and operations have already made it easier to manage various spaces with lower risk, and this trend of turning to AI is bound to increase in the coming years. This is why getting educated in this field can improve your skills in this segment. If you have always been intrigued by data science and machine learning, or are already working in this field and looking for ways to accelerate your career, there are various courses you can turn to. With the increased free time that staying at home affords, you can begin an additional degree to pad up your resume and learn some cutting-edge concepts while gaining access to industry experts.

Start learning more about Machine Learning & AI

If you are wondering where to begin this journey of learning, a leading online education service provider, upGrad, has curated programs that would suit you. From Data Science to in-depth learning in AI, there are multiple programs on their website that cover various domains. The PG Diploma in Machine Learning and AI, in particular, has a brilliant curriculum that will help you progress in the field of Machine Learning and Artificial Intelligence. A carefully crafted program from IIIT Bangalore which offers 450+ hours of learning with more than 10 practical hands-on capstone projects, it has been designed to help people get a deeper understanding of real-life problems in the field.

Understanding the PG Diploma in Machine Learning & AI

This 1-year program at upGrad has been designed especially for working professionals who are looking for a career push. The curriculum consists of 30+ Case Studies and Assignments and 25+ Industry Mentorship Sessions, which help you understand everything you need to know about this field. The program strikes a balance between the practical exposure required to instil better management and problem-solving skills and the theoretical knowledge that will sharpen your skills in this category. The program also gives learners IIIT Bangalore Alumni Status and Job Placement Assistance with Top Firms on successful completion.

Read the original here:
Rise in the demand for Machine Learning & AI skills in the post-COVID world - Times of India

This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 – India Today

In keeping with its tradition of high excellence and achievements, Greenwood High International School's student Tarun Paparaju of Class 12 has achieved the 'Grand Master' level in kernels, the highest accreditation in Kaggle, holding a rank of 13 out of 118,024 Kagglers worldwide. Kaggle is the world's largest online community for Data Science and Artificial Intelligence.

There are only 20 Kernel Grandmasters out of the three million users on Kaggle worldwide, and Tarun, aged 17 years, is honored to be one of the 20 Kernel Grandmasters now. Users of Kaggle are placed at different levels based on the quality and accuracy of their solutions to real-world artificial intelligence problems. The five levels in ascending order are Novice, Contributor, Expert, Master, and Grandmaster.

Kaggle hosts several data science competitions and contestants are challenged to find solutions to these real-world challenges. Kernels are a medium through which Kagglers share their code and insights on how to solve the problem.

These kernels include in-depth data analysis, visualisation and machine learning, usually written in Python or the R programming language. Other Kagglers can upvote a kernel if they believe it provides useful insights or solves the problem. The 'Kernels Grandmaster' title at Kaggle requires 15 kernels that have earned gold medals.

Tarun's passion for Calculus, Mathematical modeling, and Data science from a very young age got him interested in participating and contributing to the Kaggle community.

He loves solving real-world Data Science problems, especially in areas based on deep learning, such as natural language processing and signal processing. Tarun is an open-source contributor to Keras, a deep learning framework.

He has proposed and added Capsule NN layer support to Keras framework. He writes blogs about his adventures and learnings in data science.

Now, he closely works with the Kaggle community and aspires to be a scholar in the area of Natural language processing. Additionally, he loves playing cricket and football. Sports is a large part of his life outside Data science and academics.


Visit link:
This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 - India Today

Machine Learning in Medicine Market 2020-2024 Review and Outlook – Latest Herald

ORBIS RESEARCH has recently announced its Global Machine Learning in Medicine Market report, with critical analysis of the current state of the industry, demand for the product, the environment for investment and existing competition. The Global Machine Learning in Medicine Market report is a focused study of various market-affecting factors and a comprehensive survey of the industry, covering major aspects like product types, various applications, top regions, growth analysis, market potential, challenges for investors, opportunity assessments, major drivers and key players.

This report aims to arm readers with a conclusive judgment on the potential of the factors that propel growth in the Global Machine Learning in Medicine Market. The report identifies and deciphers each of the market dimensions to evaluate logical derivatives that have the potential to set the market's growth course. Besides presenting notable insights on these factors, the report's subsequent sections provide information on regional segmentation, as well as perspectives on region-specific developments and leading market players' objectives to trigger maximum revenue generation and profits.

This study covers the following key players:
Monday.com

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4328851

A thorough rundown of essential elements such as drivers, threats, challenges and opportunities is discussed at length in this elaborate report and analyzed to document logical conclusions. The report also covers the competitive landscape, accurately identifying opportunities as well as threats and challenges, and elaborates on the innumerable factors and growth-triggering decisions that make the Machine Learning in Medicine Market a highly remunerative one.

This research-based analytical review is a handbook of crucial market information and developments, encompassing a holistic record of growth-promoting triggers, trends, factors, dynamics, challenges and threats, as well as barrier analysis, that direct and influence the profit trajectory of the Machine Learning in Medicine Market. It also details growth facets in terms of product sections, payment and transaction platforms, service portfolio and applications, as well as the technological interventions that facilitate growth potential in the Global Machine Learning in Medicine Market.

Access Complete Report @ https://www.orbisresearch.com/reports/index/global-machine-learning-in-medicine-market-professional-survey-2019-by-manufacturers-regions-countries-types-and-applications-forecast-to-2024

Market segment by Type, the product can be split into:
On-premises
Software-as-a-Service (SaaS)
Cloud Based

Market segment by Application, split into:
Aerospace
Automotive industry
Biotech and pharmaceutical
Chemical industry
Consumer products

The report also incorporates ample understanding of numerous analytical practices, such as SWOT and PESTEL analysis, to identify optimum profit resources in the Machine Learning in Medicine Market. Other vital factors related to the market, such as scope, growth potential, profitability and structural break-down, have been distinctly documented in this report to leverage holistic market growth.

For Enquiry before buying report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4328851

Some Key TOC Points:
1 Industry Overview of Machine Learning in Medicine
2 Industry Chain Analysis of Machine Learning in Medicine
3 Manufacturing Technology of Machine Learning in Medicine
4 Major Manufacturers Analysis of Machine Learning in Medicine
Continued

About Us: Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, and we produce the perfect market research study required for our clients.

Contact Us:
Hector Costello
Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155

See more here:
Machine Learning in Medicine Market 2020-2024 Review and Outlook - Latest Herald

A.I. can’t solve this: The coronavirus could be highlighting just how overhyped the industry is – CNBC

Monitors display a video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.

Gilles Sabri | The New York Times

The world is facing its biggest health crisis in decades, but one of the world's most promising technologies, artificial intelligence (AI), isn't playing the major role some may have hoped for.

Renowned AI labs at the likes of DeepMind, OpenAI, Facebook AI Research, and Microsoft have remained relatively quiet as the coronavirus has spread around the world.

"It's fascinating how quiet it is," said Neil Lawrence, the former director of machine learning at Amazon Cambridge.

"This (pandemic) is showing what bulls--t most AI hype is. It's great and it will be useful one day but it's not surprising in a pandemic that we fall back on tried and tested techniques."

Those techniques include good, old-fashioned statistical methods and mathematical models. The latter are used to create epidemiological models, which predict how a disease will spread through a population. Right now, these are far more useful than fields of AI like reinforcement learning and natural-language processing.
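
For context, the mathematical models referred to here are typically compartmental models such as SIR; a minimal, illustrative version (with made-up parameters) looks like this:

```python
# A minimal SIR epidemiological model, the kind of "tried and tested" mathematical
# model the article refers to. Parameter values are illustrative only.
import numpy as np
from scipy.integrate import odeint

def sir(state, t, beta, gamma):
    S, I, R = state
    dS = -beta * S * I             # susceptibles becoming infected
    dI = beta * S * I - gamma * I  # net change in infections
    dR = gamma * I                 # infected recovering
    return [dS, dI, dR]

beta, gamma = 0.3, 0.1             # transmission and recovery rates (illustrative)
t = np.linspace(0, 160, 161)       # days
solution = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma))

peak_day = t[np.argmax(solution[:, 1])]
print(f"infections peak around day {peak_day:.0f}")
```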

Of course, there are a few useful AI projects happening here and there.

In March, DeepMind announced that it had used a machine-learning technique called "free modelling" to detail the structures of six proteins associated with SARS-CoV-2, the coronavirus that causes the Covid-19 disease. Elsewhere, Israeli start-up Aidoc is using AI imaging to flag abnormalities in the lungs, and a U.K. start-up founded by Viagra co-inventor David Brown is using AI to look for Covid-19 drug treatments.

Verena Rieser, a computer science professor at Heriot-Watt University, pointed out that autonomous robots can be used to help disinfect hospitals and AI tutors can support parents with the burden of home schooling. She also said "AI companions" can help with self isolation, especially for the elderly.

"At the periphery you can imagine it doing some stuff with CCTV," said Lawrence, adding that cameras could be used to collect data on what percentage of people are wearing masks.

Separately, a facial recognition system built by U.K. firm SCC has also been adapted to spot coronavirus sufferers instead of terrorists. In Oxford, England, Exscientia is screening more than 15,000 drugs to see how effective they are as coronavirus treatments. The work is being done in partnership with Diamond Light Source, the U.K.'s national synchrotron.

But AI's role in this pandemic is likely to be more nuanced than some may have anticipated. AI isn't about to get us out of the woods any time soon.

"It's kind of indicating how hyped AI was," said Lawrence, who is now a professor of machine learning at the University of Cambridge. "The maturity of techniques is equivalent to the noughties internet."

AI researchers rely on vast amounts of nicely labeled data to train their algorithms, but right now there isn't enough reliable coronavirus data to do that.

"AI learns from large amounts of data which has been manually labeled, a time-consuming and expensive task," said Catherine Breslin, a machine learning consultant who used to work on Amazon Alexa.

"It also takes a lot of time to build, test and deploy AI in the real world. When the world changes, as it has done, the challenges with AI are going to be collecting enough data to learn from, and being able to build and deploy the technology quickly enough to have an impact."

Breslin agrees that AI technologies have a role to play. "However, they won't be a silver bullet," she said, adding that while they might not directly bring an end to the virus, they can make people's lives easier and more fun while they're in lockdown.

The AI community is thinking long and hard about how it can make itself more useful.

Last week, Facebook AI announced a number of partnerships with academics across the U.S.

Meanwhile, DeepMind's polymath leader Demis Hassabis is helping the Royal Society, the world's oldest independent scientific academy, on a new multidisciplinary project called DELVE (Data Evaluation and Learning for Viral Epidemics). Lawrence is also contributing.

See original here:
A.I. can't solve this: The coronavirus could be highlighting just how overhyped the industry is - CNBC