HPC In 2020: Acquisitions And Mergers As The New Normal – The Next Platform

After a decade of vendor consolidation that saw some of the world's biggest IT firms acquire first-class HPC providers such as SGI, Cray, and Sun Microsystems, as well as smaller players like Penguin Computing, WhamCloud, Appro, and Isilon, it is natural to wonder who is next. Or maybe, more to the point, who is left?

As it turns out, there are still plenty of companies, large and small, that can fill critical holes in the product portfolios of HPC providers, or those who want to be HPC players. These niche acquisitions will be especially important to these same providers as they expand into HPC-adjacent markets such as artificial intelligence, data analytics and edge computing.

One company that can play into all of these markets is FPGA-maker Xilinx. Since Intel acquired Altera in 2015, Xilinx is the only standalone company of any size that makes reconfigurable logic devices. Given that, the natural buyer for Xilinx would be AMD, Intel's arch-nemesis. AMD, of course, already has a highly competitive lineup of CPUs and GPUs to challenge its much larger rival, and the addition of an FPGA portfolio would open a third front. It would also provide AMD entry into a whole array of new application markets where FPGAs operate: ASIC prototyping, IoT, embedded aerospace/automotive, 5G communications, AI inference, database acceleration, and computational storage, to name a few.

The only problem is that Xilinx's current market cap is around $25 billion, or about half the current market cap of AMD. And if you're wondering about AMD's piggy bank, the chipmaker had $1.2 billion in cash on hand as of September 2019. That means any deal would probably take the form of a merger rather than a straight acquisition. There's nothing wrong with that, but a merger is a more complex decision and has greater ramifications for both parties. That's why the rumors of a Xilinx acquisition have tended to center on larger semiconductor manufacturers that might be looking to diversify their offerings, like Broadcom or Qualcomm. Those acquisitions wouldn't offer the HPC and AI technology synergies that AMD could provide, but they would likely be easier to execute.

Another area that continues to be ripe for acquisitions is the storage market. In HPC, Panasas and DataDirect Networks stand alone (well, stand together) as the two HPC specialists left in the market. And of those two, the more modest-sized Panasas would be easier to swallow. But most HPC OEMs, including the biggies like Hewlett Packard Enterprise, Dell Technologies, and Lenovo, already have their own HPC storage and file system offerings of one sort or another, although Lenovo is probably the most deficient in this regard. For what it's worth, though, Panasas, which has been around since 1999, has never attracted the kind of suitor willing to fold the company's rather specialized parallel file system technologies into its own product portfolio. In all honesty, we don't expect that to change.

The real storage action in the coming years in HPC, as well as in the enterprise and the cloud, is going to be in the software-defined space, where companies like WekaIO, VAST Data, Excelero, and DataCore Software have built products that can virtualize all sorts of hardware. That's because the way storage is being used and deployed in the datacenter these days is being transformed by cheaper capacity (disks) and cheaper IOPS (NVM-Express and other SSD devices), the availability of cloud storage, and the inverse trends of disaggregation and hyperconvergence.

As we noted last July: "While there are plenty of NAS and SAN appliances being sold into the enterprise to support legacy applications, modern storage tends to be either disaggregated, with compute and storage broken free of each other at the hardware level but glued together on the fly with software to look local, or hyperconverged, with the compute and block storage virtualized and running on the same physical server clusters and atop the same server virtualization hypervisors."

Any of the aforementioned SDS companies, along with others, may find themselves courted by OEMs and storage-makers, and even cloud providers. DDN has been busy in that regard, having acquired software-defined storage maker Nexenta in May 2019. We expect to see more of such deals in the coming years. Besides DDN, other storage companies like NetApp should be looking hard at bringing more SDS in-house. The big cloud providers (Amazon, Microsoft, Google, and so on) will also be making some big investments in SDS technologies, even if they're not buying such companies outright.

One market that is nowhere near the consolidation stage is quantum computing. However, that doesn't mean companies won't be looking to acquire some promising startups in this area, even at this early stage. While major tech firms such as IBM, Google, Intel, Fujitsu, Microsoft, and Baidu have already invested a lot in in-house development and are busy selecting technology partners, other companies have taken a more wait-and-see approach.

In the latter category, one company that particularly stands out is HPE. In this case, the company is more focused on near-term R&D, like memristors or other memory-centric technologies. While there may be some logic in letting other companies spend their money figuring out the most promising approaches for quantum computing, and then swooping in to copy (or buy) whatever technology is most viable, there is also the risk of being left behind. That's something HPE cannot afford.

That said, HPE has recently invested in IonQ, a promising quantum computing startup that has built a workable prototype using ion trap technology. The investment was provided via Pathfinder, HPE's investment arm. In an internal blog post on the subject penned by Abhishek Shukla, managing director of global venture investments, and Ray Beausoleil, senior fellow of large-scale integrated photonics, the authors extol the virtues of IonQ's technical approach:

"IonQ's technology has already surpassed all other quantum computers now available, demonstrating the largest number of usable qubits in the market. Its gate fidelity, which measures the accuracy of logical operations, is greater than 98 percent for both one-qubit and two-qubit operations, meaning it can handle longer calculations than other commercial quantum computers. We believe IonQ's qubits and methodology are of such high quality, they will be able to scale to 100 qubits (and 10,000 gate operations) without needing any error correction."

As far as we can tell, HPE has no plans to acquire the company (and it shares investment in the startup with other companies, including Amazon, Google, and Samsung). But if HPE is truly convinced IonQ is the path forward, it would make sense to pull the acquisition trigger sooner rather than later.

We have no illusions that any of this will come to pass in 2020, or ever. As logical as the deals we have suggested seem to us, the world of acquisitions and mergers is a lot more mysterious and counterintuitive than we'd like to admit (cases in point: Intel buying Whamcloud, or essentially buying Cloudera through such heavy investment). More certain is the fact that these deals will continue to reshape the HPC vendor landscape in the coming decade as companies go after new markets and consolidate their hold on old ones. If anything, the number of businesses bought and sold will increase as high performance computing, driven by AI and analytics, extends into more application domains. Or, as the Greeks put it more succinctly, the only constant is change.

Google's Quantum Supremacy will mark the End of the Bitcoin in 2020 – The Coin Republic

By Ritika Sharma. Monday, 13 January 2020, 03:49 EST (modified: Monday, 13 January 2020, 05:00 EST)

Whenever quantum computing hits the headlines, it leaves not just Bitcoin holders but every cryptocurrency holder worried about the uncertainty around their holdings.

It is widely believed that Bitcoin's underlying technology, the blockchain, is immutable, meaning it cannot be changed without authority over the encryption keys.

However, with quantum computers, it may become possible to break a blockchain's cryptographic codes. Quantum computing could undermine the most significant features of blockchain, such as immutability and security, making it vulnerable.

Google achieved quantum supremacy in late 2019, which poses a threat to Bitcoin. It would also be a threat to blockchain more broadly, since quantum computing could affect the blockchain's key features, such as inalterability and security, making it a highly vulnerable technology.

Later, China joined Google in the quantum supremacy race and announced that it is working on quantum technology. With this, the year 2020 might witness the end of the crypto era.

How can Quantum computing break the Blockchain?

The reason behind this fear is quite genuine and straightforward: Bitcoin, like any cryptocurrency, depends on cryptography, hash functions, and asymmetric key pairs, whose security relies mainly on the limits of conventional computing power. The hash function calculates an effectively random number for each block.

The results obtained by this process are effortless to verify but challenging to find. However, quantum computing has powerful algorithmic capabilities, which are precisely the enemy of this kind of key.
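
To make that asymmetry concrete, here is a minimal, illustrative Python sketch (not Bitcoin's actual implementation) of a proof-of-work-style puzzle: finding a nonce that makes a SHA-256 hash start with several zeros takes many attempts, while checking a claimed answer takes a single hash.

    import hashlib

    def mine(block_data, difficulty=4):
        # Hard direction: search for a nonce whose SHA-256 digest starts with
        # `difficulty` hex zeros; on average this takes 16**difficulty attempts.
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    def verify(block_data, nonce, difficulty=4):
        # Easy direction: checking a claimed nonce takes a single hash.
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    nonce = mine(b"block payload")
    print(nonce, verify(b"block payload", nonce))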

Quantum computing uses subatomic particles, which can exist in more than one state at a time. This feature makes quantum computing far faster at certain tasks than the technology we use today.

Quantum computers are claimed to work up to 100 million times faster than current systems on specific problems; such computational power could solve, in a matter of seconds or minutes, certain problems that would take current systems 10,000 years.

With such computational power, quantum computers could invert the one-way functions that underpin this encryption, making one-way encryption obsolete.

The risk to the blockchain is greater if this capability gets into the wrong hands. Hackers with a quantum computer could attack the cryptocurrency ledger and take complete control of the blockchain.

Will Google's quantum computing wipe out your Bitcoins?

Google's quantum supremacy applies only to a comparison with traditional computers on a narrowly defined classical problem; it is not yet practical, general-purpose quantum technology. It was presented bluntly as "quantum supremacy," though it is just one step in the wider quantum computing space.

Even if Google's quantum computer demonstrates computing power on specific problems that far exceeds the best-performing supercomputers, the results of this research by Google do not mean much for Bitcoin. It is nowhere near what we could call breaking Bitcoin or the blockchain.

Although Google's quantum supremacy does not pose any immediate threat to Bitcoin, many people in the space are still stressed about the quantum threat theory. Many analysts claim that Shor's quantum algorithm could crack private keys, but again, there is a long way to go before it could break Bitcoin's blockchain.

According to researchers, a quantum computer with 4,000 qubits would undoubtedly be able to break the blockchain. Still, Google's quantum computer has only 53 qubits, which cannot cause any harm to the blockchain, and it is worth mentioning that the higher the qubit count, the more difficult the machine becomes to build.

Satoshi Nakamoto's Proposed Solution to Beat Quantum Supremacy

Satoshi was a true visionary: the things we are concerned about today had already been answered by him. In 2010, Satoshi Nakamoto responded to a question about quantum computers asked by the username llama on the Bitcointalk forum.

He replied that if Bitcoin were suddenly cracked, the signature scheme would be destroyed; but if it were broken gradually, the system would still have time to convert to a stronger function and re-sign all assets. Another, cruder answer to this question was suggested by the author of Mastering Bitcoin, Andreas Antonopoulos: "If the quantum computer comes, we will upgrade."

The quantum supremacy threat isn't new to the crypto world, and many cryptocurrency projects, such as Ethereum and quantum-focused chains, are working on making their blockchains quantum resistant. Experts in the cryptocurrency space are also advocating the development of quantum-resistant encryption technology to ensure the security of funds.

Unless a far more powerful, practical quantum processor emerges, Bitcoin and its developers still have time to secure it. Even so, with the continuous development of quantum technology and of chips with more qubits, a sword of Damocles will keep hanging over the head of cryptocurrency.

New York University Partners with IBM to Explore Quantum Computing for Simulation of Quantum Systems and Advancing Quantum Education – Quantaneo, the…

The announcement of the agreement was made during CES 2020, the annual global technology conference and showcase in Las Vegas.

Together with the Air Force Research Lab (AFRL) and IBM, NYU will explore quantum computing research to study measurement-based quantum computing, materials discovery with variational quantum eigensolver, and emulating new phases on small quantum systems.

"We are excited to join AFRL and IBM to transform quantum computing concepts into a powerful technology by educating a new quantum workforce, expanding our scientific partnership and engaging in cross-disciplinary collaboration," said Javad Shabani, an assistant professor of physics at NYU.

Under the agreement to join the AFRL hub, NYU will be part of a community of Fortune 500 companies, startups, academic institutions, and research labs working to advance quantum computing and explore practical applications. NYU will leverage IBM's quantum expertise and resources, Qiskit software and developer tools, and will have cloud-based access to IBM's Quantum Computation Center. IBM offers, through the cloud, 15 of the most advanced universal quantum computing systems available, including a 53-qubit system, the largest commercially available system in the industry.
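
As a flavor of the Qiskit tools mentioned above, here is a minimal sketch that builds a two-qubit Bell-state circuit; actually running it on one of IBM's cloud-hosted systems requires an IBM Quantum account and a backend selection, which are not shown here.

    # Requires the Qiskit package (pip install qiskit); illustrative only.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
    qc.h(0)                     # put qubit 0 into superposition
    qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])  # read both qubits into the classical bits

    print(qc.draw())            # text diagram of the Bell-state circuit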

AI, ML and quantum computing to cement position in 2020 – Tech Observer

From the emergence of cognitive intelligence, in-memory computing, fault-tolerant quantum computing, and new materials-based semiconductor devices, to the faster growth of industrial IoT, large-scale collaboration between machines, production-grade blockchain applications, modular chip design, and AI technologies to protect data privacy, more technology advancements and breakthroughs are expected to gain momentum and generate big impacts on our daily life.

"We are in an era of rapid technology development. In particular, technologies such as cloud computing, artificial intelligence, blockchain, and data intelligence are expected to accelerate the pace of the digital economy," said Jeff Zhang, Head of Alibaba DAMO Academy and President of Alibaba Cloud Intelligence.

The following are highlights from the Alibaba DAMO Academy predictions for the top 10 trends in the tech community for this year:

Artificial intelligence has reached or surpassed humans in areas of perceptual intelligence such as speech-to-text, natural language processing, and video understanding, but in the field of cognitive intelligence, which requires external knowledge, logical reasoning, or domain migration, it is still in its infancy. Cognitive intelligence will draw inspiration from cognitive psychology, brain science, and human social history, combined with techniques such as cross-domain knowledge graphs, causal inference, and continual learning, to establish effective mechanisms for the stable acquisition and expression of knowledge. These will enable machines to understand and utilize knowledge, achieving key breakthroughs from perceptual intelligence to cognitive intelligence.

In the Von Neumann architecture, memory and processor are separate, and computation requires data to be moved back and forth. With the rapid development of data-driven AI algorithms in recent years, we have reached a point where the hardware has become the bottleneck in the exploration of more advanced algorithms. In the Processing-in-Memory (PIM) architecture, in contrast to the Von Neumann architecture, memory and processor are fused together and computations are performed where the data is stored, with minimal data movement. As such, computation parallelism and power efficiency can be significantly improved. We believe innovations in PIM architecture are the tickets to next-generation AI.

In 2020, 5G, the rapid development of IoT devices, cloud computing and edge computing will accelerate the fusion of information systems, communication systems, and industrial control systems. Through advanced Industrial IoT, manufacturing companies can achieve automation of machines, in-factory logistics, and production scheduling, as a way to realize C2B smart manufacturing. In addition, interconnected industrial systems can adjust and coordinate the production capacity of both upstream and downstream vendors. Ultimately, this will significantly increase manufacturers' productivity and profitability. For manufacturers whose production goods are valued at hundreds of trillions of RMB, a productivity increase of 5-10% would mean additional trillions of RMB in value.

Traditional single-agent intelligence cannot meet the real-time perception and decision-making needs of large-scale intelligent devices. The development of collaborative sensing technology for the Internet of Things and 5G communication technology will enable collaboration among multiple agents: machines will cooperate and compete with each other to complete target tasks. The group intelligence brought by the cooperation of multiple intelligent agents will further amplify the value of the intelligent system: large-scale intelligent traffic light dispatching will realize dynamic and real-time adjustment, warehouse robots will work together to complete cargo sorting more efficiently, driverless cars will perceive the overall traffic conditions on the road, and group unmanned aerial vehicle (UAV) collaboration will get through last-mile delivery more efficiently.

The traditional model of chip design cannot efficiently respond to the fast-evolving, fragmented and customized needs of chip production. Open-source SoC chip design based on RISC-V, high-level hardware description languages, and IP-based modular chip design methods have accelerated the rapid development of agile design methods and the ecosystem of open-source chips. In addition, the modular design method based on chiplets uses advanced packaging methods to package chiplets with different functions together, which makes it possible to quickly customize and deliver chips that meet the specific requirements of different applications.

BaaS (Blockchain-as-a-Service) will further reduce the barriers to entry for enterprise blockchain applications. A variety of hardware chips embedded with core algorithms, used at the edge and in the cloud and designed specifically for blockchain, will also emerge, allowing assets in the physical world to be mapped to assets on the blockchain, further expanding the boundaries of the Internet of Value and realizing multi-chain interconnection. In the future, a large number of innovative blockchain application scenarios featuring multi-dimensional collaboration across different industries and ecosystems will emerge, and large-scale production-grade blockchain applications with more than 10 million DAI (Daily Active Items) will gain mass adoption.

In 2019, the race to reach Quantum Supremacy brought the focus back to quantum computing. The demonstration, using superconducting circuits, boosts overall confidence in superconducting quantum computing for the realization of a large-scale quantum computer. In 2020, the field of quantum computing will receive increasing investment, which comes with enhanced competition. The field is also expected to experience a speed-up in industrialization and the gradual formation of an ecosystem. In the coming years, the next milestones will be the realization of fault-tolerant quantum computing and the demonstration of quantum advantage on real-world problems. Either is a great challenge given present knowledge. Quantum computing is entering a critical period.

Under the pressure of both Moore's Law and the explosive demand for computing power and storage, it is difficult for classic silicon-based transistors to sustain the development of the semiconductor industry. Until now, major semiconductor manufacturers have had no clear answer or option for chips beyond 3nm. New materials will enable new logic, storage, and interconnection devices through new physical mechanisms, driving continuous innovation in the semiconductor industry. For example, topological insulators and two-dimensional superconducting materials that can achieve lossless transport of electrons and spin can become the basis for new high-performance logic and interconnect devices, while new magnetic materials and new resistive switching materials can realize high-performance magnetic memory such as SOT-MRAM, as well as resistive memory.

The compliance costs demanded by recent data protection laws and regulations related to data transfer are getting higher than ever before. In light of this, there has been growing interest in using AI technologies to protect data privacy. The essence is to enable the data user to compute a function over input data from different data providers while keeping that data private. Such AI technologies promise to solve the problems of data silos and lack of trust in today's data sharing practices, and will truly unleash the value of data in the foreseeable future.
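
A minimal sketch of the idea, assuming a simple additive secret-sharing scheme (one of several techniques used for privacy-preserving computation, alongside federated learning and homomorphic encryption): each provider splits its private value into random shares, and only the aggregate ever becomes visible.

    import random

    PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

    def share(value, n_parties):
        # Split a private value into n additive shares; any subset short of all
        # n shares reveals nothing about the value.
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    # Three data providers each hold a private number; only the total is revealed.
    private_inputs = [42, 17, 99]
    all_shares = [share(v, 3) for v in private_inputs]

    # Each computing party sums the one share it receives from every provider,
    # and the partial sums are then combined into the true total (158).
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
    print(sum(partial_sums) % PRIME)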

With the ongoing development of cloud computing technology, the cloud has grown far beyond the scope of IT infrastructure and has gradually evolved into the center of all IT technology innovation. The cloud has a close relationship with almost all IT technologies, including new chips, new databases, self-driving adaptive networks, big data, AI, IoT, blockchain, quantum computing and so forth. Meanwhile, it creates new technologies, such as serverless computing, cloud-native software architecture, software-hardware integrated design, and intelligent automated operations. Cloud computing is redefining every aspect of IT, making new IT technologies more accessible to the public. The cloud has become the backbone of the entire digital economy.

We're approaching the limits of computer power – we need new programmers now – The Guardian

Way back in the 1960s, Gordon Moore, the co-founder of Intel, observed that the number of transistors that could be fitted on a silicon chip was doubling every two years. Since the transistor count is related to processing power, that meant that computing power was effectively doubling every two years. Thus was born Moore's law, which for most people working in the computer industry (or at any rate those younger than 40) has provided the kind of bedrock certainty that Newton's laws of motion did for mechanical engineers.
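
To see what doubling every two years implies, here is a small, illustrative Python calculation (the 1971 Intel 4004 baseline of roughly 2,300 transistors is used as a convenient, approximate starting point; the figures are illustrative, not exact industry data):

    # Transistor counts doubling every two years from a 1971 baseline
    # (roughly 2,300 transistors on the Intel 4004).
    base_year, base_count = 1971, 2300

    for year in (1981, 1991, 2001, 2011, 2021):
        doublings = (year - base_year) / 2
        print(year, f"{int(base_count * 2 ** doublings):,}")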

There is, however, one difference. Moore's law is just a statement of an empirical correlation observed over a particular period in history, and we are reaching the limits of its application. In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. "In terms of size of transistor," he said, "you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far, but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit."

We've now reached 2020, and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there's been lots of research into ingenious ways of packing more computing power into machines, for example using multi-core architectures in which a CPU has two or more separate processing units, called "cores", in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.) And of course there is also a good deal of frenzied research into quantum computing, which could, in principle, be an epochal development.

But computing involves a combination of hardware and software, and one of the predictable consequences of Moore's law is that it made programmers lazier. Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes. He wrote it in assembly language to increase efficiency and save space; there's a legend that for years afterwards he could recite the entire program by heart.

There are thousands of stories like this from the early days of computing. But as Moore's law took hold, the need to write lean, parsimonious code gradually disappeared and incentives changed. Programming became industrialised as "software engineering". The construction of sprawling software ecosystems such as operating systems and commercial applications required large teams of developers; these then spawned associated bureaucracies of project managers and executives. Large software projects morphed into the kind of death march memorably chronicled in Fred Brooks's celebrated book, The Mythical Man-Month, which was published in 1975 and has never been out of print, for the very good reason that it's still relevant. And in the process, software became bloated and often inefficient.

But this didn't matter because the hardware was always delivering the computing power that concealed the bloatware problem. Conscientious programmers were often infuriated by this. "The only consequence of the powerful hardware I see," wrote one, "is that programmers write more and more bloated software on it. They become lazier, because the hardware is fast they do not try to learn algorithms nor to optimise their code; this is crazy!"

It is. In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates's chief technology officer, set out his Four Laws of Software. 1: software is like a gas; it expands to fill its container. 2: software grows until it is limited by Moore's law. 3: software growth makes Moore's law possible; people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.

As Moore's law reaches the end of its dominion, Myhrvold's laws suggest that we basically have only two options. Either we moderate our ambitions or we go back to writing leaner, more efficient code. In other words, back to the future.

What just happened? Writer and researcher Dan Wang has a remarkable review of the year in technology on his blog, including an informed, detached perspective on the prospects for Chinese domination of new tech.

Algorithm says no. There's a provocative essay by Cory Doctorow on the LA Review of Books blog on the innate conservatism of machine learning.

Fall of the big beasts. "How to lose a monopoly: Microsoft, IBM and antitrust" is a terrific long-view essay about company survival and change by Benedict Evans on his blog.

Quantum Computing Technologies Market to Witness Huge Growth by 2020-2025, Latest study reveals – ReportsPioneer

The Global Quantum Computing Technologies Market has witnessed continuous growth in the past few years and is projected to grow even further during the forecast period (2020-2025). The assessment provides a 360-degree view and insights, outlining the key outcomes of the industry. These insights help business decision-makers formulate better business plans and make informed decisions for improved profitability. In addition, the study helps venture capitalists understand the companies better and make informed decisions. Some of the key players in the Global Quantum Computing Technologies market are Airbus Group, Cambridge Quantum Computing, IBM, Google Quantum AI Lab, Microsoft Quantum Architectures, Nokia Bell Labs, Alibaba Group Holding Limited, Intel Corporation & Toshiba.

What's keeping Airbus Group, Cambridge Quantum Computing, IBM, Google Quantum AI Lab, Microsoft Quantum Architectures, Nokia Bell Labs, Alibaba Group Holding Limited, Intel Corporation & Toshiba ahead in the market? Benchmark yourself with the strategic moves and findings recently released by HTF MI.

Get a Sample PDF with the Latest Figures @ https://www.htfmarketreport.com/sample-report/1812333-global-quantum-computing-technologies-market-3

The Major Players Covered in this Report: Airbus Group, Cambridge Quantum Computing, IBM, Google Quantum AI Lab, Microsoft Quantum Architectures, Nokia Bell Labs, Alibaba Group Holding Limited, Intel Corporation & Toshiba

By product type, the market is primarily split into: Software & Hardware

By end users/application, this report covers the following segments: Government, Business, High-Tech, Banking & Securities, Manufacturing & Logistics, Insurance & Other

Regional Analysis for the Quantum Computing Technologies Market: United States, Europe, China, Japan, Southeast Asia, India & Central & South America

For the consumer-centric market, the below information can be provided as part of customization:

Survey Analysis will be provided by Age, Gender, Occupation, Income Level or Education

Consumer Traits (if applicable):
Buying patterns (e.g. comfort & convenience, economical, pride)
Buying behavior (e.g. seasonal, usage rate)
Lifestyle (e.g. health conscious, family orientated, community active)
Expectations (e.g. service, quality, risk, influence)

The Global Quantum Computing Technologies Market study also covers market status, share, future trends, growth rate, sales, SWOT analysis, channels, and distributors, along with development plans for the forecast period 2020-2025. It aims to strategically analyse the market with respect to individual growth trends, prospects, and their contribution to the market. The report attempts to forecast the market size for five major regions, namely North America, Europe, Asia Pacific (APAC), Middle East and Africa (MEA), and Latin America.

If you have any specific requirements, ask our expert @ https://www.htfmarketreport.com/enquiry-before-buy/1812333-global-quantum-computing-technologies-market-3

The Quantum Computing Technologies market factors described in this report are:

Key Strategic Developments in the Global Quantum Computing Technologies Market: The research includes the key strategic developments of the market, comprising R&D, M&A, agreements, new product launches, collaborations, partnerships, joint ventures, and regional growth of the key competitors operating in the market on a global and regional scale.

Key Market Features in the Global Quantum Computing Technologies Market: The report assesses key market features, including revenue, capacity, price, capacity utilization rate, production rate, gross, production, consumption, import/export, supply/demand, cost, market share, CAGR, and gross margin. In addition, the study provides a comprehensive analysis of the key market factors and their latest trends, along with relevant market segments and sub-segments.

Analytical Market Highlights & Approach: The Global Quantum Computing Technologies Market report provides rigorously studied and evaluated data on the top industry players and their scope in the market by means of several analytical tools. Analytical tools such as Porter's five forces analysis, feasibility study, SWOT analysis, and ROI analysis have been used to review the growth of the key players operating in the market.

Table of Contents:

Global Quantum Computing Technologies Market Study Coverage: It includes key manufacturers covered, key market segments, the scope of products offered in the global Quantum Computing Technologies market, years considered, and study objectives. Additionally, it touches on the segmentation study provided in the report on the basis of product type and application.

Global Quantum Computing Technologies Market Executive Summary: It gives a summary of key studies, market growth rate, competitive landscape, market drivers, trends and issues, and macroscopic indicators.

Global Quantum Computing Technologies Market Production by Region: Here, the report provides information related to import and export, production, revenue, and key players of all regional markets studied.

Global Quantum Computing Technologies Market Profile of Manufacturers: Each player profiled in this section is studied on the basis of SWOT analysis, their products, production, value, capacity, and other vital factors.

For the complete Table of Contents, please click here @ https://www.htfmarketreport.com/reports/1812333-global-quantum-computing-technologies-market-3

Key Points Covered in the Quantum Computing Technologies Market Report:
Quantum Computing Technologies Overview, Definition and Classification
Market drivers and barriers
Quantum Computing Technologies Market Competition by Manufacturers
Quantum Computing Technologies Capacity, Production, Revenue (Value) by Region (2020-2025)
Quantum Computing Technologies Supply (Production), Consumption, Export, Import by Region (2020-2025)
Quantum Computing Technologies Production, Revenue (Value), Price Trend by Type {Software & Hardware}
Quantum Computing Technologies Market Analysis by Application {Government, Business, High-Tech, Banking & Securities, Manufacturing & Logistics, Insurance & Other}
Quantum Computing Technologies Manufacturers Profiles/Analysis
Quantum Computing Technologies Manufacturing Cost Analysis
Industrial/Supply Chain Analysis, Sourcing Strategy and Downstream Buyers
Marketing Strategy by Key Manufacturers/Players, Connected Distributors/Traders
Standardization, Regulatory and Collaborative Initiatives
Industry Road Map and Value Chain
Market Effect Factors Analysis

Buy the PDF Report @ https://www.htfmarketreport.com/buy-now?format=1&report=1812333

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.

About the Author: HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited. HTF Market Report is a global research and market intelligence consulting organization uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in turning goals into reality. Our understanding of the interplay between industry convergence, mega trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the accurate forecast in every industry we cover so our clients can reap the benefits of being early market entrants and can accomplish their goals and objectives.

Contact Us: Craig Francis (PR & Marketing Manager), HTF Market Intelligence Consulting Private Limited, Unit No. 429, Parsonage Road, Edison, NJ 08837, USA. Phone: +1 (206) 317 1218

Connect with us at LinkedIn | Facebook | Twitter

Going Beyond Machine Learning To Machine Reasoning – Forbes

From Machine Learning to Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn't follow the usual technology adoption lifecycle. Popularized by Geoffrey Moore in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. Technology is developed and finds early interest by innovators, and then early adopters, and if the technology can make the leap across the "chasm", it gets adopted by the early majority market and then it's off to the races with demand by the late majority and finally technology laggards. If the technology can't cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn't fit the technology adoption lifecycle pattern.

But AI isn't a discrete technology. Rather it's a series of technologies, concepts, and approaches all aligning towards the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence works, and their concepts of how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren't investing in "AI", but rather they're investing in the output of AI research and technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, then new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait): many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there's another layer that's not quite understood, and back to our research institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease at one of those next layers: understanding. That is, knowing what something is (recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns among a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses from voice assistant questions, and is also why we can't truly get autonomous machine capabilities in a wide range of situations. Without understanding, there's no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can't adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful for understanding these layers of increasing value is the "DIKUW Pyramid":

DIKUW Pyramid

While the Wikipedia entry above conveniently skips the Understanding step in their entry, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how various different things are related to each other. In 1984, the world's longest-lived AI project started. The Cyc project is focused on generating a comprehensive "ontology" and knowledge base of common sense, basic concepts and "rules of thumb" about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
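
A toy sketch of the idea, vastly simpler than Cyc itself: facts are stored as subject-relation-object triples in a small knowledge graph, and a single hand-written inference rule derives new facts from existing ones. The entities and relation names are made up for illustration only.

    # A toy knowledge graph of subject-relation-object facts, with one inference
    # rule; a highly simplified sketch of the knowledge-graph-plus-inference idea.
    facts = {
        ("rain", "is_a", "weather"),
        ("weather", "affects", "outdoor_plans"),
        ("umbrella", "protects_from", "rain"),
    }

    def infer_affects(graph):
        # Rule: if X is_a Y and Y affects Z, then X affects Z.
        derived = set()
        for (x, rel1, y) in graph:
            if rel1 != "is_a":
                continue
            for (y2, rel2, z) in graph:
                if rel2 == "affects" and y2 == y:
                    derived.add((x, "affects", z))
        return derived

    print(infer_affects(facts))  # {('rain', 'affects', 'outdoor_plans')}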

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly intelligent if they don't understand what the underlying things they are recognizing or classifying are. This means we have to dig deeper than machine learning for intelligence. We need to peel this onion one level deeper, scoop out another tasty parfait layer. We need more than machine learning - we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and functionality and opened up a world of possibility that was not possible without the ability to train machines to identify and recognize patterns in data. However, this power is crippled by the fact that these systems are not really able to functionally use that information for higher ends, or apply learning from one domain to another without human involvement. Even transfer learning is limited in application.

Indeed, we're rapidly facing the reality that we're going to soon hit the wall on the current edge of capabilities with machine learning-focused AI. To get to that next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that's going to require some breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The Cyc project is long-lived because after all these decades the quest for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also all the inter-relationships between those entities. There are millions, if not billions, of "things" that a machine needs to know. Some of these things are tangible like "rain" but others are intangible such as "thirst". The work of encoding these relationships is being partially automated, but still requires humans to verify the accuracy of the connections... because after all, if machines could do this we would have solved the machine recognition challenge. It's a bit of a chicken-and-egg problem this way. You can't solve machine recognition without having some way to codify the relationships between information. But you can't scalably codify all the relationships that machines would need to know without some form of automation.

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened compute load and helped to make data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advancements, complicated machine learning models with lots of dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more in complexity beyond machine learning. Accomplishing the task of reasoning out the complicated relationships between things and truly understanding these things might be beyond today's compute and data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will slow at some point for one simple reason: we still don't understand intelligence and how it works. Despite the amazing work of researchers and technologists, we're still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. At some point we will be faced with the limitations of our assumptions and implementations and we'll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence. If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.

The Problem with Hiring Algorithms – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Originally published in EthicalSystems.org, December 1, 2019

In 2004, when a webcam was relatively unheard-of tech, Mark Newman knew that it would be the future of hiring. One of the first things the 20-year-old did, after getting his degree in international business, was to co-found HireVue, a company offering a digital interviewing platform. Business trickled in. While Newman lived at his parents' house, in Salt Lake City, the company, in its first five years, made just $100,000 in revenue. HireVue later received some outside capital, expanded and, in 2012, boasted some 200 clients (including Nike, Starbucks, and Walmart) which would pay HireVue, depending on project volume, between $5,000 and $1 million. Recently, HireVue, which was bought earlier this year by the Carlyle Group, has become the source of some alarm, or at least trepidation, for its foray into the application of artificial intelligence in the hiring process. No longer does the company merely offer clients an asynchronous interviewing service, a way for hiring managers to screen thousands of applicants quickly by reviewing their video interviews; HireVue can now give companies the option of letting machine-learning algorithms choose the best candidates for them, based on, among other things, applicants' tone, facial expressions, and sentence construction.

If that gives you the creeps, you're not alone. A 2017 Pew Research Center report found few Americans to be enthused, and many worried, by the prospect of companies using hiring algorithms. More recently, around a dozen interviewees assessed by HireVue's AI told the Washington Post that it felt alienating and dehumanizing to have to wow a computer before being deemed worthy of a company's time. They also wondered how their recording might be used without their knowledge. Several applicants mentioned passing on the opportunity because thinking about the AI interview, as one of them told the paper, "made my skin crawl." Had these applicants sat for a standard 30-minute interview, comprised of a half-dozen questions, the AI could have analyzed up to 500,000 data points. Nathan Mondragon, HireVue's chief industrial-organizational psychologist, told the Washington Post that each one of those points becomes an ingredient in the person's calculated score, between 1 and 100, on which hiring decisions can depend. New scores are ranked against a store of traits (mostly having to do with language use and verbal skills) from previous candidates for a similar position who went on to thrive on the job.

HireVue wants you to believe that this is a good thing. After all, their pitch goes, humans are biased. If something like hunger can affect a hiring manager's decision (let alone classism, sexism, lookism, and other isms) then why not rely on the less capricious, more objective decisions of machine-learning algorithms? No doubt some job seekers agree with the sentiment Loren Larsen, HireVue's Chief Technology Officer, shared recently with the Telegraph: "I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day." Of course, the appeal of AI hiring isn't just about doing right by the applicants. As a 2019 white paper from the Society for Industrial and Organizational Psychology notes, AI applied to assessing and selecting talent "offers some exciting promises for making hiring decisions less costly and more accurate for organizations while also being less burdensome and (potentially) fairer for job seekers."

Do HireVue's algorithms treat potential employees fairly? Some researchers in machine learning and human-computer interaction doubt it. Luke Stark, a postdoc at Microsoft Research Montreal who studies how AI, ethics, and emotion interact, told the Washington Post that HireVue's claims (that its automated software can glean a worker's personality and predict their performance from such things as tone) should make us skeptical:

Systems like HireVue, he said, have become "quite skilled at spitting out data points that seem convincing, even when they're not backed by science." And he finds this "charisma of numbers" really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants' careers.

The best AI systems today, he said, are "notoriously prone to misunderstanding meaning and intent." But he worried that even their perceived success at divining a person's true worth could help perpetuate a homogenous corporate monoculture of automatons, each new hire modeled after the last.

Eric Siegel, an expert in machine learning and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, echoed Stark's remarks. In an email, Siegel told me, "Companies that buy into HireVue are inevitably, to a great degree, falling for that feeling of wonderment and speculation that a kid has when playing with a Magic Eight Ball." That, in itself, doesn't mean HireVue's algorithms are completely unhelpful. "Driving decisions with data has the potential to overcome human bias in some situations, but also, if not managed correctly, could easily instill, perpetuate, magnify, and automate human biases," he said.

The 4 Hottest Trends in Data Science for 2020 – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Originally published in Towards Data Science, January 8, 2020

2019 was a big year for all of Data Science.

Companies all over the world across a wide variety of industries have been going through what people are calling a digital transformation. That is, businesses are taking traditional business processes such as hiring, marketing, pricing, and strategy, and using digital technologies to make them 10 times better.

Data Science has become an integral part of those transformations. With Data Science, organizations no longer have to make their important decisions based on hunches, best guesses, or small surveys. Instead, they're analyzing large amounts of real data to base their decisions on real, data-driven facts. That's really what Data Science is all about: creating value through data.

This trend of integrating data into core business processes has grown significantly, with interest increasing by over four times in the past 5 years according to Google Search Trends. Data is giving companies a sharp advantage over their competitors. With more data and better Data Scientists to use it, companies can acquire information about the market that their competitors might not even know exists. It's become a game of data or perish.

Google search popularity of Data Science over the past 5 years. Generated by Google Trends.

In today's ever-evolving digital world, staying ahead of the competition requires constant innovation. Patents have gone out of style, while Agile methodology and catching new trends quickly are very much in.

Organizations can no longer rely on their rock-solid methods of old. If a new trend like Data Science, Artificial Intelligence, or Blockchain comes along, it needs to be anticipated beforehand and adapted quickly.

The following are the 4 hottest Data Science trends for the year 2020. These are trends which have gathered increasing interest this year and will continue to grow in 2020.

(1) Automated Data Science

Even in today's digital age, Data Science still requires a lot of manual work: storing data, cleaning data, visualizing and exploring data, and finally, modeling data to get some actual results. That manual work is just begging for automation, and thus we have seen the rise of automated Data Science and Machine Learning.

Nearly every step of the Data Science pipeline has been or is in the process of becoming automated.

Auto-data cleaning has been heavily researched over the past few years. Cleaning big data often takes up most of a Data Scientist's expensive time. Both startups and large companies such as IBM offer automation and tooling for data cleaning.
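
A rough sketch of the routine cleaning work these tools automate, using pandas on a made-up table; the column names and rules below are purely illustrative, not any vendor's API.

    import numpy as np
    import pandas as pd

    raw = pd.DataFrame({
        "age":    [34, np.nan, 29, 29, 120],        # a missing value and an implausible outlier
        "income": [52000, 61000, np.nan, np.nan, 58000],
        "city":   ["NYC", "nyc ", "Boston", "Boston", "NYC"],
    })

    cleaned = (
        raw.assign(city=raw["city"].str.strip().str.upper())  # normalize text categories
           .drop_duplicates()                                  # remove exact duplicate rows
           .assign(age=lambda d: d["age"].clip(upper=100))     # cap obvious outliers
    )
    cleaned = cleaned.fillna(cleaned.median(numeric_only=True))  # impute numeric gaps with medians
    print(cleaned)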

Another large part of Data Science known as feature engineering has undergone significant disruption. Featuretools offers a solution for automatic feature engineering. On top of that, modern Deep Learning techniques such as Convolutional and Recurrent Neural Networks learn their own features without the need for manual feature design.
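
In the same spirit, here is a small, generic pandas sketch (not the Featuretools API) of automated feature generation: sweep a list of aggregation primitives over a toy transactions table to produce per-customer candidate features without hand-designing each one.

    import pandas as pd

    # Toy transactions table: one row per purchase, keyed by customer.
    transactions = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2],
        "amount": [20.0, 35.0, 5.0, 12.5, 7.5],
    })

    # Sweep a list of aggregation primitives over the table to generate
    # per-customer candidate features automatically.
    primitives = ["count", "sum", "mean", "max", "std"]
    features = transactions.groupby("customer_id")["amount"].agg(primitives)
    features.columns = [f"amount_{p}" for p in primitives]
    print(features)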

Perhaps the most significant automation is occurring in the Machine Learning space. Both Data Robot and H2O have established themselves in the industry by offering end-to-end Machine Learning platforms, giving Data Scientists a very easy handle on data management and model building. AutoML, a method for automatic model design and training, has also boomed over 2019 as these automated models surpass the state-of-the-art. Google, in particular, is investing heavily in Cloud AutoML.
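
The core loop that AutoML platforms automate at much larger scale can be sketched with scikit-learn: try several candidate models and hyperparameter grids, and keep the best cross-validated one. This is an illustrative sketch, not how DataRobot, H2O, or Cloud AutoML are actually implemented.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = load_iris(return_X_y=True)

    # Candidate models with small hyperparameter grids to search over.
    candidates = [
        (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [3, None]}),
    ]

    # Keep whichever candidate scores best under 5-fold cross-validation.
    best_score, best_model = 0.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, cv=5)
        search.fit(X, y)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_

    print(best_score, best_model)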

In general, companies are investing heavily in building and buying tools and services for automated Data Science. Anything to make the process cheaper and easier. At the same time, this automation also caters to smaller and less technical organizations who can leverage these tools and services to have access to Data Science without building out their own team.

(2) Data Privacy and Security

Privacy and security are always sensitive topics in technology. All companies want to move fast and innovate, but losing the trust of their customers over privacy or security issues can be fatal. So, they're forced to make it a priority, at least to the bare minimum of not leaking private data.

Data privacy and security has become an incredibly hot topic over the past year as the issues are magnified by enormous public hacks. Just recently, on November 22, 2019, an exposed server with no security was discovered on Google Cloud. The server contained the personal information of 1.2 billion unique people, including names, email addresses, phone numbers, and LinkedIn and Facebook profile information. Even the FBI came in to investigate. It's one of the largest data exposures of all time.

AI and machine learning trends to look toward in 2020 – Healthcare IT News

Artificial intelligence and machine learning will play an even bigger role in healthcare in 2020 than they did in 2019, helping medical professionals with everything from oncology screenings to note-taking.

On top of actual deployments, increased investment activity is also expected this year, and with deeper deployments of AI and ML technology, a broader base of test cases will be available to collect valuable best practices information.

"As AI is implemented more widely in real-world clinical practice, there will be more academic reports on the clinical benefits that have arisen from real-world use," said Pete Durlach, senior vice president for healthcare strategy and new business development at Nuance.

"With healthy clinical evidence, we'll see AI become more mainstream in various clinical settings, creating a positive feedback loop of more evidence-based research and use in the field," he explained. "Soon, it will be hard to imagine a doctor's visit, or a hospital stay that doesn't incorporate AI in numerous ways."

In addition, AI and ambient sensing technology will help re-humanize medicine by allowing doctors to focus less on paperwork and administrative functions, and more on patient care.

"As AI becomes more commonplace in the exam room, everything will be voice enabled, people will get used to talking to everything, and doctors will be able to spend 100% of their time focused on the patient, rather than entering data into machines," Durlach predicted. "We will see the exam room of the future where clinical documentation writes itself."

The adoption of AI for robotic process automation ("RPA") for common and high value administrative functions such as the revenue cycle, supply chain and patient scheduling also has the potential to rapidly increase as AI helps automate or partially automate components of these functions, driving significantly enhanced financial outcomes to provider organizations.

Durlach also noted that the fear that AI will replace doctors and clinicians has dissipated, and the goal now is to figure out how to incorporate AI as another tool to help physicians make the best care decisions possible, effectively augmenting the intelligence of the clinician.

"However, we will still need to protect against phenomenon like alert fatigue, which occurs when users who are faced with many low-level alerts, ignore alerts of all levels, thereby missing crucial ones that can affect the health and safety of patients," he cautioned.

In the next few years, he predicts the market will see technology that strikes a balance between being unobtrusive and supporting doctors in making the best decisions for their patients as they learn to trust AI-powered suggestions and recommendations.

"So many technologies claim they have an AI component, but often there's a blurred line in which the term AI is used in a broad sense, when the technology that's being described is actually basic analytics or machine learning," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "Health system leaders looking to make investments in AI should ask for real-world examples of how the technology is creating ROI for other organizations."

For example, he pointed to a study of Brigham & Women's Home Hospital program, recently published in Annals of Internal Medicine, which employed AI-driven continuous monitoring combined with advanced physiology analytics and related clinical care as a substitute for usual hospital care.

The study found that the program--which included an investment in AI-driven predictive analytics as a key component--reduced costs, decreased healthcare use, and lowered readmissions while increasing physical activity compared with usual hospital care.

"Those types of outcomes could be replicated by other healthcare organizations, which makes a strong clinical and financial case to invest in that type of AI," Rajput said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209

Continued here:
AI and machine learning trends to look toward in 2020 - Healthcare IT News

Technology Trends to Keep an Eye on in 2020 – Built In Chicago

Artificial intelligence and machine learning, with an eye toward task automation.

For Senior Data Scientist James Buban at iHerb, those are just a couple of the tech trends he'll be watching in 2020.

As companies enter a new decade, it's important for their leaders to anticipate how the latest tech trends will evolve in order to determine how they can benefit their businesses and their customers. Ryan Fischer, CEO of 20spokes, said his company uses machine learning to provide a better user experience for clients' customers by leveraging data on individual user behavior.

We asked Buban, Fischer and other local tech execs which trends they're watching this year and how they'll be utilizing them to enhance their businesses. From natural language processing to computer vision, these are the trends that will be shaping tech in 2020.

As a development agency, 20spokes specializes in helping startups plan, build and scale innovative products. CEO Ryan Fischer said he is looking to AI and machine learning to design better chatbots and wrangle large data sets.

What are the top tech trends you're watching in 2020? What impact do you think these trends will have on your industry in particular?

In 2020, we expect AI to play an even bigger role for our clients. When we talk about AI, we are really discussing machine learning and using data to train a model to use patterns and inference.

Working with machine learning continues to get easier with many large providers working on simpler implementations, and we expect the barrier to entry to continue to lower in 2020. We also have more user data which allows us to use machine learning to design more tailored and intelligent experiences for users.

We are using machine learning to improve chatbots to create more dynamic dialogue.

How are you applying these trends in your work in the year ahead?

At 20spokes, we use machine learning to provide a better user experience for our clients' customers by leveraging data on individual user behavior to make more accurate recommendations and suggestions. We're continuing to look at how we can apply it to different sets of data, from providing better insights of reports for large data sets to sending us real-time updates based on trained patterns. We are also using machine learning to improve chatbots to create more dynamic dialogue.

In order to deliver trusted insights on consumer packaged goods, Label Insight Senior Data Scientist James Buban said the company first has to process large amounts of data. Using machine learning and automation, data collection processes can be finished more quickly and more accurately for customers.

What are the top tech trends you're watching in 2020?

The top tech trends that we'll be watching in 2020 are artificial intelligence and machine learning, with an eye toward task automation. In particular, we are interested in advancements in computer vision, such as object detection and recognition. We are also interested in natural language processing, such as entity tagging and text classification. In general, we believe that machine learning automation will play a big role in both the data collection industry and in e-commerce, particularly in the relatively new addition of the food industry to the retail space.

We plan to use computer vision and natural language processing to automate tasks throughout 2020.

How are you applying these trends in your work in the year ahead?

At Label Insight, we are building up a large database of attributes for consumables based on package information. To do so, we first need to collect all package data, which has traditionally been accomplished through a team of dedicated data entry clerks. Due to the huge volume of products that need to be added to our database, this data entry process is expensive, tedious and time-consuming.

Therefore, we plan to use computer vision and natural language processing to begin automating these tasks throughout 2020. We are also planning to use this technology to make our e-commerce solutions more scalable.
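To make the NLP half of that concrete, here is a small, hedged sketch of text classification with scikit-learn; the package snippets and attribute labels below are invented for illustration and are not Label Insight's data or pipeline:

# Map free-text package copy to structured attributes with TF-IDF + logistic regression.
# The training examples and labels are made up; a real system would use far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

package_text = [
    "contains wheat flour, barley malt and rye",
    "certified gluten free oat granola",
    "made with whole milk and cream",
    "dairy free coconut beverage, vegan",
]
labels = ["contains_gluten", "gluten_free", "contains_dairy", "dairy_free"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(package_text, labels)

# Likely 'gluten_free', given the shared n-grams with the training snippet.
print(clf.predict(["gluten free rice crackers with sea salt"]))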

Visit link:
Technology Trends to Keep an Eye on in 2020 - Built In Chicago

Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core – The Register

MIT boffins have devised a software-based tool for predicting how processors will perform when executing code for specific applications.

In three papers released over the past seven months, ten computer scientists describe Ithemal (Instruction THroughput Estimator using MAchine Learning), a tool for predicting the number of processor clock cycles necessary to execute an instruction sequence when looped in steady state, and include a supporting benchmark and algorithm.

Throughput stats matter to compiler designers and performance engineers, but it isn't practical to make such measurements on-demand, according to MIT computer scientists Saman Amarasinghe, Eric Atkinson, Ajay Brahmakshatriya, Michael Carbin, Yishen Chen, Charith Mendis, Yewen Pu, Alex Renda, Ondrej Sykora, and Cambridge Yang.

So most systems rely on analytical models for their predictions. LLVM offers a command-line tool called llvm-mca that presents a model for throughput estimation, and Intel offers a closed-source machine code analyzer called IACA (Intel Architecture Code Analyzer), which takes advantage of the company's internal knowledge about its processors.

Michael Carbin, a co-author of the research and an assistant professor and AI researcher at MIT, told the MIT News Service on Monday that performance model design is something of a black art, made more difficult by Intel's omission of certain proprietary details from its processor documentation.

The Ithemal paper [PDF], presented in June at the International Conference on Machine Learning, explains that these hand-crafted models tend to be an order of magnitude faster than measuring basic block throughput (sequences of instructions without branches or jumps). But building these models is a tedious, manual process that's prone to errors, particularly when processor details aren't entirely disclosed.

Using a neural network, Ithemal can learn to predict throughput using a set of labelled data. It relies on what the researchers describe as "a hierarchical multiscale recurrent neural network" to create its prediction model.
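Ithemal's actual model is a hierarchical multiscale LSTM trained on tokenized x86 basic blocks; the PyTorch sketch below is a heavily simplified, hypothetical version of the same idea (a single recurrent layer regressing a cycle count), with placeholder vocabulary sizes and random stand-in data:

# Embed a basic block's instruction tokens, run them through an RNN, and regress the
# block's steady-state throughput. This is far simpler than Ithemal's real architecture.
import torch
import torch.nn as nn

class ThroughputRegressor(nn.Module):
    def __init__(self, vocab_size: int = 1024, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, sequence_length) integer-encoded instructions/operands
        embedded = self.embed(token_ids)
        _, (final_hidden, _) = self.rnn(embedded)
        return self.head(final_hidden[-1]).squeeze(-1)  # predicted cycle count per block

model = ThroughputRegressor()
fake_blocks = torch.randint(0, 1024, (8, 20))  # 8 blocks of 20 tokens, random stand-ins
fake_cycles = torch.rand(8) * 100              # measured throughputs would go here
loss = nn.functional.mse_loss(model(fake_blocks), fake_cycles)
loss.backward()                                # one illustrative training step
print(float(loss))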

"We show that Ithemals learned model is significantly more accurate than the analytical models, dropping the mean absolute percent error by more than 50 per cent across all benchmarks, while still delivering fast estimation speeds," the paper explains.

A second paper presented in November at the IEEE International Symposium on Workload Characterization, "BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models," describes the BHive benchmark for evaluating Ithemal and competing models: IACA, llvm-mca, and OSACA (Open Source Architecture Code Analyzer). It found Ithemal outperformed the other models except on vectorized basic blocks.

And in December at the NeurIPS conference, the boffins presented a third paper, titled "Compiler Auto-Vectorization with Imitation Learning," that describes a way to automatically generate compiler optimizations which outperform LLVM's SLP vectorizer.

The academics argue that their work shows the value of machine learning in the context of performance analysis.

"Ithemal demonstrates that future compilation and performance engineering tools can be augmented with datadriven approaches to improve their performance and portability, while minimizing developer effort," the paper concludes.


Read more here:
Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core - The Register

How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | – Hotel Technology News

Every hotel should ask the same question: How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. By Jason G. Bryant, Founder and CEO, Nor1 - 1.9.2020

Artificial intelligence (AI) implementation has grown 270% over the past four years and 37% in the past year alone, according to Gartner's 2019 CIO Survey of more than 3,000 executives. On the ubiquity of AI and machine learning (ML), Gartner VP Chris Howard notes, "If you are a CIO and your organization doesn't use AI, chances are high that your competitors do and this should be a concern" (VentureBeat). Hotels may not have CIOs, but any business not seriously considering the implications of ML throughout the organization will find itself in multiple binds, from the inability to offer next-level guest service to operational inefficiencies.

Amazon is the poster child for a sophisticated company committed to machine learning, both in offers (personalized commerce) and behind the scenes in its facilities. Amazon Founder and CEO Jeff Bezos attributes much of Amazon's ongoing financial success and competitive dominance to machine learning. Further, he has suggested that the entire future of the company rests on how well it uses AI. However, as Forbes contributor Kathleen Walsh notes, "There is no single AI group at Amazon. Rather, every team is responsible for finding ways to utilize AI and ML in their work." It is common knowledge that all senior executives at Amazon plan, write, and adhere to a six-page business plan. A piece of every business plan for every business function is devoted to answering the question: How will you utilize machine learning this year?

Every hotel should ask the same question: How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. In the 2017 Deloitte State of Cognitive Survey, which canvassed 1,500 mostly C-level executives, not a single survey respondent believed that cognitive technologies would not drive substantive change. Put more simply: every executive in every industry knows that AI is fundamentally changing the way we do business, both in services and products as well as in operations. Further, 94% reported that artificial intelligence would substantially transform their companies within five years, with most believing the transformation would occur by 2020.

Playing catch-up with this technology can be competitively dangerous, as there is a significant lag between seeing outward-facing results (when you realize your competition is outperforming you) and achieving similar results with a productive, successful strategy of your own. Certainly, revenue management and pricing will be optimized by ML, but operations, guest service, maintenance, loyalty, development, energy usage, and almost every single aspect of the hospitality enterprise will be impacted as well. Any facility where the speed and precision of tactical decision making can be improved will be positively impacted.

Hotels are quick to think that ML means robotic housekeepers and facial recognition kiosks. While these are possibilities, ML can do so much more. Here are just a few of the ways hotels are using AI to save money, improve service, and become more efficient.

Hilton's Energy Program

The LightStay program at Hilton predicts energy, water, and waste usage and costs. The company can track actual consumption against predictive models, which allows them to manage year-over-year performance as well as performance against competitors. Further, some hotel brands can link in-room energy to the PMS so that when a room is empty, the air conditioner automatically turns off. The future of sustainability in the hospitality industry relies on ML to shave every bit off of energy usage and budget. For brands with hundreds and thousands of properties, every dollar saved on energy can affect the bottom line in a big way.

IHG & Human Resources

IHG employs 400,000 people across 5,723 hotels. Holding fast to the idea that the ideal guest experience begins with staff, IHG implemented AI strategies to find the right team member who would best align and fit with each of the distinct brand personalities, notes Hazel Hogben, Head of HR, Hotel Operations, IHG Europe. To create brand personas and algorithms, IHG assessed its top customer-facing senior managers across brands using cognitive, emotional, and personality assessments. It then correlated this with KPI and customer data. Finally, this was cross-referenced with values at the different brands. The algorithms are used to create assessments that test candidates for hire against the personas using gamification-based tools, according to The People Space. Hogben notes that in addition to improving the candidate experience (they like the gamification of the experience), it has also helped eliminate personal or preconceived bias among recruiters. Regarding ML uses for hiring, Harvard Business Review says that in addition to combatting human bias by automatically flagging biased language in job descriptions, ML also identifies highly qualified candidates who might have been overlooked because they didn't fit traditional expectations.

Accor Hotels Upgrades

A 2018 study showed that 70% of hotels say they never or only sometimes promote upgrades or upsells at check-in (PhocusWire). In an effort to maximize the value of premium inventory and increase guest satisfaction, Accor Hotels partnered with Nor1 to implement eStandby Upgrade. With the ML-powered technology, Accor Hotels offers guests personalized upgrades based on previous guest behavior, at a price that the guest has shown a demonstrated willingness to pay, at booking and during the pre-arrival period, up to 24 hours before check-in. This allows the brand to monetize and leverage room features that can't otherwise be captured by standard room category definitions and to optimize the allocation of inventory available on the day of arrival. ML technology can create offers at any point during the guest pathway, including the front desk. Rather than replacing agents, as some hotels fear, it helps them make better, quicker decisions about what to offer guests.

Understanding Travel Reviews

The luxury Dorchester Collection wanted to understand what makes their high-end guests tick. Instead of using traditional secret shopper methods, which don't tell hotels everything they need to know about the guest experience, Dorchester Collection opted to analyze traveler feedback from across major review sites using ML. Much to their surprise, they discovered Dorchester's guests care a great deal more about breakfast than they thought. They also learned that guests want to customize breakfast, so they removed the breakfast menu and allowed guests to order whatever they like. As it turns out, guests love this.

In his May 2019 Google I/O address, Google CEO Sundar Pichai said, "Thanks to advances in AI, Google is moving beyond its core mission of organizing the world's information. We are moving from a company that helps you find answers to a company that helps you get things done" (ZDNet). Pichai has long held that we no longer live in a mobile-first world; we now inhabit an AI-first world. Businesses must necessarily pivot with this shift, evolving processes and products, and sometimes the business model itself, as in Google's case.

Hotels that embrace ML across operations will find that the technologies improve processes in substantive ways. ML improves the guest experience and increases revenue with precision decisioning and analysis across finance, human resources, marketing, pricing and merchandising, and guest services. Though the Hiltons, Marriotts, and IHGs of the hotel world are at the forefront of adoption, ML technologies are accessible, both in price and implementation, for the full range of properties. The time has come to ask every hotel department: How will you use AI this year?

For more about machine learning and its impact on the hotel industry, download Nor1's ebook The Hospitality Executive's Guide to Machine Learning: Will You Be a Leader, Follower, or Dinosaur?

Jason G. Bryant, Nor1 Founder and CEO, oversees day-to-day operations and provides visionary leadership and strategic direction for the upsell technology company. With Jason at the helm, Nor1 has matured into the technology leader in upsell solutions. Headquartered in Silicon Valley, Nor1 provides innovative revenue enhancement solutions to the hospitality industry that focus on the intersection of machine learning, guest engagement and operational efficiency. A seasoned entrepreneur, Jason has over 25 years' experience building and leading international software development and operations organizations.


Read more:
How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | - Hotel Technology News

Forget Machine Learning, Constraint Solvers are What the Enterprise Needs – – RTInsights

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems.

When a business looks to implement an artificial intelligence strategy, even proper expertise can be too narrow. It's what has led many businesses to deploy machine learning or neural networks to solve problems that require other forms of AI, like constraint solvers.

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems. It is the best solution for businesses that have timetabling, assignment or efficiency issues.

In a RedHat webinar, principal software engineer Geoffrey De Smet ran through three use cases for constraint solvers.

Vehicle Routing

Efficient delivery management is something Amazon has seemingly perfected, so much so that it's now an annoyance to have to wait 3-5 days for an item to be delivered. Using RedHat's OptaPlanner, businesses can improve vehicle routing by 9 to 18 percent by optimizing routes and ensuring drivers are able to deliver an optimal amount of goods.

To start, OptaPlanner takes in all the necessary constraints, like truck capacity and driver specialization. It also takes into account regional laws, like the amount of time a driver is legally allowed to drive per day, and creates a route for all drivers in the organization.


In a practical case, De Smet said RedHat saved a technical vehicle routing company over $100 million per year with the constraint solver. Driving time was reduced by 25 percent, and the business was able to reduce its headcount by 10,000.

"The benefits [of OptaPlanner] are to reduce cost, improve customer satisfaction, employee well-being and save the planet," said De Smet. "The nice thing about some of these is that they're complementary; for example, reducing travel time also reduces fuel consumption."

Employee timetabling

Knowing who is covering which shift can be an infuriating task for managers, with all the requests for time off, illness, and mandatory days off. In a workplace where 9 to 5 isn't the rule, it can be even harder to keep track of it all.

RedHat's OptaPlanner is able to take all of the hard constraints (two days off per week, no more than eight-hour shifts) and soft constraints (should have up to 10 hours of rest between shifts) and formulate a timetable that takes all of that into account. When someone asks for a day off, OptaPlanner is able to reassign workers in real time.
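OptaPlanner itself is a Java library, so the sketch below uses a different constraint solver, Google OR-Tools' CP-SAT, purely to illustrate the hard/soft-constraint idea in a few lines of Python; the shift data and penalty are invented:

# Three workers, seven days, one shift per day. Hard constraints must hold;
# the soft constraint is expressed as a penalty to minimize.
from ortools.sat.python import cp_model

workers, days = 3, 7
model = cp_model.CpModel()

# works[w][d] is 1 if worker w covers the single shift on day d.
works = [[model.NewBoolVar(f"w{w}_d{d}") for d in range(days)] for w in range(workers)]

# Hard constraint: exactly one worker covers each day's shift.
for d in range(days):
    model.Add(sum(works[w][d] for w in range(workers)) == 1)

# Hard constraint: nobody works more than five days in the week.
for w in range(workers):
    model.Add(sum(works[w]) <= 5)

# Soft constraint: worker 0 prefers not to work the weekend (days 5 and 6).
# Violations are allowed but penalised via the objective.
model.Minimize(sum(works[0][d] for d in (5, 6)))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for w in range(workers):
        print(f"worker {w}:", [solver.Value(works[w][d]) for d in range(days)])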

De Smet said this is useful for jobs that need to run 24/7, like hospitals, the police force, security firms, and international call centers. According to RedHat's simulation, it should improve employee well-being by 19 to 85 percent, alongside improvements in retention and customer satisfaction.

Task assignment

Even within a single business department, there are skills only a few employees have. For instance, in a call center, only a few will be able to speak fluently in both English and French. To avoid customer annoyance, it is imperative for employees with the right skill-set to be assigned correctly.

With OptaPlanner, managers are able to add employee skills and have the AI assign employees correctly. Using the call center example again, a bilingual advisor may take all calls in French for one day when there's high demand for it, but on other days handle a mix of French and English.

For customer support, the constraint solver would be able to assign a problem to the correct advisor, or to the next best thing, before the customer is connected, thus avoiding giving out the wrong advice or having to pass the customer on to another advisor.

In the webinar, De Smet said that while the constraint solver is a valuable asset for businesses looking to reduce costs, this shouldn't be their only aim.

Without having all stakeholders involved in the implementation, the AI could end up harming other areas of the business, like customer satisfaction or employee retention. This is a similar warning to that given by all analysts on AI implementation: it needs to come from a genuine desire to improve the business to get the best outcome.

Read more from the original source:
Forget Machine Learning, Constraint Solvers are What the Enterprise Needs - - RTInsights

Tiny Machine Learning On The Attiny85 – Hackaday

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience of running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem: the Arduino ATtiny85 compiler does not support a variadic function used by the generator. Fortunately, he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up easily fitting into the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
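For the curious, the desktop half of a project like this is short. The hedged sketch below trains a small support-vector colour classifier in scikit-learn on invented RGB readings; the C-code export step that actually targets the ATtiny85 (handled by [EloquentArduino]'s generator) is omitted:

# Train a tiny SVM on colour-sensor readings. The readings and labels are invented.
from sklearn.svm import SVC

# (R, G, B) readings from the colour sensor, roughly normalised to 0..1.
samples = [
    (0.9, 0.1, 0.1), (0.8, 0.2, 0.1),   # red-ish objects
    (0.1, 0.8, 0.2), (0.2, 0.9, 0.1),   # green-ish objects
    (0.1, 0.2, 0.9), (0.2, 0.1, 0.8),   # blue-ish objects
]
labels = ["red", "red", "green", "green", "blue", "blue"]

clf = SVC(kernel="rbf", gamma="scale").fit(samples, labels)
print(clf.predict([(0.85, 0.15, 0.1)]))                    # should report 'red'
print("support vectors kept:", len(clf.support_vectors_))  # small model, small flash footprint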

It's important to note what [EloquentArduino] isn't doing here: running an artificial neural network. They're just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren't the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data or naming a colour from a colour sensor, the approach here will serve you well. We wonder if this wouldn't be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you're keen to give ML a go, have a look at the rest of the EloquentArduino blog; it's a small goldmine.

We're getting more and more machine learning related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.

See the original post here:
Tiny Machine Learning On The Attiny85 - Hackaday

Dell’s Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels – PCWorld

This business workhorse has a lot to like.

Dell Latitude 9510 hands-on: The three best features

Dell's Latitude 9510 has three features we especially love: The integrated 5G, the Dell Optimizer Utility that tunes the laptop to your preferences, and the thin bezels around the huge display.


The Dell Latitude 9510 is a new breed of corporate laptop. Inspired in part by the company's powerful and much-loved Dell XPS 15, it's the first model in an ultra-premium business line packed with the best of the best, tuned for business users.

Announced January 2 and unveiled Monday at CES in Las Vegas, the Latitude 9510 weighs just 3.2 pounds and promises up to 30 hours of battery life. PCWorld had a chance to delve into the guts of the Latitude 9510, learning more about what's in it and how it was built. Here are the coolest things we saw:

The Dell Latitude 9510 is shown disassembled, with (top, left to right) the magnesium bottom panel, the aluminum display lid, and the internals; and (bottom) the array of ports, speaker chambers, keyboard, and other small parts.

The thin bezels around the 15.6-inch screen (see top of story) are the biggest hint that the Latitude 9510 took inspiration from its cousin, the XPS 15. Despite the size of the screen, the Latitude 9510 is amazingly compact. And yet, Dell managed to squeeze in a camera above the display, thanks to a teeny, tiny sliver of a module.

A closer look at the motherboard of the Dell Latitude 9510 shows the 52Wh battery and the areas around the periphery where Dell put the 5G antennas.

The Latitude 9510 is one of the first laptops we've seen with integrated 5G networking. The challenge of 5G in laptops is integrating all the antennas you need within a metal chassis that's decidedly radio-unfriendly.

Dell made some careful choices, arraying the antennas around the edges of the laptop and inserting plastic pieces strategically to improve reception. Two of the antennas, for instance, are placed underneath the plastic speaker components and plastic speaker grille.

The Dell Latitude 9510 incorporated plastic speaker panels to allow reception for the 5G antennas underneath.

Not ready for 5G? No worries. Dell also offers the Latitude 9510 with Wi-Fi 6, the latest wireless networking standard.

You are constantly asking your PC to do things for you, usually the same things, over and over. Dell's Optimizer software, which debuts on the Latitude 9510, analyzes your usage patterns and tries to save you time with routine tasks.

For instance, the Express SignIn feature logs you in faster. The ExpressResponse feature learns which applications you fire up first and loads them faster for you. Express Charge watches your battery usage and will adjust settings to save battery, or step in with faster charging when you need some juice, pronto. Intelligent Audio will try to block out background noise so you can videoconference with less distraction.

The Dell Latitude 9510's advanced features and great looks should elevate corporate laptops in performance as well as style. It will come in clamshell and 2-in-1 versions, and is due to ship March 26. Pricing is not yet available.

Melissa Riofrio spent her formative journalistic years reviewing some of the biggest iron at PCWorld--desktops, laptops, storage, printers. As PCWorld's Executive Editor she leads PCWorld's content direction and covers productivity laptops and Chromebooks.

Original post:
Dell's Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels - PCWorld

Limits of machine learning – Deccan Herald

Suppose you are driving a hybrid car with a personalised Alexa prototype and happen to witness a road accident. Will your Alexa automatically stop the car to help the victim or call an ambulance? Probably, it would act according to the algorithm programmed into it, which demands the user's command.

But as a fellow traveller with Alexa, what would you do? If you are an empathetic human being, you would try to administer first aid and take the victim to a nearby hospital in your car. This empathy is what is missing in the machines, and largely in the technocracy-driven education that parents are banking on these days.

Tech-buddies

With the advancement of bots or robots teaching in our classrooms, the teachers of millennials are worried. Recently, a WhatsApp video of an AI teacher engaging a class in one of the schools of Bengaluru went viral. Maybe in a decade or two, academic robots in our classrooms will teach mathematics. Or perhaps they will teach children the algorithms that bring them to life, and together they can create another generation of tech-buddies.

I was informed by a friend that coding is taught at the primary level now, which was indeed a surprise for me. Then what about other skills? Maybe life skills like swimming and cooking could also be taught by a combination of YouTube and personal robots. However, we have the edge over the machines in at least one area, and that's basic human values. This is where human intervention can't be eliminated at all.

The values are not taught; rather, they are ingrained at every phase of life by the various people we meet, including parents, teachers, peers, and anyone around us, alongside practising them. Say, for example, how does one teach kids to care for the elderly at home?

Unless one feels the same emotional turmoil as the elderly before them as they are raised, and applies those compassionate values, one wouldn't be motivated to take care of them.

The missing link in academia

The discussions on trans-disciplinary or interdisciplinary courses often put forward multiple subjects, as well as unconventional subjects, to study together, like engineering and terracotta design, or literature and agriculture. However, objections come from within academia, citing a lack of career prospects.

We tend to forget the fact that the best mathematicians were also musicians and the best medicinal practitioners were botanists or farmers too. Interest in one subject might trigger gaining expertise in another, connecting the discrete dots to create a completely new concept.

Life skills like agriculture, pottery, animal care, gardening, and housing are essential skills that have many benefits. Every rural person is equipped with these skills through surrounding experiences. Rather than in a classroom session, this learning takes place by seeing, interacting, as well as making mistakes.

A friend who homeschooled both her kids had similar concerns. She was firmly against formalised education, which teaches a limited amount of information, mostly based on memorisation, taking away the natural interest of the child. Several such institutes are functioning to serve the same goals of lifelong learning. Such schools, aiming at understanding human nature, emotional wellbeing, and artistic and critical thinking, are fundamentally guided by the idea of learning in a fear-free environment.

When scrolling through the admissions pages of these schools, I was surprised that admissions for the 2021 academic year were already completed. This reflects the eagerness of many parents looking for such alternative education systems.

These analogies bring back the basic question of why education? If it is merely for technology-driven jobs, probably by the time your kids grow up there wouldn't be many jobs left, as the machines would have snatched them.

Also, the country is moving towards a technology-driven economy and may not need many skilled labourers. Surely, a few post-millennials would survive in any condition if they are extremely smart and adaptive, but they may need to stop and reboot if their education has not prepared them for the uncertainties to come.

(The writer is with Christ, Bengaluru)

Read the original post:
Limits of machine learning - Deccan Herald

Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies – Business Wire

BOSTON & SAN FRANCISCO--(BUSINESS WIRE)--Pear Therapeutics, Inc., the leader in Prescription Digital Therapeutics (PDTs), announced today that it has entered into agreements with multiple technology innovators, including Firsthand Technology, Inc., leading researchers from the Karolinska Institute in Sweden, Cincinnati Children's Hospital Medical Center, Winterlight Labs, Inc., and NeuroLex Laboratories, Inc. These new agreements continue to bolster Pear's PDT platform by adding to its library of digital biomarkers, machine learning algorithms, and digital therapeutics.

Pear's investment in these cutting-edge technologies further supports its strategy to create the broadest and deepest toolset for the development of PDTs that redefine the standard of care in a range of therapeutic areas. With access to these new technologies, Pear is positioned to develop PDTs in new disease areas while leveraging machine learning to personalize and improve its existing PDTs.

"We are excited to announce these agreements, which expand the leading PDT platform," said Corey McCann, M.D., Ph.D., President and CEO of Pear. "Accessing external technologies allows us to continue to broaden the scope and efficacy of PDTs."

"The field of digital health is evolving rapidly, and PDTs are going to increasingly play a big part because they are designed to allow doctors to treat disease in combination with drug products more effectively than with drugs alone," said Alex Pentland, Ph.D., a leading expert in voice analytics and MIT Professor. "For PDTs to make their mark in healthcare, they will need to continually evolve. Machine learning and voice biomarker algorithms are key to guiding that evolution and personalization."

About Pear Therapeutics

Pear Therapeutics, Inc. is the leader in prescription digital therapeutics. We aim to redefine medicine by discovering, developing, and delivering clinically validated software-based therapeutics to provide better outcomes for patients, smarter engagement and tracking tools for clinicians, and cost-effective solutions for payers. Pear has a pipeline of products and product candidates across therapeutic areas, including severe psychiatric and neurological conditions. Our lead product, reSET, for the treatment of Substance Use Disorder, was the first prescription digital therapeutic to receive marketing authorization from the FDA to treat disease. Pear's second product, reSET-O, for the treatment of Opioid Use Disorder, received marketing authorization from the FDA in December 2018. For more information, visit us at http://www.peartherapeutics.com.


Read the original post:
Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies - Business Wire

IBM research director at CES 2020: We will hit the quantum advantage this decade – TechRepublic

Q Network users are studying carbon chemistry, route optimization, and risk analysis on IBM's quantum computer.

IBM research director Dario Gil started his session at CES 2020 with a science lesson to explain the basics of quantum computing. He hit the highlights of superposition, interference, and entanglement.
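Those three ideas are easier to see in a few lines of code than in prose. The sketch below builds a two-qubit Bell state (superposition plus entanglement) and assumes an older, pre-1.0 Qiskit install where Aer and execute() are available:

# A minimal Bell-state circuit: superposition on qubit 0, entanglement with qubit 1.
# Assumes a pre-1.0 Qiskit with the Aer simulator installed.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into an equal superposition of 0 and 1
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11'; never '01' or '10'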

After this primer, Gil said that the promise of quantum computing is that it offers the power to model natural processes and understand how they work.

"Quantum is the only technology we know that alters the equation of what is possible to solve versus impossible to solve," he said.


Jeannette Garcia, senior manager for quantum applications algorithms and theory at IBM Research, shared some of the real-world problems that IBM is working on:

Garcia's focus is battery research, which is also the topic of a new IBM partnership with Daimler. She said researchers are using quantum computing to figure out quantum chemistry.

"We are looking at the fundamental behavior of atoms on a molecular scale," she said.

IBM launched the Q Network a year ago and the growth has been impressive:

Gil said that the numbers show that people want access to quantum hardware. Users in the IBM Q network are studying these topics:

Gil said that it's easy to print more qubits; the hard part is making the interactions among qubits high quality. IBM reports that researchers are making progress on that metric as well.

Quantum volume is a metric that incorporates the number of qubits and the error rate of interactions. IBM has doubled the quantum volume of the system every year, with the latest improvement increasing the current volume to 32.

"This is the fourth time we doubled the quantum volume of a quantum computer," he said.

"We call this Gambetta's Law after our head of science and technology who came up with the methodology of measuring the power of quantum computing," he said.

Gil said that the "quantum ready" era started in 2016 and the next phase will start when the technology improves enough to achieve "quantum advantage." "First, a whole generation of developers is going to need to learn how to program these computers," he said. "Then when we hit quantum advantage, we'll be able to solve real-world problems and it's absolutely going to happen this decade."

For more CES 2020 coverage, check out this list of the top products of CES 2020.


At CES 2020, IBM research director Dario Gil gave the audience a primer on quantum computing and predicted that the industry will achieve quantum advantage this decade.


See the original post:
IBM research director at CES 2020: We will hit the quantum advantage this decade - TechRepublic

Google and IBM square off in Schrodingers catfight over quantum supremacy – The Register

Column Just before Christmas, Google claimed quantum supremacy. The company had configured a quantum computer to produce results that would take conventional computers some 10,000 years to replicate - a landmark event.

Bollocks, said IBM - which also has big investments both in quantum computing and not letting Google get away with stuff. Using Summit, the world's largest conventional supercomputer at the Oak Ridge National Laboratories in Tennessee, IBM claimed it could do the same calculation in a smidge over two days.

As befits all things quantum, the truth is a bit of both. IBM's claim is fair enough - but it's right at the edge of Summit's capability and frankly a massive waste of its time. Google could, if it wished, tweak the quantum calculation to move it out of that range. And it might: the calculation was chosen precisely not because it was easy, but because it was hard. Harder is better.

Google's quantum CPU has 54 qubits, quantum bits that can stay in a state of being simultaneously one and zero. The active device itself is remarkably tiny, a silicon chip around a centimetre square, or four times the size of the Z80 die in your childhood ZX Spectrum. On top of the silicon, a nest of aluminium tickled by microwaves hosts the actual qubits. The aluminium becomes superconducting below around 100K, but the very coldest part of the circuit is just 15 millikelvins. At this temperature the qubits have low enough noise to survive long enough to be useful.

By configuring the qubits in a circuit, setting up data and analysing the patterns that emerge when the superpositions are observed and thus collapse to either one or zero, Google can determine the probable correct outcome for the problem the circuit represents. 54 qubits, if represented in conventional computer terms, would need 2^54 bits of RAM to represent each step of the calculation, or two petabytes' worth. Manipulating this much data many times over gives the 10 millennia figure Google claims.

IBM, on the other hand, says that it has just enough disk space on Summit to store the complete calculation. However you do it, though, it's not very useful; the only application is in random number generation. That's a fun, important and curiously nuanced field, but you don't really need a refrigerator stuffed full of qubits to get there. You certainly don't need the 27,648 NVidia Tesla GPUs in Summit chewing through 16 megawatts of power.

What Google is actually doing is known in the trade as "pulling a Steve", from the marketing antics of the late Steve Jobs. In particular, his tour at NeXT Inc, the company he started in the late 1980s to annoy Apple and produce idiosyncratic workstations. Hugely expensive to make and even more so to buy, the NeXT systems were never in danger of achieving dominance - but you wouldn't know that from Jobs' pronouncements. He declared market supremacy at every opportunity, although in carefully crafted phrases that critics joked defined the market as "black cubic workstations running NeXTOS."

Much the same is true of Google's claim. The calculation is carefully crafted to do precisely the things that Google's quantum computer can do - the important thing isn't the result, but the journey. Perhaps the best analogy is with the Wright Brothers' first flight: of no practical use, but tremendous significance.

What happened to NeXT? It got out of hardware and concentrated on software, then Jobs sold it - and himself - to Apple, and folded some of that software into MacOS development. Oh, and some cat called Berners-Lee built something called the World Wide Web on a NeXT Cube.

Nothing like this will happen with Google's technology. There's no new web waiting to be borne on the wings of supercooled qubits. Even some of the more plausible things, like quantum decryption of internet traffic, are a very long way from reality - and, once it happens, it's going to be relatively trivial to tweak conventional encryption to defeat it. But the raw demonstration, that a frozen lunchbox consuming virtually no power in its core can outperform a computer chewing through enough wattage to keep a small town going, is a powerful inducement for more work.

That's Google's big achievement. So many new and promising technologies have failed not because they could never live up to expectations but because they can't survive infancy. Existing, established technology has all the advantages: it generates money, it has distribution channels, it has an army of experts behind it, and it can adjust to close down challengers before they get going. To take just one company - Intel has tried for decades to break out of the x86 CPU prison. New wireless standards, new memory technologies, new chip architectures, new display systems, new storage and security ideas - year after year, the company casts about for something new that'll make money. It never gets there.

Google's "quantum supremacy" isn't there either, but it has done enough to protect its infant prince in its superconducting crib. That's worth a bit of hype.


Go here to see the original:
Google and IBM square off in Schrodingers catfight over quantum supremacy - The Register