Monthly Archives: May 2021

Quantum Computing – Intel

Posted: May 20, 2021 at 4:43 am

Ongoing Development in Partnership with Industry and Academia

The challenges in developing functioning quantum computing systems are manifold and daunting. For example, qubits themselves are extremely fragile, with any disturbance, including measurement, causing them to revert from their quantum state to a classical (binary) one, resulting in data loss. Tangle Lake also must operate at profoundly cold temperatures, within a small fraction of one kelvin from absolute zero.

Moreover, there are significant issues of scale, with real-world implementations at commercial scale likely requiring at least one million qubits. Given that reality, the relatively large size of quantum processors is a significant limitation in its own right; for example, Tangle Lake is about three inches square. To address these challenges, Intel is actively developing design, modeling, packaging, and fabrication techniques to enable the creation of more complex quantum processors.

Intel began collaborating with QuTech, a quantum computing organization in the Netherlands, in 2015; that involvement includes a US$50M investment by Intel in QuTech to provide ongoing engineering resources that will help accelerate developments in the field. QuTech was created as an advanced research and education center for quantum computing by the Netherlands Organisation for Applied Research and the Delft University of Technology. Combined with Intel's expertise in fabrication, control electronics, and architecture, this partnership is uniquely suited to the challenges of developing the first viable quantum computing systems.

Currently, Tangle Lake chips produced in Oregon are being shipped to QuTech in the Netherlands for analysis. QuTech has developed robust techniques for simulating quantum workloads as a means to address issues such as connecting, controlling, and measuring multiple, entangled qubits. In addition to helping drive system-level design of quantum computers, the insights uncovered through this work contribute to faster transition from design and fabrication to testing of future generations of the technology.

In addition to its collaboration with QuTech, Intel Labs is also working with other ecosystem members on both fundamental and system-level challenges across the entire quantum computing stack. Joint research being conducted with QuTech, the University of Toronto, the University of Chicago, and others builds upward from quantum devices to include mechanisms such as error correction, hardware- and software-based control mechanisms, and approaches and tools for developing quantum applications.

Beyond Superconduction: The Promise of Spin Qubits

One approach to addressing some of the challenges inherent to superconducting-qubit processors such as Tangle Lake is the investigation of spin qubits by Intel Labs and QuTech. Spin qubits function on the basis of the spin of a single electron in silicon, controlled by microwave pulses. Compared to superconducting qubits, spin qubits far more closely resemble existing semiconductor components operating in silicon, potentially taking advantage of existing fabrication techniques. In addition, this promising area of research holds the potential for advantages in the following areas:

Operating temperature: Spin qubits require extremely cold operating conditions, but to a lesser degree than superconducting qubits (approximately one kelvin compared to 20 millikelvins); because the difficulty of achieving lower temperatures increases exponentially as one approaches absolute zero, this difference potentially offers significant reductions in system complexity.

Stability and duration: Spin qubits are expected to remain coherent for far longer than superconducting qubits, making it far simpler to implement them for algorithms at the processor level.

Physical size: Spin qubits are far smaller than superconducting qubits; a billion of them could theoretically fit in one square millimeter of space. In combination with their structural similarity to conventional transistors, this property could be instrumental in scaling quantum computing systems up to the estimated millions of qubits that will eventually be needed in production systems.

To date, researchers have developed a spin qubit fabrication flow using Intel's 300-millimeter process technology that is enabling the production of small spin-qubit arrays in silicon. In fact, QuTech has already begun testing small-scale spin-qubit-based quantum computer systems. As a publicly shared software foundation, QuTech has also developed the Quantum Technology Toolbox, a Python package for performing measurements and calibration of spin qubits.

Originally posted here:

Quantum Computing - Intel

Posted in Quantum Computing | Comments Off on Quantum Computing – Intel

Cloud-Based Quantum Computing, Explained | Freethink

Posted: at 4:43 am

Name a scientific breakthrough made in the last 50 years, and a computer probably played a role in it. Now, consider what sorts of breakthroughs may be possible with quantum computers.

These next-gen systems harness the weird physics of the subatomic world to complete computations far faster than classical computers, and that processing power promises to revolutionize everything from finance and healthcare to energy and aerospace.

But today's quantum computers are complex, expensive devices, not unlike those first gigantic modern computers: a person can't exactly pop down to an electronics retailer to pick one up (not yet, anyway).

However, there is a way for us to get a taste of that future, today: cloud-based quantum computing.

Cloud computing is the delivery of computing resources (data storage, processing power, software, etc.) on demand over the internet.

Today, there are countless cloud computing service providers, but a few of the biggest are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

Amazon, Microsoft, and Google are massive companies, and their computing resources are equally expansive: AWS alone offers more than 175 cloud computing services, supported by more than 100 data centers across the globe.

A person might never be able to buy the resources those companies own, but through cloud computing, they can essentially rent them, only paying for what they actually use.

A scientist, for example, could pay AWS for 10 hours of access to one of the company's powerful virtual computers to run an experiment, rather than spending far more money to buy a comparable system.

That's just one use, though, and there are countless real-world examples of people using cloud computing services. The shows you watch on Netflix? The company stores them in a database in the cloud. The emails in your Gmail inbox? They're in the cloud, too.

Cloud-based quantum computing combines the benefits of the cloud with the next generation of computers.

In 2016, IBM connected a small quantum computer to the cloud, giving people their first chance to create and run small programs on a quantum computer online.
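To get a concrete sense of what such a small program looks like, here is a minimal sketch written with IBM's open-source Qiskit toolkit. The circuit is only illustrative, and the submission step is shown as commented-out pseudocode because the exact calls for sending a job to cloud hardware vary by provider and SDK version.

# Minimal sketch: build a two-qubit Bell-state circuit with Qiskit and inspect it locally.
# Assumes the open-source Qiskit package is installed (pip install qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

bell = QuantumCircuit(2)           # two qubits; no measurements needed for a statevector check
bell.h(0)                          # put qubit 0 into superposition
bell.cx(0, 1)                      # entangle qubit 0 with qubit 1

ideal = Statevector.from_instruction(bell)
print(ideal.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}, the hallmark of entanglement

# Submitting to real cloud hardware is provider-specific; schematically (placeholder names):
# backend = provider.get_backend("<some-cloud-device>")
# job = backend.run(circuit_with_measurements, shots=1024)

Running a circuit like this on remote hardware, rather than checking it locally as above, is what cloud quantum services have offered since IBM's 2016 launch.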

Since then, IBM has expanded its cloud-based quantum computing offerings, and other companies, including Amazon, have developed and launched their own services.

In 2019, Microsoft announced one such service, Azure Quantum, which includes access to quantum algorithms, hardware, and software. It made that service available to a small number of partners during a limited preview in May 2020.

"Azure Quantum enables every developer, every person, every enterprise to really tap in and succeed in their businesses and their endeavors with quantum solutions," Krysta Svore, the GM of Microsoft Quantum, said in 2019. "And that's incredibly powerful."

Now, Microsoft has expanded its Azure Quantum preview to the public, giving anyone with the funds access to the cutting-edge quantum resources.

Researchers at Case Western Reserve University have already used Microsoft's cloud-based quantum computing service to develop a way to improve the quality and speed of MRI scans for cancer.

Ford, meanwhile, is using it to try to solve the problem of traffic congestion.

Now that anyone can use the services, we could be seeing far more projects like these in the near future rather than years from now when quantum systems are (maybe) as accessible as classical computers are today.

We'd love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [emailprotected].

Here is the original post:

Cloud-Based Quantum Computing, Explained | Freethink

Posted in Quantum Computing | Comments Off on Cloud-Based Quantum Computing, Explained | Freethink

Google wants to build a useful quantum computer by 2029 – The Verge

Posted: at 4:43 am

Google is aiming to build a useful, error-corrected quantum computer by the end of the decade, the company explained in a blog post. The search giant hopes the technology will help solve a range of big problems, from feeding the world and tackling climate change to developing better medicines. To develop the technology, Google has unveiled a new Quantum AI campus in Santa Barbara containing a quantum data center, hardware research labs, and quantum processor chip fabrication facilities. It will spend billions developing the technology over the next decade, The Wall Street Journal reports.

The target announced at Google I/O on Tuesday comes a year and a half after Google said it had achieved quantum supremacy, a milestone where a quantum computer has performed a calculation that would be impossible on a traditional classical computer. Google says its quantum computer was able to perform a calculation in 200 seconds that would have taken 10,000 years or more on a traditional supercomputer. But competitors racing to build quantum computers of their own cast doubt on Google's claimed progress. Rather than taking 10,000 years, IBM argued at the time that a traditional supercomputer could actually perform the task in 2.5 days or less.

This extra processing power could be useful to simulate molecules, and hence nature, accurately, Google says. This might help us design better batteries, create more carbon-efficient fertilizer, or develop more targeted medicines, because a quantum computer could run simulations before a company invests in building real-world prototypes. Google also expects quantum computing to have big benefits for AI development.

Despite claiming to have hit the quantum supremacy milestone, Google says it has a long way to go before such computers are useful. While current quantum computers are made up of fewer than 100 qubits, Google is targeting a machine built with 1,000,000. Getting there is a multistage process. Google says it first needs to cut down on the errors qubits make before it can think about building 1,000 physical qubits together into a single logical qubit. This will lay the groundwork for the quantum transistor, a building block of future quantum computers.
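For a rough sense of what those numbers imply, the back-of-the-envelope arithmetic below uses only the figures quoted above (a million physical qubits, roughly 1,000 physical qubits per logical qubit); it is not an official roadmap calculation.

# Rough arithmetic based on the figures quoted above; not Google's published roadmap.
physical_qubits_target = 1_000_000   # the scale of machine Google says it is targeting
physical_per_logical = 1_000         # physical qubits combined into one error-corrected logical qubit
logical_qubits = physical_qubits_target // physical_per_logical
print(logical_qubits)                # about 1,000 error-corrected logical qubits at that scale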

Despite the challenges ahead, Google is optimistic about its chances. "We are at this inflection point," the scientist in charge of Google's Quantum AI program, Hartmut Neven, told the Wall Street Journal. "We now have the important components in hand that make us confident. We know how to execute the road map." Google eventually plans to offer quantum computing services over the cloud.

Originally posted here:

Google wants to build a useful quantum computer by 2029 - The Verge

Posted in Quantum Computing | Comments Off on Google wants to build a useful quantum computer by 2029 – The Verge

27 Milestones In The History Of Quantum Computing – Forbes

Posted: at 4:43 am

circa 1931: German-born physicist Albert Einstein (1879 - 1955) standing beside a blackboard with chalk-marked mathematical calculations written across it. (Photo by Hulton Archive/Getty Images)

40 years ago, Nobel Prize-winner Richard Feynman argued that "nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical." This was later perceived as a rallying cry for developing a quantum computer, leading to today's rapid progress in the search for quantum supremacy. Here's a very short history of the evolution of quantum computing.

1905: Albert Einstein explains the photoelectric effect (shining light on certain materials can release electrons from the material) and suggests that light itself consists of individual quantum particles, or photons.

1924: The term "quantum mechanics" is first used in a paper by Max Born.

1925: Werner Heisenberg, Max Born, and Pascual Jordan formulate matrix mechanics, the first conceptually autonomous and logically consistent formulation of quantum mechanics.

1925 to 1927: Niels Bohr and Werner Heisenberg develop the Copenhagen interpretation, one of the earliest interpretations of quantum mechanics, which remains one of the most commonly taught.

1930: Paul Dirac publishes The Principles of Quantum Mechanics, a textbook that has become a standard reference work still used today.

1935: Albert Einstein, Boris Podolsky, and Nathan Rosen publish a paper highlighting the counterintuitive nature of quantum superpositions and arguing that the description of physical reality provided by quantum mechanics is incomplete.

1935: Erwin Schrödinger, discussing quantum superposition with Albert Einstein and critiquing the Copenhagen interpretation of quantum mechanics, develops a thought experiment in which a cat (forever known as Schrödinger's cat) is simultaneously dead and alive; Schrödinger also coins the term "quantum entanglement."

1947: Albert Einstein refers for the first time to quantum entanglement as "spooky action at a distance" in a letter to Max Born.

1976: Roman Stanisław Ingarden of the Nicolaus Copernicus University in Toruń, Poland, publishes one of the first attempts at creating a quantum information theory.

1980: Paul Benioff of the Argonne National Laboratory publishes a paper describing a quantum mechanical model of a Turing machine, or classical computer, the first to demonstrate the possibility of quantum computing.

1981: In a keynote speech titled "Simulating Physics with Computers," Richard Feynman of the California Institute of Technology argues that a quantum computer has the potential to simulate physical phenomena that a classical computer cannot.

1985: David Deutsch of the University of Oxford formulates a description for a quantum Turing machine.

1992: The Deutsch-Jozsa algorithm is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm.

1993: The first paper describing the idea of quantum teleportation is published.

1994: Peter Shor of Bell Laboratories develops a quantum algorithm for factoring integers that has the potential to decrypt RSA-encrypted communications, a widely used method for securing data transmissions.

1994: The National Institute of Standards and Technology organizes the first US government-sponsored conference on quantum computing.

1996: Lov Grover of Bell Laboratories invents the quantum database search algorithm.

1998: First demonstration of quantum error correction; first proof that a certain subclass of quantum computations can be efficiently emulated with classical computers.

1999: Yasunobu Nakamura of the University of Tokyo and Jaw-Shen Tsai of Tokyo University of Science demonstrate that a superconducting circuit can be used as a qubit.

2002: The first version of the Quantum Computation Roadmap, a living document involving key quantum computing researchers, is published.

2004: First five-photon entanglement demonstrated by Jian-Wei Pan's group at the University of Science and Technology of China.

2011: The first commercially available quantum computer is offered by D-Wave Systems.

2012: 1QB Information Technologies (1QBit), the first dedicated quantum computing software company, is founded.

2014: Physicists at the Kavli Institute of Nanoscience at the Delft University of Technology, the Netherlands, teleport information between two quantum bits separated by about 10 feet with a zero percent error rate.

2017: Chinese researchers report the first quantum teleportation of independent single-photon qubits from a ground observatory to a low Earth orbit satellite over a distance of up to 1,400 km.

2018: The National Quantum Initiative Act is signed into law by President Donald Trump, establishing the goals and priorities for a 10-year plan to accelerate the development of quantum information science and technology applications in the United States.

2019: Google claims to have reached quantum supremacy by performing a series of operations in 200 seconds that would take a supercomputer about 10,000 years to complete; IBM responds by suggesting it could take 2.5 days instead of 10,000 years, highlighting techniques a supercomputer may use to maximize computing speed.

The race for quantum supremacy is on: the goal is to demonstrate a practical quantum device that can solve a problem that no classical computer can solve in any feasible amount of time. Speed (and sustainability) has always been the measure of the jump to the next stage of computing.

In 1944, Richard Feynman, then a junior staff member at Los Alamos, organized a contest between human computers and the Los Alamos IBM facility, with both performing a calculation for the plutonium bomb. For two days, the human computers kept up with the machines. But on the third day, recalled an observer, "the punched-card machine operation began to move decisively ahead, as the people performing the hand computing could not sustain their initial fast pace, while the machines did not tire and continued at their steady pace" (see When Computers Were Human, by David Alan Grier).

Nobel Prize-winning physicist Richard Feynman stands in front of a blackboard strewn with notation in his lab in Los Angeles, California. (Photo by Kevin Fleming/Corbis via Getty Images)

Visit link:

27 Milestones In The History Of Quantum Computing - Forbes

Posted in Quantum Computing | Comments Off on 27 Milestones In The History Of Quantum Computing – Forbes

Researchers design new experiments to map and test the quantum realm – Harvard Gazette

Posted: at 4:43 am

"Calculating exactly how energy redistributes during a reaction between four atoms is beyond the power of today's best computers," Ni said. A quantum computer might be the only tool that could one day achieve such a complex calculation.

In the meantime, calculating the impossible requires a few well-reasoned assumptions and approximations (picking one location for one of those electrons, for example) and specialized techniques that grant Ni and her team ultimate control over their reaction.

One such technique was another recent Ni lab discovery: she and her team exploited a reliable feature of molecules (their highly stable nuclear spin) to control the quantum state of the reacting molecules all the way through to the product, work they chronicled in a recent study published in Nature Chemistry. They also discovered a way to detect products from a single collision reaction event, a difficult feat when 10,000 molecules could be reacting simultaneously. With these two novel methods, the team could identify the unique spectrum and quantum state of each product molecule, the kind of precise control necessary to measure all 57 pathways their potassium-rubidium reaction could take.

Over several months during the COVID-19 pandemic, the team ran experiments to collect data on each of those 57 possible reaction channels, repeating each channel once every minute for several days before moving on to the next. Luckily, once the experiment was set up, it could be run remotely: lab members could stay home, keeping lab occupancy within COVID-19 standards, while the system churned on.

The test, said Matthew Nichols, a postdoctoral scholar in the Ni lab and an author on both papers, "indicates good agreement between the measurement and the model for a subset containing 50 state-pairs but reveals significant deviations in several state-pairs."

In other words, their experimental data confirmed that previous predictions based on statistical theory (one far less complex than Schrödinger's equation) are accurate, mostly. Using their data, the team could measure the probability that their chemical reaction would take each of the 57 reaction channels. Then, they compared their percentages with the statistical model. Only seven of the 57 showed a significant enough divergence to challenge the theory.

"We have data that pushes this frontier," Ni said. "To explain the seven deviating channels, we need to calculate Schrödinger's equation, which is still impossible. So now, the theory has to catch up and propose new ways to efficiently perform such exact quantum calculations."

Next, Ni and her team plan to scale back their experiment and analyze a reaction between only three atoms (one molecule is made of two atoms, which is then forced to react with a single atom). In theory, this reaction, which has far fewer dimensions than a four-atom reaction, should be easier to calculate and study in the quantum realm. Yet, already, the team has discovered something strange: The intermediate phase of the reaction lives on for many orders of magnitude longer than the theory predicts.

"There is already mystery," Ni said. "It's up to the theorists now."

This work was supported by the Department of Energy, the David and Lucile Packard Foundation, the Arnold O. Beckman Postdoctoral Fellowship in Chemical Sciences, and the National Natural Science Foundation of China.

Link:

Researchers design new experiments to map and test the quantum realm - Harvard Gazette

Posted in Quantum Computing | Comments Off on Researchers design new experiments to map and test the quantum realm – Harvard Gazette

Agnostiq Secures $2M Seed Round to Further Develop SaaS-based Quantum Solutions – HPCwire

Posted: at 4:43 am

TORONTO, May 13, 2021 Agnostiq, Inc., a new quantum computing SaaS startup, has raised $2 million in seed funding to support the continued development of its software platform. The growth financing is led by Differential Ventures, with follow-on participation from Scout Ventures, Tensility Venture Partners, Boost VC, and Green Egg Ventures. The company previously raised $830 thousand in pre-Seed funding, with the majority coming from current investors Differential Ventures and Boost VC.

"Quantum computers are inevitable, and their game-changing nature makes it imperative that businesses invest in developing in-house expertise," says David Magerman, managing partner of Differential Ventures. "Agnostiq's tools address key challenges when it comes to developing proprietary research in the space, which is ultimately what led us to invest."

CEO Oktay Goktas and COO Elliot MacGowan co-founded the company in 2018, aiming to build a business at the forefront of enterprise quantum computing. Goktas, a physicist by training, received his PhD from the Max Planck Institute in Stuttgart, Germany, where he worked under the supervision of Nobel laureate Klaus von Klitzing. Prior to founding Agnostiq, Goktas was a postdoctoral researcher at the Weizmann Institute of Science in Israel and a visiting researcher at the University of Toronto. Prior to Agnostiq, MacGowan worked at Bell Canada in various operational and strategic roles. He received his MBA from the University of Toronto.

"We are extremely excited to further strengthen our relationship with David and officially have him on our board. With this new funding and our new partners, we are going to bring our products to the next level," says Goktas.

Quantum computing is poised to have a transformative impact in the coming years, much like machine learning. But, it remains largely inaccessible to the enterprise, due mainly to the novelty of the technology and the high level of expertise required to build applications. In addition, quantum computing is entirely cloud based and vulnerable to traditional security threats, requiring new methods for data security.

One of only a handful of available SaaS-based quantum solutions hosted on the cloud, Agnostiq's platform comprises three main technologies that make it easier for enterprises to build their own quantum computing applications:

"Our goal is to help clients build quantum computing into their workflows sooner by making it more practical, more accessible, and more secure," says MacGowan. "We're solving many of the biggest challenges that machine learning companies faced in the past ten years and that we all take for granted today."

ABOUT AGNOSTIQ, INC.:

Agnostiq, Inc. is an interdisciplinary team of physicists, computer scientists, and mathematicians with the shared aim of using cutting-edge technology to build practical applications for industry. The company combines best-in-class quantum applications, privacy tools, and support for all of the latest quantum hardware into a powerful and easy-to-use platform designed to help organizations solve mission-critical tasks. Learn more at http://www.agnostiq.ai.

Source: AGNOSTIQ, INC.

Original post:

Agnostiq Secures $2M Seed Round to Further Develop SaaS-based Quantum Solutions - HPCwire

Posted in Quantum Computing | Comments Off on Agnostiq Secures $2M Seed Round to Further Develop SaaS-based Quantum Solutions – HPCwire

The Worldwide Quantum Technology Industry will Reach $31.57 Billion by 2026 – North America to be the Biggest Region – PRNewswire

Posted: at 4:43 am

DUBLIN, May 18, 2021 /PRNewswire/ -- The "Quantum Technology Market by Computing, Communications, Imaging, Security, Sensing, Modeling and Simulation 2021 - 2026" report has been added to ResearchAndMarkets.com's offering.

This report provides a comprehensive analysis of the quantum technology market. It assesses companies/organizations focused on quantum technology, including R&D efforts and potential game-changing quantum tech-enabled solutions. The report evaluates the impact of quantum technology upon other major technologies and solution areas including AI, Edge Computing, Blockchain, IoT, and Big Data Analytics. The report provides an analysis of quantum technology investment, R&D, and prototyping by region and within each major country globally.

The report also provides global and regional forecasts as well as the outlook for quantum technology's impact on embedded hardware, software, applications, and services from 2021 to 2026. The report provides conclusions and recommendations for a wide range of industries and commercial beneficiaries including semiconductor companies, communications providers, high-speed computing companies, artificial intelligence vendors, and more.

Select Report Findings:

Much more than only computing, the quantum technology market provides a foundation for improving all digital communications, applications, content, and commerce. In the realm of communications, quantum technology will influence everything from encryption to the way that signals are passed from point A to point B. While currently in the R&D phase, networked quantum information and communications technology (ICT) is anticipated to become a commercial reality that will represent nothing less than a revolution for virtually every aspect of ICT.

However, there will be a need to integrate the ICT supply chain with quantum technologies in a manner that does not attempt to replace every aspect of classical computing but instead leverages a hybrid computational framework. Traditional High-Performance Computing (HPC) will continue to be used for many existing problems for the foreseeable future, while quantum technologies will be used for encrypting communications, signaling, and will be the underlying basis in the future for all commerce transactions. This does not mean that quantum encryption will replace Blockchain, but rather provide improved encryption for blockchain technology.

The quantum technology market will be a substantial enabler of dramatically improved sensing and instrumentation. For example, gravity sensors may be made significantly more precise through quantum sensing. Quantum electromagnetic sensing provides the ability to detect minute differences in the electromagnetic field. This will provide a wide-ranging number of applications, such as within the healthcare arena wherein quantum electromagnetic sensing will provide the ability to provide significantly improved mapping of vital organs. Quantum sensing will also have applications across a wide range of other industries such as transportation wherein there is the potential for substantially improved safety, especially for self-driving vehicles.

Commercial applications for the quantum imaging market are potentially wide-ranging, including exploration, monitoring, and safety. For example, gas image processing may detect minute changes that could lead to early detection of tank failure or the presence of toxic chemicals. In concert with quantum sensing, quantum imaging may also help with various public safety-related applications such as search and rescue. Some problems are too difficult to calculate but can be simulated and modeled. Quantum simulation and modeling is an area that involves the use of quantum technology to enable simulators that can model complex systems that are beyond the capabilities of classical HPC. Even the fastest supercomputers today cannot adequately model many problems such as those found in atomic physics, condensed-matter physics, and high-energy physics.

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction

3.0 Quantum Technology and Application Analysis
3.1 Quantum Computing
3.2 Quantum Cryptography Communication
3.3 Quantum Sensing and Imaging
3.4 Quantum Dots Particles
3.5 Quantum Cascade Laser
3.6 Quantum Magnetometer
3.7 Quantum Key Distribution
3.8 Quantum Cloud vs. Hybrid Platform
3.9 Quantum 5G Communication
3.10 Quantum 6G Impact
3.11 Quantum Artificial Intelligence
3.12 Quantum AI Technology
3.13 Quantum IoT Technology
3.14 Quantum Edge Network
3.15 Quantum Blockchain

4.0 Company Analysis
4.1 1QB Information Technologies Inc.
4.2 ABB (Keymile)
4.3 Adtech Optics Inc.
4.4 Airbus Group
4.5 Akela Laser Corporation
4.6 Alibaba Group Holding Limited
4.7 Alpes Lasers SA
4.8 Altairnano
4.9 Amgen Inc.
4.10 Anhui Qasky Science and Technology Limited Liability Company (Qasky)
4.11 Anyon Systems Inc.
4.12 AOSense Inc.
4.13 Apple Inc. (InVisage Technologies)
4.14 Biogen Inc.
4.15 Block Engineering
4.16 Booz Allen Hamilton Inc.
4.17 BT Group
4.18 Cambridge Quantum Computing Ltd.
4.19 Chinese Academy of Sciences
4.20 D-Wave Systems Inc.
4.21 Emerson Electric Corporation
4.22 Fujitsu Ltd.
4.23 Gem Systems
4.24 GeoMetrics Inc.
4.25 Google Inc.
4.26 GWR Instruments Inc.
4.27 Hamamatsu Photonics K.K.
4.28 Hewlett Packard Enterprise
4.29 Honeywell International Inc.
4.30 HP Development Company L.P.
4.31 IBM Corporation
4.32 ID Quantique
4.33 Infineon Technologies
4.34 Intel Corporation
4.35 KETS Quantum Security
4.36 KPN
4.37 LG Display Co. Ltd.
4.38 Lockheed Martin Corporation
4.39 MagiQ Technologies Inc.
4.40 Marine Magnetics
4.41 McAfee LLC
4.42 MicroSemi Corporation
4.43 Microsoft Corporation
4.44 Mirsense
4.45 Mitsubishi Electric Corp.
4.46 M-Squared Lasers Limited
4.47 Muquans
4.48 Nanoco Group PLC
4.49 Nanoplus Nanosystems and Technologies GmbH
4.50 Nanosys Inc.
4.51 NEC Corporation
4.52 Nippon Telegraph and Telephone Corporation
4.53 NN-Labs LLC.
4.54 Nokia Corporation
4.55 Nucrypt
4.56 Ocean NanoTech LLC
4.57 Oki Electric
4.58 Oscilloquartz SA
4.59 OSRAM
4.60 PQ Solutions Limited (Post-Quantum)
4.61 Pranalytica Inc.
4.62 QC Ware Corp.
4.63 QD Laser Co. Inc.
4.64 QinetiQ
4.65 Quantum Circuits Inc.
4.66 Quantum Materials Corp.
4.67 Qubitekk
4.68 Quintessence Labs
4.69 QuSpin
4.70 QxBranch LLC
4.71 Raytheon Company
4.72 Rigetti Computing
4.73 Robert Bosch GmbH
4.74 Samsung Electronics Co. Ltd. (QD Vision Inc.)
4.75 SeQureNet (Telecom ParisTech)
4.76 SK Telecom
4.77 ST Microelectronics
4.78 Texas Instruments
4.79 Thorlabs Inc
4.80 Toshiba Corporation
4.81 Tristan Technologies
4.82 Twinleaf
4.83 Universal Quantum Devices
4.84 Volkswagen AG
4.85 Wavelength Electronics Inc.
4.86 ZTE Corporation

5.0 Quantum Technology Market Analysis and Forecasts 2021 - 2026
5.1 Global Quantum Technology Market 2021 - 2026
5.2 Global Quantum Technology Market by Technology 2021 - 2026
5.3 Quantum Computing Market 2021 - 2026
5.4 Quantum Cryptography Communication Market 2021 - 2026
5.5 Quantum Sensing and Imaging Market 2021 - 2026
5.6 Quantum Dots Market 2021 - 2026
5.7 Quantum Cascade Laser Market 2021 - 2026
5.8 Quantum Magnetometer Market 2021 - 2026
5.9 Quantum Key Distribution Market 2021 - 2026
5.9.1 Global Quantum Key Distribution Market by Technology
5.9.1.1 Global Quantum Key Distribution Market by Infrastructure Type
5.9.2 Global Quantum Key Distribution Market by Industry Vertical
5.9.2.1 Global Quantum Key Distribution (QKD) Market by Government
5.9.2.2 Global Quantum Key Distribution Market by Enterprise/Civilian Industry
5.10 Global Quantum Technology Market by Deployment
5.11 Global Quantum Technology Market by Sector
5.12 Global Quantum Technology Market by Connectivity
5.13 Global Quantum Technology Market by Revenue Source
5.14 Quantum Intelligence Market 2021 - 2026
5.15 Quantum IoT Technology Market 2021 - 2026
5.16 Global Quantum Edge Network Market
5.17 Global Quantum Blockchain Market
5.18 Global Quantum Exascale Computing Market
5.19 Regional Quantum Technology Market 2021 - 2026
5.19.1 Regional Comparison of Global Quantum Technology Market
5.19.2 Global Quantum Technology Market by Region
5.19.2.1 North America Quantum Technology Market by Country
5.19.2.2 Europe Quantum Technology Market by Country
5.19.2.3 Asia Pacific Quantum Technology Market by Country
5.19.2.4 Middle East and Africa Quantum Technology Market by Country
5.19.2.5 Latin America Quantum Technology Market by Country

6.0 Conclusions and Recommendations

For more information about this report visit https://www.researchandmarkets.com/r/6syb13

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Here is the original post:

The Worldwide Quantum Technology Industry will Reach $31.57 Billion by 2026 - North America to be the Biggest Region - PRNewswire

Posted in Quantum Computing | Comments Off on The Worldwide Quantum Technology Industry will Reach $31.57 Billion by 2026 – North America to be the Biggest Region – PRNewswire

Following Atoms in Real Time Could Lead to New Types of Materials and Quantum Technology Devices – SciTechDaily

Posted: at 4:43 am

Researchers have used a technique similar to MRI to follow the movement of individual atoms in real time as they cluster together to form two-dimensional materials, which are a single atomic layer thick.

The results, reported in the journal Physical Review Letters, could be used to design new types of materials and quantum technology devices. The researchers, from the University of Cambridge, captured the movement of the atoms at speeds that are eight orders of magnitude too fast for conventional microscopes.

Two-dimensional materials, such as graphene, have the potential to improve the performance of existing and new devices, due to their unique properties, such as outstanding conductivity and strength. Two-dimensional materials have a wide range of potential applications, from bio-sensing and drug delivery to quantum information and quantum computing. However, in order for two-dimensional materials to reach their full potential, their properties need to be fine-tuned through a controlled growth process.

These materials normally form as atoms jump onto a supporting substrate until they attach to a growing cluster. Being able to monitor this process gives scientists much greater control over the finished materials. However, for most materials, this process happens so quickly and at such high temperatures that it can only be followed using snapshots of a frozen surface, capturing a single moment rather than the whole process.

Now, researchers from the University of Cambridge have followed the entire process in real time, at comparable temperatures to those used in industry.

The researchers used a technique known as helium spin-echo, which has been developed in Cambridge over the last 15 years. The technique has similarities to magnetic resonance imaging (MRI), but uses a beam of helium atoms to illuminate a target surface, similar to light sources in everyday microscopes.

"Using this technique, we can do MRI-like experiments on the fly as the atoms scatter," said Dr Nadav Avidor from Cambridge's Cavendish Laboratory, the paper's senior author. "If you think of a light source that shines photons on a sample, as those photons come back to your eye, you can see what happens in the sample."

Instead of photons however, Avidor and his colleagues use helium atoms to observe what happens on the surface of the sample. The interaction of the helium with atoms at the surface allows the motion of the surface species to be inferred.

Using a test sample of oxygen atoms moving on the surface of ruthenium metal, the researchers recorded the spontaneous breaking and formation of oxygen clusters, just a few atoms in size, and the atoms that quickly diffuse between the clusters.

"This technique isn't a new one, but it's never been used in this way, to measure the growth of a two-dimensional material," said Avidor. "If you look back on the history of spectroscopy, light-based probes revolutionized how we see the world, and the next step, electron-based probes, allowed us to see even more."

"We're now going another step beyond that, to atom-based probes, allowing us to observe more atomic-scale phenomena. Besides its usefulness in the design and manufacture of future materials and devices, I'm excited to find out what else we'll be able to see."

Reference: "Ultrafast Diffusion at the Onset of Growth: O/Ru(0001)" by Jack Kelsall, Peter S.M. Townsend, John Ellis, Andrew P. Jardine and Nadav Avidor, 12 April 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.126.155901

The research was conducted in the Cambridge Atom Scattering Centre and supported by the Engineering and Physical Sciences Research Council (EPSRC).

View post:

Following Atoms in Real Time Could Lead to New Types of Materials and Quantum Technology Devices - SciTechDaily

Posted in Quantum Computing | Comments Off on Following Atoms in Real Time Could Lead to New Types of Materials and Quantum Technology Devices – SciTechDaily

How AI can truly advance healthcare and research, and where it’s gone wrong – Healthcare IT News

Posted: at 4:42 am

Artificial intelligence and machine learning can be used to advance healthcare and accelerate life sciences research. And there are many companies on the market today with AI offerings to do just that.

Derek Baird is president of North America at Sensyne Health, which offers AI-based remote patient monitoring for healthcare provider organizations and helps life sciences companies develop new medicines.

Baird believes some large companies have missed the mark on AI and ultimately dismantled public trust in these types of technologies, but that some companies have cracked the code by starting with the basics. He also believes AI success hinges on solving non-glamorous issues like data normalization, interoperability, clinical workflow integration and change management.

Healthcare IT News interviewed Baird to discuss the role of AI in healthcare today and how the technology can solve common problems and advance research.

Q: How can artificial intelligence and machine learning be used to advance healthcare and accelerate life sciences research? Where is it happening now, and what are the heavy lifts for the future?

A: AI is having a profound impact today on the ways we research drugs, conduct clinical trials and understand diseases. And while I think the current role of AI is just the tip of a very big iceberg, I think that as an industry we need to be much more careful about the ways we describe the promise of AI in healthcare.

There has been so much hyperbole around AI that I think we tend to forget that these technologies are just part of the healthcare equation, not some silver bullet that will suddenly solve everything. Biology is hard, complex and still very mysterious in many ways, but AI is already showing promise by allowing us to sift through vast amounts of data much faster and more intelligently than we could before.

One of the ways we are seeing the promise of AI come to life today is in its ability to help us go beyond symptomology and into a deeper understanding of how diseases work in individuals and populations. Let's look at COVID-19.

Our big challenge with COVID-19 is not diagnosis, but that when we have a positive patient, we don't know how sick they will get. We have an understanding of the clinical characteristics of the disease, but the risk factors underlying the transition from mild to severe remain poorly understood. Using AI, it is possible to analyze millions of patient records at a speed and level of detail that humans just cannot come close to.

We have been able to correlate many of the factors that contribute to severe cases, and are now able to predict those who will most likely be admitted to the ICU, who will need ventilation and what their chances of survival are.
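As an illustration of the kind of model behind such predictions, the sketch below fits a simple risk classifier to synthetic patient records. The features, data, and choice of algorithm are invented for the example; the interview does not say which methods Sensyne Health actually uses.

# Illustrative only: a toy ICU-admission risk model trained on synthetic records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
age = rng.normal(60, 15, n)                 # years
spo2 = rng.normal(94, 4, n)                 # blood oxygen saturation (%)
crp = rng.gamma(2.0, 20.0, n)               # inflammation marker (mg/L)

# Synthetic ground truth: risk rises with age and CRP, falls with SpO2.
logit = 0.04 * (age - 60) - 0.25 * (spo2 - 94) + 0.02 * (crp - 40)
icu = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, spo2, crp])
X_train, X_test, y_train, y_test = train_test_split(X, icu, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("predicted ICU risk for one patient:", round(model.predict_proba([[75, 88, 120]])[0, 1], 3))

Real systems work with millions of records and far richer features, but the principle is the same: learn which combinations of factors correlate with severe outcomes, then score new patients against that model.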

In addition to a powerful individual and population-level predictive capability, these analyses have also given us a great start in understanding the mechanisms of the disease that can in turn accelerate the development and testing of therapeutics.

That is just one example. There are many more. The last 12 months have been a time of rapidly accelerating progress for AI in healthcare, enabling more coherent diagnoses for poorly understood diseases, personalized treatment plans based on the genetics of treatment response, and of course, drug discovery.

AI is being used in labs right now to help discover novel new drugs and new uses for existing drugs, and doing so faster and cheaper than ever before, so freeing up precious resources.

There are many technical heavy lifts, but innovation in the field of AI right now is truly incredible, and is progressing exponentially. I think the bigger lift right now is trust. The AI industry has done a disservice to itself and science by relentlessly overhyping it, obfuscating the way it works, overstating its role in the overall healthcare equationand raising fears around what is being done with everyone's data.

We need to start talking about AI in terms of what it is really doing today and its role in science and care, and we need to be much more transparent about data: how we get it, use it, and generally ensure we are using it intelligently, responsibly and with respect for patient privacy.

Q: You say that some large organizations missed the mark on AI and ultimately dismantled public trust in these types of technologies. Please elaborate. You believe AI success hinges on solving non-glamorous issues like data normalization, interoperability, clinical workflow integration and change management. Why?

A: Companies spent billions of dollars amassing unimaginably vast data sets and promised super-intelligent systems that could predict, diagnose and treat disease better than humans. Public expectations were out of whack, but so were the expectations of the healthcare organizations that invested in these solutions.

I think Big Tech operating in healthcare has not been helpful overall, not just in setting unreasonable expectations, but often in putting profit before privacy, and by bringing the bias of seeing people as products ("users" to be monetized) rather than patients to be helped.

Public and industry confidence needs to be restored, and we need to correct the asymmetry between societal benefit and the ambitions of multinational technology platforms. In order to achieve that, the life sciences industry, clinicians, hospitals and patients need to know that data has been ethically sourced, and is secure, anonymous and being used for the direct benefit of the individuals who shared it.

Patients and provider organizations are rightfully concerned about how their data is secured, handled and kept private, and we have made it a top priority to build a business model with transparency at its center, being clear with stakeholders from the start about what data we are using and how it will be used.

As an industry, we have to be clear, not just about the breakthrough medical developments we achieve, but about the specific types of real-world evidence we are using (such as genetic markers, heart rates and MRI images) and how we protect it throughout the process.

Once we get these basics of trust and transparency right, we can begin to talk again about more aspirational plans for AI. This means investing in robust data storage solutions, collaborating with regulators and policymakers, and educating the wider industry on the power of anonymized data.

These kinds of initiatives and partnerships will ensure we are all meeting the highest standards and allow us to rebuild longer-term trust.

Q: In your experience, what kinds of results is healthcare seeing from using clinical AI to support medical research, therapeutic development, personalized care and population health-level analyses?

A: With AI, pharma companies can collect, store and analyze large data sets at a far quicker rate than by manual processes. This enables them to carry out research faster and more efficiently, based on data about genetic variation from many patients, and develop targeted therapies effectively. In addition, it gives a clearer view on how specific groups of patients with certain shared characteristics react to treatments, helping to precisely map the right quantities and doses of treatments to prescribe.

For example, do all patients with heart failure respond the same to the standard course of treatment? Clinical AI has told us the answer is no, by dividing patients into subgroups with similar traits and looking at the variations in treatment response.

You need AI to break a population down based on many traits, and groupings of traits, to get this kind of answer, because the level of complexity quickly becomes too cumbersome for human processing. In this case, sophisticated patient stratification was used to improve clinical trial design and ultimately ensure heart failure patients are receiving the right course of treatment.
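A minimal sketch of that kind of trait-based stratification is shown below, using k-means clustering on a synthetic cohort. The traits and the choice of algorithm are assumptions for illustration; the interview does not specify the method used in the heart-failure work.

# Illustrative only: stratify a synthetic heart-failure cohort into subgroups by shared traits.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Rows are patients; columns are example traits: age, ejection fraction (%), a BNP-like biomarker.
cohort = np.column_stack([
    rng.normal(68, 10, 600),
    rng.normal(40, 12, 600),
    rng.lognormal(7.0, 0.8, 600),
])

traits = StandardScaler().fit_transform(cohort)   # put traits on a common scale
subgroup = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(traits)

for label in range(3):
    members = cohort[subgroup == label]
    print(f"subgroup {label}: n={len(members)}, mean age={members[:, 0].mean():.1f}")

Treatment response can then be compared across the resulting subgroups, which is the kind of analysis the heart-failure example describes.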

Comprehensive analysis of de-identified patient data using AI and machine learning has the ability to drastically transform the healthcare industry. Ultimately, we want to prevent disease, and by having more information about why, how and in which people diseases develop, we can introduce preventative measures and treatments much earlier, sometimes even before a patient starts to show symptoms.

AI is also increasingly being used for operational purposes in hospitals. For example, during the pandemic, AI was used to predict demand for mechanical ventilators.

AI will continue to be a driving force behind future breakthroughs. There are still many challenges that lie ahead for personalized medicine, and still a way to go for it to be perfected, but as AI becomes more widely adopted in medicine, a future of workable, effective and personalized healthcare may certainly be achievable.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Follow this link:

How AI can truly advance healthcare and research, and where it's gone wrong - Healthcare IT News

Posted in Ai | Comments Off on How AI can truly advance healthcare and research, and where it’s gone wrong – Healthcare IT News

114 Milestones In The History Of Artificial Intelligence (AI) – Forbes

Posted: at 4:42 am

Sixty-five years ago, 10 computer scientists convened at Dartmouth College in Hanover, NH, for a workshop on artificial intelligence, defined a year earlier in the proposal for the workshop as "making a machine behave in ways that would be called intelligent if a human were so behaving."

It was the event that initiated AI as a research discipline, which grew to encompass multiple approaches, from the symbolic AI of the 1950s and 1960s to the statistical analysis and machine learning of the 1970s and 1980s to today's deep learning, the statistical analysis of big data. But the preoccupation with developing practical methods for making machines behave as if they were human emerged as early as seven centuries ago.

HAL (Heuristically programmed ALgorithmic computer) 9000, a sentient artificial general intelligence computer and star of the 1968 film 2001: A Space Odyssey

1308Catalan poet and theologian Ramon Llull publishes Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

1666Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art), following Ramon Llull in proposing an alphabet of human thought and arguing that all ideas are nothing but combinations of a relatively small number of simple concepts.

1726Jonathan Swift publishes Gulliver's Travels, which includes a description of the Engine, a machine on the island of Laputa (and a parody of Llull's ideas): "a Project for improving speculative Knowledge by practical and mechanical Operations." By using this "Contrivance," "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."

1755Samuel Johnson defines intelligence in A Dictionary of the English Language as Commerce of information; notice; mutual communication; account of things distant or discreet.

1763Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning.

1854George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations.

1865 Richard Millar Devens describes in the Cyclopdia of Commercial and Business Anecdotes how the banker Sir Henry Furnese profited by receiving and acting upon information prior to his competitors: Throughout Holland, Flanders, France, and Germany, he maintained a complete and perfect train of business intelligence.

1898At an electrical exhibition in the recently completed Madison Square Garden, Nikola Tesla demonstrates the worlds first radio-controlled vessel. The boat was equipped with, as Tesla described it, a borrowed mind.

1910Belgian lawyers Paul Otlet and Henri La Fontaine establish the Mundaneum where they wanted to gather together all the world's knowledge and classify it according to their Universal Decimal Classification.

1914The Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine, capable of king and rook against king endgames without any human intervention.

1921Czech writer Karel apek introduces the word "robot" in his play R.U.R. (Rossum's Universal Robots). The word "robot" comes from the word "robota" (work).

1925Houdina Radio Control releases a radio-controlled driverless car, travelling the streets of New York City.

1927The science-fiction film Metropolis is released. It features a robot double of a peasant girl, Maria, which unleashes chaos in Berlin of 2026it was the first robot depicted on film, inspiring the Art Deco look of C-3PO in Star Wars.

1929Makoto Nishimura designs Gakutensoku, Japanese for "learning from the laws of nature," the first robot built in Japan. It could change its facial expression and move its head and hands via an air pressure mechanism.

1937British science fiction writer H.G. Wells predicts that the whole human memory can be, and probably in short time will be, made accessible to every individual and that any student, in any part of the world, will be able to sit with his [microfilm] projector in his own study at his or her convenience to examine any book, any document, in an exact replica."

1943Warren S. McCulloch and Walter Pitts publish A Logical Calculus of the Ideas Immanent in Nervous Activity in the Bulletin of Mathematical Biophysics. This influential paper, in which they discussed networks of idealized and simplified artificial neurons and how they might perform simple logical functions, will become the inspiration for computer-based neural networks (and later deep learning) and their popular description as mimicking the brain.

1947Statistician John W. Tukey coins the term bit to designate a binary digit, a unit of information stored in a computer.

1949Edmund Berkeley publishes Giant Brains: Or Machines That Think in which he writes: Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill.These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.

1949Donald Hebb publishes Organization of Behavior: A Neuropsychological Theory in which he proposes a theory about learning based on conjectures regarding neural networks and the ability of synapses to strengthen or weaken over time.

1950Claude Shannons Programming a Computer for Playing Chess is the first published article on developing a chess-playing computer program.

1950Alan Turing publishes Computing Machinery and Intelligence in which he proposes the imitation game which will later become known as the Turing Test.

1951Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons.

1952Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own.

August 31, 1955The term artificial intelligence is coined in a proposal for a 2 month, 10 man study of artificial intelligence submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered as the official birthdate of the new field.

December 1955Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program, which eventually would prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.

1957Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times reported the Perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it a remarkable machine capable of what amounts to thought.

1957In the movie Desk Set, when a methods engineer (Spencer Tracy) installs the fictional computer EMERAC, the head librarian (Katharine Hepburn) tells her anxious colleagues in the research department: They cant build a machine to do our job; there are too many cross-references in this place.

1958Hans Peter Luhn publishes "A Business Intelligence System" in the IBM Journal of Research and Development. It describes an "automatic method to provide current awareness services to scientists and engineers."

1958John McCarthy develops programming language Lisp which becomes the most popular programming language used in artificial intelligence research.

1959: Arthur Samuel coins the term "machine learning," reporting on programming a computer "so that it will learn to play a better game of checkers than can be played by the person who wrote the program."

1959: Oliver Selfridge publishes "Pandemonium: A paradigm for learning" in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes a model for a process by which computers could recognize patterns that have not been specified in advance.

1959: John McCarthy publishes "Programs with Common Sense" in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages, with the ultimate objective of making programs "that learn from their experience as effectively as humans do."

1961: The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey.

1961: James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus.

1962: Statistician John W. Tukey writes in "The Future of Data Analysis": "Data analysis, and the parts of statistics which adhere to it, must... take on the characteristics of science rather than those of mathematics... data analysis is intrinsically an empirical science."

1964: Daniel Bobrow completes his MIT PhD dissertation, titled "Natural Language Input for a Computer Problem Solving System," and develops STUDENT, a natural language understanding computer program.

August 16, 1964: Isaac Asimov writes in the New York Times: "The I.B.M. exhibit at the [1964 World's Fair] is dedicated to computers, which are shown in all their amazing complexity, notably in the task of translating Russian into English. If machines are that smart today, what may not be in the works 50 years hence? It will be such computers, much miniaturized, that will serve as the brains of robots... Communications will become sight-sound and you will see as well as hear the person you telephone. The screen can be used not only to see the people you call but also for studying documents and photographs and reading passages from books."

1965: Herbert Simon predicts that "machines will be capable, within twenty years, of doing any work a man can do."

1965: Hubert Dreyfus publishes "Alchemy and AI," arguing that the mind is not like a computer and that there were limits beyond which AI would not progress.

1965: I.J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

1965: Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the computer program.

1965: Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi start working on DENDRAL at Stanford University. The first expert system, it automated the decision-making process and problem-solving behavior of organic chemists, with the general aim of studying hypothesis formation and constructing models of empirical induction in science.

1966: Shakey the robot is the first general-purpose mobile robot to be able to reason about its own actions. In a 1970 Life magazine article about this "first electronic person," Marvin Minsky is quoted saying with certitude: "In from three to eight years we will have a machine with the general intelligence of an average human being."

1968: The film 2001: A Space Odyssey is released, featuring HAL 9000, a sentient computer.

1968: Terry Winograd develops SHRDLU, an early natural language understanding computer program.

1969: Arthur Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. A learning algorithm for multi-layer artificial neural networks, it has contributed significantly to the success of deep learning in the 2000s and 2010s, once computing power had advanced sufficiently to accommodate the training of large networks.
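
In modern terms, backpropagation is the chain rule applied stage by stage: the error at the output is pushed backward through each layer to obtain the gradient of every weight. Below is a minimal numerical sketch for a two-stage network (one sigmoid hidden unit feeding one sigmoid output unit) trained by gradient descent on a squared-error loss; the input, target, initial weights, and learning rate are arbitrary values assumed for illustration:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x, target = 1.0, 0.0        # assumed training example
    w1, w2 = 0.5, -0.3          # assumed initial weights
    lr = 0.1

    for step in range(200):
        # Forward pass through both stages.
        h = sigmoid(w1 * x)
        y = sigmoid(w2 * h)
        loss = 0.5 * (y - target) ** 2

        # Backward pass: chain rule, output stage first, then the hidden stage.
        dL_dy = y - target
        dy_dz2 = y * (1 - y)
        dL_dw2 = dL_dy * dy_dz2 * h
        dL_dh = dL_dy * dy_dz2 * w2
        dh_dz1 = h * (1 - h)
        dL_dw1 = dL_dh * dh_dz1 * x

        w1 -= lr * dL_dw1
        w2 -= lr * dL_dw2

    print(round(loss, 4))  # the loss shrinks as gradients flow back through both stages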

1969: Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, highlighting the limitations of simple neural networks. In an expanded edition published in 1988, they responded to claims that their 1969 conclusions significantly reduced funding for neural network research: "Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories... by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others."

1970: The first anthropomorphic robot, the WABOT-1, is built at Waseda University in Japan. It consisted of a limb-control system, a vision system, and a conversation system.

1971: Michael S. Scott Morton publishes Management Decision Systems: Computer-Based Support for Decision Making, summarizing his studies of the various ways by which computers and analytical models could assist managers in making key decisions.

1971: Arthur Miller writes in The Assault on Privacy that "Too many information handlers seem to measure a man by the number of bits of storage capacity his dossier will occupy."

1972: MYCIN, an early expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University.

1973: James Lighthill reports to the British Science Research Council on the state of artificial intelligence research, concluding that "in no part of the field have discoveries made so far produced the major impact that was then promised," leading to drastically reduced government support for AI research.

1976: Computer scientist Raj Reddy publishes "Speech Recognition by Machine: A Review" in the Proceedings of the IEEE, summarizing the early work on Natural Language Processing (NLP).

1978: The XCON (eXpert CONfigurer) program, a rule-based expert system assisting in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's requirements, is developed at Carnegie Mellon University.

1979: The Stanford Cart successfully crosses a chair-filled room without human intervention in about five hours, becoming one of the earliest examples of an autonomous vehicle.

1979: Kunihiko Fukushima develops the neocognitron, a hierarchical, multilayered artificial neural network.

1980: I.A. Tjomsland applies Parkinson's First Law to the storage industry: "Data expands to fill the space available."

1980: WABOT-2, a musician humanoid robot able to communicate with a person, read a musical score, and play tunes of average difficulty on an electronic organ, is built at Waseda University in Japan.

1981: The Japanese Ministry of International Trade and Industry budgets $850 million for the Fifth Generation Computer project. The project aimed to develop computers that could carry on conversations, translate languages, interpret pictures, and reason like human beings.

1981: The Chinese Association for Artificial Intelligence (CAAI) is established.

1984: Electric Dreams is released, a film about a love triangle between a man, a woman, and a personal computer.

1984: At the annual meeting of the AAAI, Roger Schank and Marvin Minsky warn of the coming "AI Winter," predicting an imminent bursting of the AI bubble (which did happen three years later), similar to the reduction in AI investment and research funding in the mid-1970s.

1985: The first business intelligence system is developed for Procter & Gamble by Metaphor Computer Systems to link sales information and retail scanner data.

1986: The first driverless car, a Mercedes-Benz van equipped with cameras and sensors, built at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives up to 55 mph on empty streets.

October 1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams publish "Learning representations by back-propagating errors," in which they describe "a new learning procedure, back-propagation, for networks of neuron-like units."

1987: The video Knowledge Navigator, accompanying Apple CEO John Sculley's keynote speech at Educom, envisions a future in which knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information.

1988: Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems. His 2011 Turing Award citation reads: "Judea Pearl created the representational and computational foundation for the processing of information under uncertainty. He is credited with the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences."
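
To make the formalism concrete, the sketch below performs exact inference by enumeration in a toy Bayesian network (the textbook rain/sprinkler/wet-grass example), computing P(Rain | WetGrass). The network structure and probability tables are standard illustrative assumptions, not taken from Pearl's book, and real applications would use the dedicated inference algorithms the citation refers to:

    from itertools import product

    # Toy network: Rain -> Sprinkler, and (Sprinkler, Rain) -> WetGrass.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(Sprinkler | Rain=True)
                   False: {True: 0.4, False: 0.6}}    # P(Sprinkler | Rain=False)
    P_wet_true = {(True, True): 0.99, (True, False): 0.9,   # P(WetGrass=True | Sprinkler, Rain)
                  (False, True): 0.8, (False, False): 0.0}

    def joint(rain, sprinkler, wet):
        p = P_wet_true[(sprinkler, rain)]
        return P_rain[rain] * P_sprinkler[rain][sprinkler] * (p if wet else 1.0 - p)

    # P(Rain=True | WetGrass=True): sum the joint over the hidden variable and normalize.
    numer = sum(joint(True, s, True) for s in (True, False))
    denom = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(round(numer / denom, 3))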

1988: Rollo Carpenter develops the chatbot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner." It is an early attempt at creating artificial intelligence through human interaction.

1988: Members of the IBM T.J. Watson Research Center publish "A statistical approach to language translation," heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader shift toward machine learning based on statistical analysis of known examples rather than comprehension and understanding of the task at hand. (IBM's project Candide, which successfully translated between English and French, was based on 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament.)

1988: Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book Perceptrons. In "Prologue: A View from 1988," they write: "One reason why progress has been so slow in this field is that researchers unfamiliar with its history have continued to make many of the same mistakes that others have made before them."

1989: Yann LeCun and other researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network, recognizing handwritten ZIP codes. Given the hardware limitations at the time, it took about three days to train the network, still a significant improvement over earlier efforts.

March 1989: Tim Berners-Lee writes "Information Management: A Proposal" and circulates it at CERN.

1990: Rodney Brooks publishes "Elephants Don't Play Chess," proposing a new approach to AI, building intelligent systems, specifically robots, from the ground up and on the basis of ongoing physical interaction with the environment: "The world is its own best model... The trick is to sense it appropriately and often enough."

October 1990: Tim Berners-Lee begins writing code for a client program, a browser/editor he calls WorldWideWeb, on his new NeXT computer.

1993: Vernor Vinge publishes "The Coming Technological Singularity," in which he predicts that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

September 1994: BusinessWeek publishes a cover story on "Database Marketing": "Companies are collecting mountains of information about you, crunching it to predict how likely you are to buy a product, and using that knowledge to craft a marketing message precisely calibrated to get you to do so... many companies believe they have no choice but to brave the database-marketing frontier."

1995: Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web.

1997: Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network used today in handwriting recognition and speech recognition.
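
As a rough sketch of what the gating mechanism looks like, the code below runs a single LSTM cell over a toy sequence: input, forget, and output gates decide what to write into, keep in, and read out of the cell state, which is what lets the network carry information across long time gaps. The sizes and random weights are arbitrary assumptions; in practice one would use a library implementation such as those in PyTorch or TensorFlow rather than hand-rolling the cell:

    import numpy as np

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM time step with stacked gate pre-activations."""
        n = h_prev.shape[0]
        z = W @ x + U @ h_prev + b                 # shape (4*n,)
        i = 1.0 / (1.0 + np.exp(-z[0*n:1*n]))      # input gate
        f = 1.0 / (1.0 + np.exp(-z[1*n:2*n]))      # forget gate
        o = 1.0 / (1.0 + np.exp(-z[2*n:3*n]))      # output gate
        g = np.tanh(z[3*n:4*n])                    # candidate values
        c = f * c_prev + i * g                     # cell state: the long-term memory
        h = o * np.tanh(c)                         # hidden state passed to the next step
        return h, c

    rng = np.random.default_rng(0)
    n, d = 3, 2                                    # hidden size and input size (assumed)
    W = rng.normal(size=(4 * n, d))
    U = rng.normal(size=(4 * n, n))
    b = np.zeros(4 * n)
    h, c = np.zeros(n), np.zeros(n)
    for x in rng.normal(size=(5, d)):              # a toy sequence of 5 input vectors
        h, c = lstm_step(x, h, c, W, U, b)
    print(h.round(3))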

October 1997: Michael Cox and David Ellsworth publish "Application-controlled demand paging for out-of-core visualization" in the Proceedings of the IEEE 8th Conference on Visualization. They start the article with "Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources." It is the first article in the ACM digital library to use the term "big data."

1997: Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.

1998: The first Google index has 26 million Web pages.

1998: Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot.

1998: Yann LeCun, Yoshua Bengio, and others publish papers on the application of neural networks to handwriting recognition and on optimizing backpropagation.

October 1998: K.G. Coffman and Andrew Odlyzko publish "The Size and Growth Rate of the Internet." They conclude that "the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U.S. will overtake voice traffic around the year 2002 and will be dominated by the Internet."

2000: Google's index of the Web reaches the one-billion mark.

2000: MIT's Cynthia Breazeal develops Kismet, a robot that could recognize and simulate emotions.

2000: Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting.

October 2000: Peter Lyman and Hal R. Varian at UC Berkeley publish "How Much Information?" It is the first comprehensive study to quantify, in computer storage terms, the total amount of new and original information (not counting copies) created in the world annually and stored in four physical media: paper, film, optical (CDs and DVDs), and magnetic. The study finds that in 1999, the world produced about 1.5 exabytes of unique information, or about 250 megabytes for every man, woman, and child on earth. It also finds that a vast amount of unique information is created and stored by individuals (what it calls the "democratization of data") and that not only is digital information production the largest in total, it is also the most rapidly growing. Calling this finding "dominance of digital," Lyman and Varian state that "even today, most textual information is born digital, and within a few years this will be true for images as well." A similar study conducted in 2003 by the same researchers found that the world produced about 5 exabytes of new information in 2002 and that 92% of the new information was stored on magnetic media, mostly in hard disks.
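
The per-capita figure is straightforward arithmetic; here is a quick check, assuming a 1999 world population of roughly six billion (the population figure is an assumption for this sketch, not a number quoted from the study):

    # 1.5 exabytes spread across ~6 billion people is roughly 250 MB per person.
    total_bytes = 1.5 * 10**18          # 1.5 exabytes (decimal definition)
    population = 6.0e9                  # assumed 1999 world population
    print(total_bytes / population / 10**6)   # prints 250.0 (megabytes per person)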

2001: A.I. Artificial Intelligence is released, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love.

2003: Paro, a therapeutic robot baby harp seal designed by Takanori Shibata of the Intelligent System Research Institute of Japan's AIST, is selected as a "Best of COMDEX" finalist.

2004: The first DARPA Grand Challenge, a prize competition for autonomous vehicles, is held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route.

2006: Oren Etzioni, Michele Banko, and Michael Cafarella coin the term "machine reading," defining it as "an inherently unsupervised autonomous understanding of text."

2006: Geoffrey Hinton publishes "Learning Multiple Layers of Representation," summarizing the ideas that have led to "multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it," i.e., the new approaches to deep learning.

2006: The Dartmouth Artificial Intelligence Conference: The Next Fifty Years (AI@50) commemorates the 50th anniversary of the 1956 workshop. The conference director concludes: "Although AI has enjoyed much success over the last 50 years, numerous dramatic disagreements remain within the field. Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline."

2007: Fei-Fei Li and colleagues at Princeton University start to assemble ImageNet, a large database of annotated images designed to aid in visual object recognition software research.

2007: John F. Gantz, David Reinsel, and other researchers at IDC release a white paper titled "The Expanding Digital Universe: A Forecast of Worldwide Information Growth through 2010." It is the first study to estimate and forecast the amount of digital data created and replicated each year. IDC estimates that in 2006, the world created 161 exabytes of data and forecasts that between 2006 and 2010, the information added annually to the digital universe will increase more than sixfold to 988 exabytes, doubling roughly every 18 months. According to the 2010 and 2012 releases of the same study, the amount of digital data created annually surpassed this forecast, reaching 1,227 exabytes in 2010 and growing to 2,837 exabytes in 2012. In 2020, IDC estimated that 59,000 exabytes of data would be created, captured, copied, and consumed in the world that year.

2009: Hal Varian, Google's Chief Economist, tells the McKinsey Quarterly: "I keep saying the sexy job in the next ten years will be statisticians. People think I'm joking, but who would've guessed that computer engineers would've been the sexy job of the 1990s? The ability to take data, to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it, that's going to be a hugely important skill in the next decades."

See more here:

114 Milestones In The History Of Artificial Intelligence (AI) - Forbes
