With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easy to parse for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set the most extensive collection of machine-readable coronavirus literature to date. Kratsios characterized the project as a call to action for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To guide researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to draw up high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
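Because the collection is machine readable, even a few lines of Python are enough to start slicing it. The sketch below is illustrative only: it assumes the dataset has been downloaded locally and that it ships a metadata file named metadata.csv with "title" and "abstract" columns, which are assumptions about the release layout rather than details taken from the article.

```python
# Illustrative sketch of parsing the CORD-19 collection for downstream ML work.
# Assumes a local download with a "metadata.csv" file containing "title" and
# "abstract" columns; the file and column names are assumptions, not details
# confirmed by the article.
import pandas as pd

meta = pd.read_csv("metadata.csv", low_memory=False)

# Keep only entries whose abstract is available for text mining.
with_abstract = meta.dropna(subset=["abstract"])

# A crude keyword filter standing in for a real relevance model, e.g. to pull
# papers touching one of the WHO/NASEM priority topics such as incubation.
incubation = with_abstract[with_abstract["abstract"].str.contains("incubation", case=False)]
print(f"{len(incubation)} of {len(meta)} records mention incubation in the abstract")
```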

To continue reading this article, click here.

Go here to read the rest:
With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...

University Students Are Learning To Collaborate on AI Projects – Forbes

Penn State's Nittany AI Challenge is teaching students the true meaning of collaboration in the age of Artificial Intelligence.

Nittany AI Challenge LionPlanner

This year, artificial intelligence is the buzzword. On university campuses, students fresh out of high school are checking the latest computer science course offerings to see if they can take classes in machine learning. The reality of the age of AI has caught many university administrators' attention: to be successful, everyone, no matter their job, skill set, or major, will at some point encounter AI in their work and their life. Penn State saw the benefits of working on AI projects early, specifically when it comes to teamwork and collaboration. Since 2017, its annual Nittany AI Challenge has helped teach students what it means to collaborate in the age of artificial intelligence.

Every university has challenges. Students bring a unique perspective and understanding of these challenges. The Nittany AI Challenge was created to provide a framework and support structure to enable students to form teams and collaborate on ideas that could address a problem or opportunity, using AI technology as the enabler. The Nittany AI Challenge is our innovation engine, ultimately putting students on stage to demonstrate how AI and machine learning can be leveraged to have a positive impact on the university.

The Nittany AI Challenge runs for 8 months each year. It has multiple phases, including the idea phase, the prototype phase, and the MVP phase. In the end, there's a pitch competition in which 5 to 10 teams compete for a pool of $25,000. The challenge incentivizes students to keep going by awarding the best teams at each phase of the competition another combined total of $25,000 during the 8 months of competition. By the time pitching comes around for the top 5 to 10 teams, these teams have not only figured out how they can work together as a team, but have also experienced what it means to receive funding.

This year, the Nittany AI Challenge has expanded from asking students to solve the university's problems using AI to broader categories based on the theme of AI for Good. Students are competing in additional categories such as humanitarianism, healthcare, and sustainability/climate change.

In the last two years, students formed teams among friends within their own circles. As the competition has matured, there's now an online system that allows students to sign up for teams.

Students often form teams with students from different backgrounds and majoring in different disciplines, based on their shared interest in a project. Christie Warren, the app designer on the LionPlanner team, helped her team create a four-year degree planning tool that won the 2018 competition. She credits the competition with giving her a clear path to a career in app design and teaching her how to collaborate with developers.

"For me, the biggest learning curve was learning to work alongside developers: when to start going into the high-fidelity designs, when to wait for people to figure out the features that need to be developed, and so on. Just looking at my designs and being really open to improvements and going through iterations of the design with the team helped me overcome the learning curve."

Early on, technology companies such as Microsoft, Google Cloud, IBM Watson, and Amazon Web Services recognized the value of an on-campus AI competition such as the Nittany AI Challenge in providing teamwork education to students before they embark on internships with technology companies. They've been sponsoring the competition since its inception.

"Both the students and those of us from Microsoft benefit from the time working together, in that we learn about each other's culture, needs and aspirations. Challenges like the Nittany AI Challenge highlight that studying in higher education should be a mix of learning and fun. If we can help the students learn and enjoy the experience, then we also help them foster a positive outlook about their future of work."

While having fun, some students, like Michael D. Roos, project manager and backend developer on the LionPlanner team, have seen synergy between their internships and their Nittany AI Challenge projects. He credits the competition with giving him a pathway to success beyond simply a college education. He's a lot more confident stepping out into the real world, whether it's working for a startup or a large technology company, because of the experience gained.

"I was doing my internship with Microsoft during part of the competition. Some of the technology skills I learned at my internship I could then apply to my project for the competition. Also, having the cumulative experience of working on the project for the Nittany AI competition before going into my internship helped me with my internship. Even though I was interning at Microsoft, my team had startup vibes similar to the competition, and my role on the team was similar to my role on the project. I felt I had a head start in that role because of my experience in the competition."

One of the biggest myths the Nittany AI Challenge has helped debunk is that AI implementations require only the skills of technologists. While computer science students who take a keen interest in machine learning and AI are central to every project inside the Nittany AI Challenge, it's often the visionary project managers, the creative designers, and the students majoring in other disciplines such as healthcare, biological sciences, and business who end up making the most impactful contributions to the team.

"The AI Alliance makes the challenge really accessible. For people like me who don't know AI, we can learn AI along the way."

The LionPlanner team, which won the competition in 2018, attributes its success mainly to the outstanding design that won over the judges. Christie, the app designer on the team, credits her success to the way the team collaborated, which enabled her to communicate with developers effectively.

Every member of the Nyansapo team, which is trying to bring English education to remote parts of Kenya via NLP-based learning software, attributes the team's success to the energy and motivation behind the project's vision. Because everyone feels strongly about the vision, even though it is one of the biggest teams in the competition, everyone is pulling together and collaborating.

"I really like to learn by doing. Everybody on the team joined, not just because they had something to offer, but because the vision was exciting. We are all behind this problem of education inequality in Kenya. We all want to get involved to solve this problem. We are excited enough to want to go the extra step."

Not only does the Nittany AI challenge teach students the art of interdisciplinary collaboration, but it also teaches students time management, stress management, and how to overcome difficulties. During the competition, students are often juggling difficult coursework, internships, and other extracurricular activities. They often feel stressed and overwhelmed. This can pose tremendous challenges for team communication. But, as many students pointed out to me, these challenges are opportunities to learn how to work together.

"There was a difficult moment yesterday in between my classes, where I had to schedule a meeting with Edward to discuss the app interface later in the day; at times, everything can feel a bit chaotic. But in the back of my head, when I think about the vision of our project, how much I'm learning on the project, and how I'm working with all my friends, these are the things that keep me going even through hard times."

One of the projects from the Nittany AI Challenge that the university is integrating into its systems is the LionPlanner tool. It uses AI algorithms to help students match their profiles with clubs and extracurricular activities they might be interested in. It also helps students plan their courses, customizing their degree so they can complete it on time while keeping its cost as low as possible.
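The article does not describe how LionPlanner's matching works under the hood, but the general idea of matching a student profile to activities can be sketched with off-the-shelf tools. The toy example below uses TF-IDF and cosine similarity from scikit-learn on made-up club descriptions; it is a generic illustration, not the team's actual algorithm.

```python
# Toy profile-to-club matching via TF-IDF + cosine similarity. Generic sketch
# only; the LionPlanner team's real approach is not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clubs = {
    "Robotics Club": "build robots, embedded systems, machine learning projects",
    "Debate Society": "public speaking, argumentation, current events",
    "Data Science Society": "python, statistics, kaggle competitions, data visualization",
}
student_profile = "interested in python, statistics and machine learning"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(clubs.values()) + [student_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank clubs by similarity to the student's stated interests.
for name, score in sorted(zip(clubs, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```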

The students who worked on the project are now working to create a Prospective Student Planning Tool that can integrate into the University Admissions Office systems to be used by transfer students.

Currently, in the U.S., there's a skills gap of almost 1.5 million high-tech jobs. Companies are having a hard time hiring people who have the skills to work in innovative companies. We now have coding camps, apprenticeships, and remote coding platforms.

Why not also have university-sponsored AI challenges where students can demonstrate their potential and abilities to collaborate?

The Nittany AI Challenge from Penn State presents a unique solution to a problem many employers are trying to solve in the age of innovation. By sitting in the audience as judges, companies can follow the teams' progress and watch students shine in their respective areas. Students are not pitching their skills; they are pitching their work products. They are showing what they can do, in real time, over 8 months.

This could be a new way for companies to recruit. We have NFL drafts. Why not have drafts for star players on these AI teams that work especially well with others?

This year, Penn State introduced the Nittany AI Associates program where students can continue their work from the Nittany AI Challenge so that they can develop their ideas further.

So while the challenge is the "Innovation Engine", the Nittany AI Associates program provides students the opportunity to work on managed projects with an actual client, funding for the students to reduce their debt (paid internships), and a low-cost, low-risk avenue for the university (and other clients) to innovate, while providing AI knowledge transfer to client staff (the student becomes the teacher).

In the age of AI, education is becoming more multidisciplinary. When higher education institutions can evolve the way they teach their students to enable both innovation and collaboration, the potential they unleash in their graduates can have an exponential effect on their careers and the companies that hire them. Creating competitions and collaborative work projects such as the Nittany AI Challenge, which foster win-win thinking within the university, might just be the key to the type of innovation we need in higher education to keep up in the age of AI.

Original post:
University Students Are Learning To Collaborate on AI Projects - Forbes

Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs – Benzinga

AUSTIN, Texas, March 23, 2020 /PRNewswire-PRWeb/ -- Novi Labs ("Novi") today announced the release of Novi Prediction Engine version 2.0. It provides critical economic data to E&P workflows such as well planning or acquisition & divestitures. Novi customers can now run a wide range of large-scale scenarios in minutes and get immediate feedback on the economic feasibility of each plan. As the industry faces price headwinds, the ability to quickly and easily evaluate hundreds of scenarios allows operators to allocate capital efficiently.

In addition to the economic outputs, Novi Prediction Engine 2.0 also includes new features targeting enhanced usability and increased efficiency. Novi is now publishing confidence intervals as a standard output for every prediction. This allows customers to understand how confident the model is in each prediction it makes, which is a critical decision-making criterion. A video demonstration of Novi Prediction Engine version 2.0 is available at https://novilabs.com/prediction-engine-v2/.
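The release does not say how Novi computes its confidence intervals. One common way to produce prediction intervals for this kind of regression problem is quantile regression; the sketch below trains three quantile gradient-boosting models on synthetic well data to produce P10/P50/P90-style bounds. It is a generic illustration of the concept, not Novi's implementation.

```python
# Generic sketch of attaching confidence bounds to a prediction via quantile
# regression. Synthetic data and feature names; NOT Novi's method, which the
# press release does not describe.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 3))                          # e.g. lateral length, spacing, proppant (made up)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 2.0, 500)   # synthetic "production"

# One model per quantile: lower bound, median, upper bound.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
          for q in (0.1, 0.5, 0.9)}

x_new = np.array([[5.0, 2.0, 7.0]])
low, mid, high = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted range: {low:.1f} to {high:.1f}, central estimate {mid:.1f}")
```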

"With the integration of economic outputs and confidence intervals into Novi Prediction Engine, customers have increased leverage, transparency and certainty in what the Novi models are providing in support of their business decisions. This form of rapid scenario driven testing that is unlocked by the Novi platform is vital in today's uncertain market," said Scott Sherwood, Novi's CEO.

About Novi Labs: Novi Labs, Inc. ("Novi") is the leading developer of artificial intelligence driven business applications that help the oil & gas industry optimize the economic value of drilling programs and acquisition & divestiture decisions. Leveraging cutting-edge data science, Novi delivers intuitive analytics that simplify complex decisions with the actionable data and insights needed to optimize capital allocation. Novi was founded in 2014 and is headquartered in Austin, TX. For more information, please visit http://www.novilabs.com.

SOURCE Novi Labs

See the rest here:
Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs - Benzinga

Artificial intelligence for fraud detection is bound to save billions – ZME Science

Fraud mitigation is one of the most sought-after artificial intelligence (AI) services because it can provide an immediate return on investment. Already, many companies are experiencing lucrative profits thanks to AI and machine learning (ML) systems that detect and prevent fraud in real-time.

According to a new report, Highmark Inc.'s Financial Investigations and Provider Review (FIPR) department generated $260 million in savings that would have otherwise been lost to fraud, waste, and abuse in 2019. In the last five years, the company saved $850 million.

"We know the overwhelming majority of providers do the right thing. But we also know year after year millions of health care dollars are lost to fraud, waste and abuse," said Melissa Anderson, executive vice president and chief audit and compliance officer, Highmark Health. "By using technology and working with other Blue Plans and law enforcement, we have continually evolved our processes and are proud to be among the best nationally."

FIPR detects fraud across its clients' services with the help of an internal team made up of investigators, accountants, and programmers, as well as seasoned professionals with an eye for unusual activity, such as registered nurses and former law enforcement agents. Human audits performed to detect unusual claims and assess the appropriateness of provider payments are used as training data for AI systems, which can adapt and react more rapidly to suspicious changes in consumer behavior.

As fraudulent actors have become increasingly aggressive and cunning with their tactics, organizations are looking to AI to mitigate rising threats.

"We know it is much easier to stop these bad actors before the money goes out the door than to pay and then have to chase them," said Kurt Spear, vice president of financial investigations at Highmark Inc.

Elsewhere, Teradata, an AI firm specializing in fraud detection solutions for banks, claims in a case study that it helped Danske Bank reduce its false positives by 60% and increase real fraud detection by 50%.

Other service operators are looking to AI fraud detection with a keen eye, especially in the health care sector. A recent survey performed by Optum found that 43% of health industry leaders said they strongly agree that AI will become an integral part of detecting telehealth fraud, waste, or abuse in reimbursement.

In fact, AI spending is growing tremendously, with total operating spending set to reach $15 billion by 2024; the most sought-after solutions are network optimization and fraud mitigation. According to the Association of Certified Fraud Examiners' (ACFE) inaugural Anti-Fraud Technology Benchmarking Report, the amount organizations spend on AI and machine learning to reduce online fraud is expected to triple by 2021.

Mitigating fraud in healthcare would be a boon for an industry that is plagued with many structural inefficiencies.

The United States spends about $3.5 trillion on healthcare-related services every year. This staggering sum corresponds to about 18% of the country's GDP and is more than twice the average among developed countries. However, despite this tremendous spending, healthcare service quality is lacking. According to a now-famous 2017 study, the U.S. has fewer hospital beds and doctors per capita than any other developed country.

A 2019 study found that the country's healthcare system is incredibly inefficient, with roughly 25% of all its spending essentially going to waste: at least $760 billion annually in the best-case scenario, and up to $935 billion.

Most money is wasted on unnecessary administrative complexity, including billing and coding waste, which alone is responsible for $265.6 billion annually. Drug pricing is another major source of waste, accounting for around $240 billion. Finally, over-treatment and failures of care delivery incur another $300 billion in wasted costs.

And even these astronomical costs may be underestimated. According to management firm Numerof and Associates, the 25% waste estimate might be conservative. Instead, the firm believes that as much as 40% of the countrys healthcare spending is wasted, mostly due to administrative complexity. The firm adds that fraud and abuse account for roughly 8% of waste in healthcare.

Most cases of fraud in the healthcare sector are committed by organized crime groups and by a small fraction of dishonest healthcare providers.

According to the National Healthcare Anti-Fraud Association, the most common types of healthcare fraud in the United States are:

Traditionally, the most prevalent method for fraud management has been human-generated rule sets. To this day, this is the most common practice, but thanks to a quantum leap in computing and big data, AI-based solutions built on machine learning algorithms are becoming increasingly appealing and, most importantly, practical.

But what is machine learning anyway? Machine learning refers to algorithms that are designed to learn the way humans do and to continuously refine that learning over time without human supervision. The accuracy of an algorithm's output can be improved continuously by feeding it data and information in the form of observations and real-world interactions.

In other words, machine learning is the science of getting computers to act without being explicitly programmed.

There are many kinds of machine learning algorithms, depending on the requirements of each situation and industry, and hundreds of new ones are published every day. They're typically grouped by:

In a healthcare fraud analytics context, machine learning eliminates the use of preprogrammed rule sets, even those of phenomenal complexity.

Machine learning enables companies to efficiently determine what transactions or set of behaviors are most likely to be fraudulent, while reducing false positives.

In an industry where there can be billions of different transactions on a daily basis, AI-based analytics can be an amazing fit thanks to their ability to automatically discover patterns across large volumes of data.

The process itself can be complex since the algorithms have to interpret patterns in the data and apply data science in real-time in order to distinguish between normal behavior and abnormal behavior.

This can be a problem since an improper understanding of how AI works and fraud-specific data science techniques can lead you to develop algorithms that essentially learn to do the wrong things. Just like people can learn bad habits, so too can a poorly designed machine learning model.

In order for online fraud detection based on AI technology to succeed, these platforms need to check three very important boxes.

First, supervised machine learning algorithms have to be trained and fine-tuned on decades' worth of transaction data to keep false positives to a minimum and improve reaction time. This is easier said than done, because the data needs to be structured and properly labeled; depending on the size of the project, this alone could take staff years.

Secondly, unsupervised machine learning needs to keep up with increasingly sophisticated forms of online fraud; after all, AI is used by both auditors and fraudsters. And, finally, for AI fraud detection platforms to scale, they require a large-scale, universal network of activity data (i.e., transactions, filed documents, etc.) to scale the ML algorithms and improve the accuracy of fraud detection scores.
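As a concrete illustration of the unsupervised side described above, the sketch below flags anomalous claims with an isolation forest on synthetic transaction features. It is a minimal, generic example, not Highmark's, Teradata's, or any other vendor's pipeline.

```python
# Minimal unsupervised fraud-flagging sketch on synthetic claims data.
# IsolationForest scores how isolated each record is; the most isolated ones
# are flagged for human review. Not any specific vendor's system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: claim amount, claims filed per day, hour of submission (all synthetic).
normal = rng.normal(loc=[50, 1, 12], scale=[20, 0.5, 4], size=(5000, 3))
suspicious = rng.normal(loc=[900, 8, 3], scale=[100, 2, 1], size=(20, 3))
claims = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)             # -1 means "anomalous, review this one"
scores = model.decision_function(claims)  # lower score = more anomalous

print("flagged for review:", int((flags == -1).sum()), "of", len(claims), "claims")
```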

According to a new market research report released earlier this year, the healthcare fraud analytics market is projected to reach $4.6 billion by 2025 from $1.2 billion in 2020.

This growth is attributed to increasingly frequent and complex fraudulent activity in the healthcare sector.

In order to tackle rising healthcare fraud, companies offer various analytics solutions that flag fraudulent activity. Some are rule-based models, but AI-based technologies are expected to form the backbone of all types of analytics used in the future. These include descriptive, predictive, and prescriptive analytics.

Some of the most important companies operating today in the healthcare fraud analytics market include IBM Corporation (US), Optum (US), SAS Institute (US), Change Healthcare (US), EXL Service Holdings (US), Cotiviti (US), Wipro Limited (Wipro) (India), Conduent (US), HCL (India), Canadian Global Information Technology Group (Canada), DXC Technology Company (US), Northrop Grumman Corporation (US), LexisNexis Group (US), and Pondera Solutions (US).

That being said, there is a wide range of options in place today to prevent fraud. However, the evolving landscape of e-commerce and hacking poses new challenges all the time, and keeping up requires innovation that can respond rapidly to fraud. The common denominator, from payment fraud to abuse, seems to be machine learning, which can easily scale to meet the demands of big data with far more flexibility than traditional methods.

Original post:
Artificial intelligence for fraud detection is bound to save billions - ZME Science

How Bruce K. Shibuya Is Changing The Game Of Business Intelligence In The Auto Industry – Yahoo Finance

SANTA CLARA, CA / ACCESSWIRE / March 23, 2020 / Bruce K. Shibuya has an impressive resume, but one aspect stands out above the rest: he's an expert in business intelligence, and he's proving that day after day. Bruce Shibuya's a mover and shaker in the automobile industry and has transformed the approach to artificial intelligence and machine learning.

Using predictive analytics and quality applications, Bruce K. Shibuya is working to change how business is done in auto manufacturing plants. Bruce K. Shibuya works to use the data collected to identify unusual trends. This allows Shibuya and his team to find areas of manufacturing that aren't working well and change them to create better business outcomes.

Instead of taking the standard approach of looking at what's happening currently in manufacturing, Bruce K. Shibuya works to use historical data to understand what created the current manufacturing situation within an industry. Armed with this information, he's able to make decisions that positively affect the manufacturing plant moving forward.

In order to move forward with the technology currently available, Bruce K. Shibuya believes it's time to focus on using data to inform machine learning. Machine learning is a relatively new field of artificial intelligence, and few people are pioneering the charge like Shibuya.

Using the approach of analyzing historical data, Bruce K. Shibuya is able to solve manufacturing, design, and supply chain issues. Problems within manufacturing that typically take months to solve can be remedied in days.

As the Senior Director of Quality Engineering at Jabil, Bruce K. Shibuya's unique approach to business intelligence, artificial intelligence, and machine learning is being used to make widespread changes in the auto industry.

This is nothing new for Bruce K. Shibuya. In 2004, Hyundai outranked Toyota in JD Power & Associates for the first time ever, while Bruce K. Shibuya was serving as the vice president of quality at Hyundai. Shibuya's business intelligence program was built in partnership with Microsoft and continues to inform business decisions in the auto industry today.

After serving as an executive engineer for Toyota, Bruce K. Shibuya was awarded the Toyota Executive Management Award for attention to detail.

The auto industry is changing quickly, in no small part due to contributions from Bruce K. Shibuya. As artificial intelligence and machine learning continue to play large roles in the auto development process, it's expected that the contributions from Bruce K. Shibuya will continue to prove invaluable.

CONTACT:

Caroline Hunter, Web Presence, LLC, +1 786-551-9491

SOURCE: Web Presence, LLC

View source version on accesswire.com: https://www.accesswire.com/582175/How-Bruce-K-Shibuya-Is-Changing-The-Game-Of-Business-Intelligence-In-The-Auto-Industry

See the original post here:
How Bruce K. Shibuya Is Changing The Game Of Business Intelligence In The Auto Industry - Yahoo Finance

Google open-sources framework that reduces AI training costs by up to 80% – VentureBeat

Google researchers recently published a paper describing a framework, SEED RL, that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn't previously compete with large AI labs.

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

SEED RL, which is based on Google's TensorFlow 2.0 framework, features an architecture that takes advantage of graphics cards and tensor processing units (TPUs) by centralizing model inference. To avoid data transfer bottlenecks, it performs AI inference centrally with a learner component that trains the model using input from distributed inference. The target model's variables and state information are kept local, while observations are sent to the learner at every environment step, and latency is kept to a minimum thanks to a network library based on the open source universal RPC framework.

SEED RL's learner component can be scaled across thousands of cores (e.g., up to 2,048 on Cloud TPUs), and the number of actors, which iterate between taking steps in the environment and running inference on the model to predict the next action, can scale up to thousands of machines. One algorithm, V-trace, predicts an action distribution from which an action can be sampled, while another, R2D2, selects an action based on the predicted future value of that action.
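The real SEED RL implementation is built on TensorFlow 2.0, gRPC streaming and TPUs, but the architectural idea can be caricatured in a few lines: actors never run the policy network themselves; they ship observations to a central learner, which runs inference and returns actions. The sketch below uses Python threads and queues as stand-ins for the networking layer and a random policy as a stand-in for the neural network; it is an illustration of the pattern, not the SEED RL API.

```python
# Caricature of SEED RL's centralized-inference pattern: actors send
# observations to one learner, which chooses actions and sends them back.
# Queues stand in for gRPC streams; the "policy" is a random stub.
import queue
import random
import threading

NUM_ACTORS = 4
obs_queue = queue.Queue()                                   # actor -> learner
act_queues = {i: queue.Queue() for i in range(NUM_ACTORS)}  # learner -> actor

def learner():
    while True:
        actor_id, obs = obs_queue.get()
        if actor_id is None:                 # shutdown signal
            break
        action = random.choice([0, 1])       # stand-in for batched NN inference
        act_queues[actor_id].put(action)

def actor(actor_id, steps=5):
    obs = 0.0                                # stand-in for env.reset()
    for _ in range(steps):
        obs_queue.put((actor_id, obs))       # observation goes to the learner
        action = act_queues[actor_id].get()  # action comes back
        obs += action                        # stand-in for env.step(action)

learner_thread = threading.Thread(target=learner)
learner_thread.start()
actor_threads = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in actor_threads:
    t.start()
for t in actor_threads:
    t.join()
obs_queue.put((None, None))                  # stop the learner
learner_thread.join()
print("all actors finished; learner handled", NUM_ACTORS * 5, "inference requests")
```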

To evaluate SEED RL, the research team benchmarked it on the commonly used Arcade Learning Environment, several DeepMind Lab environments, and the Google Research Football environment. They say that they managed to solve a previously unsolved Google Research Football task and that they achieved 2.4 million frames per second with 64 Cloud TPU cores, an 80-fold improvement over the previous state-of-the-art distributed agent.

"This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically," wrote the coauthors of the paper. "We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators."

Read more:
Google open-sources framework that reduces AI training costs by up to 80% - VentureBeat

The Well-matched Combo of Quantum Computing and Machine Learning – Analytics Insight

The pace of improvement in quantum computing mirrors the fast advances made in AI and machine learning. It is natural to ask whether quantum technologies could boost learning algorithms: this field of inquiry is called quantum-enhanced machine learning.

Quantum computers are devices that operate according to the principles of quantum physics. The computers we currently use are built from transistors, and their information is stored as binary 0s and 1s. Quantum computers are built from subatomic components called quantum bits, or qubits for short, which can be in multiple states simultaneously. The principal advantage of quantum computers is that they can perform exceptionally complex tasks at enormous speed. In this way, they can take on problems that are not presently feasible.
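To make "in multiple states simultaneously" slightly more concrete, the toy snippet below simulates a single qubit classically with NumPy: a Hadamard gate puts |0⟩ into an equal superposition, and repeated measurements come out roughly half 0 and half 1. This is a classical simulation for illustration, not code for a real quantum device.

```python
# Toy classical simulation of one qubit in superposition (illustration only).
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                                             # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                                   # measurement probabilities

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("P(0), P(1):", probs.round(3))                         # [0.5, 0.5]
print("measured 0:", int((samples == 0).sum()), " measured 1:", int((samples == 1).sum()))
```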

The most significant advantage of quantum computers is the speed at which they can tackle complex problems. While they're lightning fast at what they do, they don't provide the ability to solve problems from undecidable or NP-hard problem classes. There is a set of problems that quantum computing will be able to solve, but it is not applicable to all computing problems.

Ordinarily, the problem set that quantum computers are good at solving involves number or data crunching with an immense number of inputs, for example, complex optimisation problems and communication systems analysis problems: calculations that would normally take supercomputers days, years, even billions of years to brute-force.

The application routinely mentioned as one that quantum computers will be able to crack almost immediately is strong RSA encryption. A recent report by the Microsoft Quantum Team suggests this could well be the case, estimating that it would be feasible with around a 2,330-qubit quantum computer.

It makes sense that optimization applications lead the pack, since they are currently solved largely with brute force and raw computing power. If a quantum computer can rapidly survey all the potential solutions, an optimal solution becomes apparent much more quickly. Optimization also stands out because it is significantly more intuitive and easier to grasp.

The community of people who can make use of optimization and robust optimization is a whole lot bigger. In the machine learning community, the overlap between the technology and the requirements is technical, and relevant mainly to researchers. What's more, there are far fewer statisticians in the world than there are developers.

Specifically, the complexity of fitting quantum computing into the machine learning workflow presents an impediment. For machine learning professionals and researchers, it is fairly easy to figure out how to program the system; fitting that into a machine learning workflow is more challenging, since machine learning programs are becoming very complex. However, teams have already published a good deal of research on how to incorporate it into a training workflow in a way that makes sense.

For now, ML experts need someone else to handle the quantum computing part: machine learning practitioners are looking for someone else to do the legwork of building the systems out and demonstrating that they can fit.

In any case, the intersection of these two fields goes much further than that, and it is not only AI applications that can benefit. There is a meeting ground where quantum computers run machine learning algorithms and conventional machine learning techniques are used to evaluate quantum computers. This area of research is developing at such a blistering pace that it has produced a whole new field called quantum machine learning.

This interdisciplinary field is still extremely new, however. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and programming challenges remain significant, and the development of fully functional quantum computers is still far off.

The future of AI accelerated by quantum computing looks bright, with real-time, human-like behavior an almost inevitable result. Quantum computing will be capable of tackling complex AI problems and obtaining multiple solutions to complex problems simultaneously. This will result in artificial intelligence performing complex tasks in human-like ways more effectively. Likewise, robots that can make optimized decisions in real time in practical circumstances will become possible once we can use quantum computers based on artificial intelligence.

How far away is this future? Considering that only a handful of the world's top companies and universities are currently building (genuinely immense) quantum computers that still lack the processing power required, having a multitude of human-mimicking robots running about is probably a fair way off, which may comfort some people and disappoint others. Building just one, however? Perhaps not so far away.

Quantum computing and machine learning are incredibly well matched: the features the technology offers and the requirements of the field are extremely close. For machine learning, those features matter for what you have to do; they are difficult to reproduce with a traditional computer, and you get them natively from a quantum computer. So the match can hardly be unintentional. It will simply take some time for people to find the right techniques for integrating the two, and then for the technology to embed itself productively in that space.

Go here to see the original:
The Well-matched Combo of Quantum Computing and Machine Learning - Analytics Insight

Quantum computing is right around the corner, but cooling is a problem. What are the options? – Diginomica

Why would you be thinking about quantum computing? Yes, it may be two years or more before quantum computing is widely available, but there are already quite a few organizations that are pressing ahead. I'll get into those use cases, but first, let's start with the basics:

Classical computers require built-in fans and other ways to dissipate heat, and quantum computers are no different. Instead of working with bits of information that can be either 0 or 1, as in a classical machine, a quantum computer relies on "qubits," which can be in both states simultaneously, a condition called a superposition, thanks to the quirks of quantum mechanics. Those qubits must be shielded from all external noise, since the slightest interference will destroy the superposition, resulting in calculation errors. Well-isolated qubits heat up quickly, so keeping them cool is a challenge.

The current operating temperature of quantum computers is 0.015 Kelvin (about -273°C or -460°F). That is the only way to slow down the movement of atoms enough for a "qubit" to hold a value.

There have been some creative solutions proposed for this problem, such as the "nanofridge," which builds a circuit with an energy gap dividing two channels: a superconducting fast lane, where electrons can zip along with zero resistance, and a slow resistive (non-superconducting) lane. Only electrons with sufficient energy to jump across that gap can get to the superconductor highway; the rest are stuck in the slow lane. This has a cooling effect.

Just one problem though: the inventor, Mikko Möttönen, is confident enough in the eventual success that he has applied for a patent for the device. However, "maybe in 10 to 15 years, this might be commercially useful," he said. "It's going to take some time, but I'm pretty sure we'll get there."

Ten to fifteen years? It may be two years or more before quantum computing will be widely available, but there are already quite a few organizations that are pressing ahead in the following sectors:

An excellent, detailed report on the quantum computing ecosystem is: The Next Decade in Quantum Computing, and How to Play.

But the cooling problem must get sorted. It may be diamonds that finally solve some of the commercial and operational/cost issues in quantum computing: synthetic diamonds, also known as lab-grown diamonds.

The first synthetic diamond was grown by GE in 1954. It was an ugly little brown thing. By the '70s, GE and others were growing up to 1-carat off-color diamonds for industrial use. By the '90s, a company called Gemesis (renamed Pure Grown Diamonds) successfully created one-carat flawless diamonds graded ILA, meaning perfect. Today designer diamonds come in all sizes and colors: adding Boron to make them pink or nitrogen to make them yellow.

Diamonds have unique properties. They have high thermal conductivity (meaning they don't melt like silicon); the thermal conductivity of a pure diamond is the highest of any known solid. They are also an excellent electrical insulator. A diamond can host an impurity called an N-V center, where a carbon atom is replaced by a nitrogen atom, leaving a gap in which an unpaired electron circles the nitrogen and can be excited or polarized by a laser. When excited, the electron gives off a single photon, leaving it in a reduced energy state. Somehow, and I admit I don't completely understand this, the particle is placed into a quantum superposition. In quantum-speak, that means it can be two things, two values, two places at once, where it has both spin up and spin down. That is the essence of quantum computing: the creation of a "qubit," something that can be both 0 and 1 at the same time.

If that isn't weird enough, there is the issue of entanglement. A microwave pulse can be directed at a pair of qubits, placing them both in the same state. But you can "entangle" them so that they are always in the same state; in other words, if you change the state of one of them, the other also changes, even if great distances separate them, a phenomenon Einstein dubbed "spooky action at a distance." Entangled photons don't need bulky equipment to keep them in their quantum state, and they can transmit quantum information across long distances.

At least in the theory of the predictive nature of entanglement, adding qubits explodes a quantum computer's computing power. In telecommunications, for example, entangled photons that span the traditional telecommunications spectrum have enormous potential for multi-channel quantum communication.

News Flash: Physicists have just demonstrated a 3-particle entanglement. This increases the capacity of quantum computing geometrically.
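A toy simulation can also make the "always in the same state" behaviour concrete. The NumPy snippet below prepares the standard two-qubit Bell state with a Hadamard and a CNOT and samples measurements; the two outcomes agree on every shot. Again, this is a classical illustration of the math, not a program for actual quantum hardware.

```python
# Classical toy simulation of an entangled Bell pair: (|00> + |11>) / sqrt(2).
# The two simulated measurement outcomes always agree, which is the perfect
# correlation described above. Illustration only.
import numpy as np

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1.0                                        # start in |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

bell = CNOT @ np.kron(H, I) @ ket00                   # (|00> + |11>) / sqrt(2)
probs = np.abs(bell) ** 2                             # over outcomes 00, 01, 10, 11

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
agree = sum(o[0] == o[1] for o in outcomes)
print("P(00, 01, 10, 11):", probs.round(3))           # [0.5, 0, 0, 0.5]
print("both qubits agreed on", agree, "of 1000 shots")
```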

The cooling of qubits is the stumbling block. Diamonds seem to offer a solution, one that could bring quantum computing into the mainstream. The impurities in synthetic diamonds can be manipulated, and the state of a qubit can be held at room temperature, unlike in other potential quantum computing systems, and NV-center qubits (described above) are long-lived. There are still many issues to unravel to make quantum computers feasible, but today, unless you have a refrigerator at home that can operate at near absolute zero, hang on to that laptop.

But don't diamonds in computers sound expensive, flagrant, excessive? It begs the question: what is anything worth? Synthetic diamonds for jewelry are not as expensive as mined gems, but the price one pays at retail is burdened by the effect of monopoly and of the many intermediaries: distributors, jewelry companies, and retailers.

A recent book explored the value of fine things and explains that perceived value often has only a psychological basis. In the 1930s, De Beers, which had a monopoly on the world diamond market and too much supply for the weak demand, engaged the N. W. Ayer advertising agency, realizing that diamonds were only being sold to the very rich while everyone else was buying cars and appliances. The agency created a market for diamond engagement rings and introduced the idea that a man should spend at least three months' salary on a diamond for his betrothed.

And in a classic case of selling an idea, not a brand, they used earworm taglines like "A Diamond Is Forever." These four iconic words have appeared in every single De Beers advertisement since 1948, and AdAge named it the #1 slogan of the century in 1999. Incidentally, diamonds aren't forever: that diamond on your finger is slowly evaporating.

The worldwide outrage over the Blood Diamond scandal is increasing supply and demand for fine jewelry applications of synthetic diamonds. If quantum computers take off, and a diamond-based architecture becomes a standard, it will spawn a synthetic diamond production boom, increasing supply and drastically lowering the cost, making it feasible.

Many thanks to my daughter, Aja Raden, an author, jeweler, and behavioral economist for her insights about the diamond trade.

See the original post here:
Quantum computing is right around the corner, but cooling is a problem. What are the options? - Diginomica

Research by University of Chicago PhD Student and EPiQC Wins IBM Q Best Paper – HPCwire

March 23, 2020 -- A new approach for using a quantum computer to realize a near-term killer app for the technology received first prize in the 2019 IBM Q Best Paper Award competition, the company announced. The paper, "Minimizing State Preparations in Variational Quantum Eigensolver (VQE) by Partitioning into Commuting Families," was authored by UChicago CS graduate student Pranav Gokhale and fellow researchers from the Enabling Practical-Scale Quantum Computing (EPiQC) team.

The interdisciplinary team of researchers from UChicago, University of California, Berkeley, Princeton University and Argonne National Laboratory won the $2,500 first-place award for Best Paper. Their research examined how the VQE quantum algorithm could improve the ability of current and near-term quantum computers to solve highly complex problems, such as finding the ground state energy of a molecule, an important and computationally difficult chemical calculation the authors refer to as a killer app for quantum computing.

Quantum computers are expected to perform complex calculations in chemistry, cryptography and other fields that are prohibitively slow or even impossible for classical computers. A significant gap remains, however, between the capabilities of today's quantum computers and the algorithms proposed by computational theorists.

"VQE can perform some pretty complicated chemical simulations in just 1,000 or even 10,000 operations, which is good," Gokhale says. "The downside is that VQE requires millions, even tens of millions, of measurements, which is what our research seeks to correct by exploring the possibility of doing multiple measurements simultaneously."

Gokhale explains the research in this video.

With their approach, the authors reduced the computational cost of running the VQE algorithm by 7 to 12 times. When they validated the approach on one of IBM's cloud-service 20-qubit quantum computers, they also found lower error compared to traditional methods of solving the problem. The authors have shared their Python and Qiskit code for generating circuits for simultaneous measurement, and have already received numerous citations in the months since the paper was published.
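The published code linked above is the authoritative implementation; as a rough sketch of the underlying idea, the snippet below greedily groups Pauli terms of a toy Hamiltonian into families that can be measured with one circuit. It uses the simpler qubit-wise commutation test rather than the paper's general commuting-family partitioning, so treat it as an illustration of why grouping reduces the number of state preparations, not as the authors' method.

```python
# Simplified illustration: group Pauli strings that can be measured together.
# Uses the weaker qubit-wise commutation test and a greedy grouping, which is
# a simplification of the paper's general commuting-family partitioning.

def qubitwise_commute(p, q):
    """True if, on every qubit, the two Pauli letters match or one is 'I'."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_group(terms):
    groups = []
    for term in terms:
        for group in groups:
            if all(qubitwise_commute(term, member) for member in group):
                group.append(term)
                break
        else:
            groups.append([term])
    return groups

# Toy 4-qubit Hamiltonian terms (coefficients omitted).
terms = ["ZZII", "ZIIZ", "IZZI", "XXII", "IXXI", "XIIX", "YYII"]
for i, family in enumerate(greedy_group(terms)):
    print(f"measurement setting {i}: {family}")
# Seven terms collapse into three measurement settings instead of seven
# separate state preparations.
```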

For more on the research and the IBM Q Best Paper Award, see the IBM Research Blog. Additional authors on the paper include Professor Fred Chong and PhD student Yongshan Ding of UChicago CS, Kaiwen Gui and Martin Suchara of the Pritzker School of Molecular Engineering at UChicago, Olivia Angiuli of the University of California, Berkeley, and Teague Tomesh and Margaret Martonosi of Princeton University.

About The University of Chicago

The University of Chicago is an urban research university that has driven new ways of thinking since 1890. Our commitment to free and open inquiry draws inspired scholars to our global campuses, where ideas are born that challenge and change the world. We empower individuals to challenge conventional thinking in pursuit of original ideas. Students in the College develop critical, analytic, and writing skills in our rigorous, interdisciplinary core curriculum. Through graduate programs, students test their ideas with UChicago scholars, and become the next generation of leaders in academia, industry, nonprofits, and government.

Source: The University of Chicago

Read more:
Research by University of Chicago PhD Student and EPiQC Wins IBM Q Best Paper - HPCwire

Picking up the quantum technology baton – The Hindu

In the Budget 2020 speech, Finance Minister Nirmala Sitharaman made a welcome announcement for Indian science: over the next five years, she proposed spending ₹8,000 crore (~$1.2 billion) on a National Mission on Quantum Technologies and Applications. This promises to catapult India into the midst of the second quantum revolution, a major scientific effort that is being pursued by the United States, Europe, China and others. In this article we describe the scientific seeds of this mission, the promise of quantum technology and some critical constraints on its success that can be lifted with some imagination on the part of Indian scientific institutions and, crucially, some strategic support from Indian industry and philanthropy.

Quantum mechanics was developed in the early 20th century to describe nature in the small, at the scale of atoms and elementary particles. For over a century it has provided the foundations of our understanding of the physical world, including the interaction of light and matter, and led to ubiquitous inventions such as lasers and semiconductor transistors. Despite a century of research, the quantum world still remains mysterious and far removed from our experiences based on everyday life. A second revolution is currently under way with the goal of putting our growing understanding of these mysteries to use by actually controlling nature and harnessing the benefits of the weird and wondrous properties of quantum mechanics. One of the most striking of these is the tremendous computing power of quantum computers, whose actual experimental realisation is one of the great challenges of our times. The announcement by Google, in October 2019, where it claimed to have demonstrated so-called quantum supremacy, is one of the first steps towards this goal.

Besides computing, exploring the quantum world promises other dramatic applications, including the creation of novel materials, enhanced metrology and secure communication, to name just a few. Some of these are already around the corner. For example, China recently demonstrated secure quantum communication links between terrestrial stations and satellites. And computer scientists are working towards deploying schemes for post-quantum cryptography, clever schemes by which existing computers can keep communication secure even against quantum computers of the future. Beyond these applications, some of the deepest foundational questions in physics and computer science are being driven by quantum information science. This includes subjects such as quantum gravity and black holes.

Pursuing these challenges will require an unprecedented collaboration between physicists (both experimentalists and theorists), computer scientists, material scientists and engineers. On the experimental front, the challenge lies in harnessing the weird and wonderful properties of quantum superposition and entanglement in a highly controlled manner by building a system composed of carefully designed building blocks called quantum bits or qubits. These qubits tend to be very fragile and lose their quantumness if not controlled properly, and a careful choice of materials, design and engineering is required to get them to work. On the theoretical front lies the challenge of creating the algorithms and applications for quantum computers. These projects will also place new demands on classical control hardware as well as software platforms.

Globally, research in this area is about two decades old, but in India, serious experimental work has been under way for only about five years, and in a handful of locations. What are the constraints on Indian progress in this field? So far we have been plagued by a lack of sufficient resources, high-quality manpower, timeliness and flexibility. The new announcement in the Budget would greatly help fix the resource problem, but high-quality manpower is in global demand. In a fast-moving field like this, timeliness is everything; delayed funding by even one year is an enormous hit.

A previous programme called Quantum Enabled Science and Technology has just been fully rolled out, more than two years after the call for proposals. Nevertheless, one has to laud the government's announcement of this new mission on a massive scale and on a par with similar programmes announced recently by the United States and Europe. This is indeed unprecedented, and for the most part it is now up to the government, its partner institutions and the scientific community to work out details of the mission and roll it out quickly.

But there are some limits that come from how the government must do business with public funds. Here, private funding, both via industry and philanthropy, can play an outsized role even with much smaller amounts. For example, unrestricted funds that can be used to attract and retain high-quality manpower and to build international networks, all at short notice, can and will make an enormous difference to the success of this enterprise. This is the most effective way (as China and Singapore discovered) to catch up scientifically with the international community, while quickly creating a vibrant intellectual environment to help attract top researchers.

Further, connections with Indian industry from the start would also help quantum technologies become commercialised successfully, allowing Indian industry to benefit from the quantum revolution. We must encourage industrial houses and strategic philanthropists to take an interest and reach out to Indian institutions with an existing presence in this emerging field. As two of us can personally attest, the Tata Institute of Fundamental Research (TIFR), home to India's first superconducting quantum computing lab, would be delighted to engage.

R. Vijayaraghavan is Associate Professor of Physics at the Tata Institute of Fundamental Research and leads its experimental quantum computing effort; Shivaji Sondhi is Professor of Physics at Princeton University and has briefed the PM-STIAC on the challenges of quantum science and technology development; Sandip Trivedi, a Theoretical Physicist, is Distinguished Professor and Director of the Tata Institute of Fundamental Research; Umesh Vazirani is Professor of Computer Science and Director, Berkeley Quantum Information and Computation Center and has briefed the PM-STIAC on the challenges of quantum science and technology development

View post:
Picking up the quantum technology baton - The Hindu

The growth of an organism rides on a pattern of waves – MIT News

When an egg cell of almost any sexually reproducing species is fertilized, it sets off a series of waves that ripple across the egg's surface. These waves are produced by billions of activated proteins that surge through the egg's membrane like streams of tiny burrowing sentinels, signaling the egg to start dividing, folding, and dividing again, to form the first cellular seeds of an organism.

Now MIT scientists have taken a detailed look at the pattern of these waves, produced on the surface of starfish eggs. These eggs are large and therefore easy to observe, and scientists consider starfish eggs to be representative of the eggs of many other animal species.

In each egg, the team introduced a protein to mimic the onset of fertilization, and recorded the pattern of waves that rippled across their surfaces in response. They observed that each wave emerged in a spiral pattern, and that multiple spirals whirled across an egg's surface at a time. Some spirals spontaneously appeared and swirled away in opposite directions, while others collided head-on and immediately disappeared.

The behavior of these swirling waves, the researchers realized, is similar to the waves generated in other, seemingly unrelated systems, such as the vortices in quantum fluids, the circulations in the atmosphere and oceans, and the electrical signals that propagate through the heart and brain.

"Not much was known about the dynamics of these surface waves in eggs, and after we started analyzing and modeling these waves, we found these same patterns show up in all these other systems," says physicist Nikta Fakhri, the Thomas D. and Virginia W. Cabot Assistant Professor at MIT. "It's a manifestation of this very universal wave pattern."

"It opens a completely new perspective," adds Jörn Dunkel, associate professor of mathematics at MIT. "You can borrow a lot of techniques people have developed to study similar patterns in other systems, to learn something about biology."

Fakhri and Dunkel have published their results today in the journal Nature Physics. Their co-authors are Tzer Han Tan, Jinghui Liu, Pearson Miller, and Melis Tekant of MIT.

Finding one's center

Previous studies have shown that the fertilization of an egg immediately activates Rho-GTP, a protein within the egg which normally floats around in the cell's cytoplasm in an inactive state. Once activated, billions of the proteins rise up out of the cytoplasm's morass to attach to the egg's membrane, snaking along the wall in waves.

"Imagine if you have a very dirty aquarium, and once a fish swims close to the glass, you can see it," Dunkel explains. "In a similar way, the proteins are somewhere inside the cell, and when they become activated, they attach to the membrane, and you start to see them move."

Fakhri says the waves of proteins moving across the egg's membrane serve, in part, to organize cell division around the cell's core.

"The egg is a huge cell, and these proteins have to work together to find its center, so that the cell knows where to divide and fold, many times over, to form an organism," Fakhri says. "Without these proteins making waves, there would be no cell division."

MIT researchers observe ripples across a newly fertilized egg that are similar to other systems, from ocean and atmospheric circulations to quantum fluids. Courtesy of the researchers.

In their study, the team focused on the active form of Rho-GTP and the pattern of waves produced on an egg's surface when they altered the protein's concentration.

For their experiments, they obtained about 10 eggs from the ovaries of starfish through a minimally invasive surgical procedure. They introduced a hormone to stimulate maturation, and also injected fluorescent markers to attach to any active forms of Rho-GTP that rose up in response. They then observed each egg through a confocal microscope and watched as billions of the proteins activated and rippled across the egg's surface in response to varying concentrations of the artificial hormonal protein.

"In this way, we created a kaleidoscope of different patterns and looked at their resulting dynamics," Fakhri says.

Hurricane track

The researchers first assembled black-and-white videos of each egg, showing the bright waves that traveled over its surface. The brighter a region in a wave, the higher the concentration of Rho-GTP in that particular region. For each video, they compared the brightness, or concentration of protein from pixel to pixel, and used these comparisons to generate an animation of the same wave patterns.
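A minimal sketch of that pixel-to-pixel comparison, assuming grayscale frames already loaded as a NumPy array (this is an illustration only, not the researchers' actual analysis code; the array shape and threshold are invented):

```python
import numpy as np

def wave_activity(frames: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Approximate passing wave fronts by differencing successive frames.

    frames: array of shape (num_frames, height, width), values in [0, 1],
            where brightness stands in for local Rho-GTP concentration.
    Returns a boolean array of shape (num_frames - 1, height, width) marking
    pixels whose brightness rose by more than `threshold` between frames.
    """
    # Change in brightness between consecutive frames at every pixel.
    diffs = np.diff(frames, axis=0)
    # A rising signal suggests a wave front passing through that pixel.
    return diffs > threshold

# Example with synthetic data standing in for a recorded egg surface.
rng = np.random.default_rng(0)
fake_frames = rng.random((10, 64, 64))
fronts = wave_activity(fake_frames)
print(fronts.shape, fronts.mean())  # fraction of pixels flagged as rising
```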

From their videos, the team observed that waves seemed to oscillate outward as tiny, hurricane-like spirals. The researchers traced the origin of each wave to the core of each spiral, which they refer to as a topological defect. Out of curiosity, they tracked the movement of these defects themselves. They did some statistical analysis to determine how fast certain defects moved across an egg's surface, and how often, and in what configurations the spirals popped up, collided, and disappeared.

In a surprising twist, they found that their statistical results, and the behavior of waves on an egg's surface, were the same as the behavior of waves in other larger and seemingly unrelated systems.

"When you look at the statistics of these defects, it's essentially the same as vortices in a fluid, or waves in the brain, or systems on a larger scale," Dunkel says. "It's the same universal phenomenon, just scaled down to the level of a cell."

The researchers are particularly interested in the waves' similarity to ideas in quantum computing. Just as the pattern of waves in an egg conveys specific signals, in this case of cell division, quantum computing is a field that aims to manipulate atoms in a fluid, in precise patterns, in order to translate information and perform calculations.

"Perhaps now we can borrow ideas from quantum fluids, to build minicomputers from biological cells," Fakhri says. "We expect some differences, but we will try to explore [biological signaling waves] further as a tool for computation."

This research was supported, in part, by the James S. McDonnell Foundation, the Alfred P. Sloan Foundation, and the National Science Foundation.

See original here:
The growth of an organism rides on a pattern of waves - MIT News

Quantum Computing Market Increase In Analysis & Development Activities Is More Boosting Demands – Daily Science

Quantum Computing Market report covers the worldwide top manufacturers (D-Wave Systems, Google, IBM, Intel, Microsoft, 1QB Information Technologies, Anyon Systems, Cambridge Quantum Computing, ID Quantique, IonQ, QbitLogic, QC Ware, Quantum Circuits, Qubitekk, QxBranch, Rigetti Computing), including information such as Capacity, Production, Price, Revenue, Cost, Shipment, Gross, Gross Profit, Interview Record, Business Distribution, etc. These data help the consumer know the competitors better. This Quantum Computing Market report includes a 6-year forecast (2020-2026) covering Overview, Classification, Industry Value, Price, Cost and Gross Profit. The report also covers all regions and countries of the world, showing regional development status, including market size, volume and value, as well as price data.

Get Free Sample PDF (including full TOC, Tables and Figures) of Quantum Computing [emailprotected] https://www.researchmoz.us/enquiry.php?type=S&repid=2040997

Target Audience of the Global Quantum Computing Market in Market Study: Key Consulting Companies & Advisors, Large, medium-sized, and small enterprises, Venture capitalists, Value-Added Resellers (VARs), Manufacturers, Third-party knowledge providers, Equipment Suppliers/ Buyers, Industry Investors/Investment Bankers, Research Professionals, Emerging Companies, Service Providers.

Scope of Quantum Computing Market: Quantum computing is a technology that applies the laws of quantum mechanics to computational ability. It includes three states, namely 1, 0 as well as the superposition of 1 and 0. Superposition indicates that two states exist at the same time. These bits are known as quantum bits or qubits. The global quantum computing market consists of the hardware that is required to develop quantum computers and its peripherals.
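As a quick aside (not part of the market report), the superposition idea can be made concrete with a few lines of NumPy; the amplitudes chosen below are arbitrary:

```python
import numpy as np

# Basis states |0> and |1> as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An arbitrary superposition a|0> + b|1>, normalized so |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8
psi = a * ket0 + b * ket1
assert np.isclose(np.linalg.norm(psi), 1.0)

# Measurement probabilities for outcomes 0 and 1 (Born rule).
p0, p1 = abs(a) ** 2, abs(b) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.36 and 0.64
```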

North America accounted for the largest share of the overall quantum computing market in 2017. On the other hand, Asia Pacific (APAC) would be the fastest growing region for quantum computing during the forecast period. This growth can be attributed to the increasing demand for quantum technology to solve the most tedious and complex problems in the defense and banking & finance industry.

On the basis of product type, this report displays the shipments, revenue (Million USD), price, and market share and growth rate of each type.

Hardware, Software, Services

On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, shipments, revenue (Million USD), price, and market share and growth rate for each application.

Defense, Healthcare & pharmaceuticals, Chemicals, Banking & finance, Energy & power

Do You Have Any Query Or Specific Requirement? Ask to Our Industry [emailprotected] https://www.researchmoz.us/enquiry.php?type=E&repid=2040997

Geographically, the report includes the research on production, consumption, revenue, Quantum Computing market share and growth rate, and forecast (2020-2026) of the following regions:

Important Key Questions Answered In Quantum Computing Market Report:

Contact:

ResearchMoz, Mr. Nachiket Ghumare, Tel: +1-518-621-2074, USA-Canada Toll Free: 866-997-4948, Email: [emailprotected]

Browse More Reports Visit @ https://www.mytradeinsight.blogspot.com/

Read the original post:
Quantum Computing Market Increase In Analysis & Development Activities Is More Boosting Demands - Daily Science

Hulu’s Devs Just Confirmed The [SPOILER] Exists – Here’s Why It Matters – Screen Rant

Hulu's Devs episode 4 made a bombshell revelation - the Multiverse exists in this universe, and it has the power to radically change everything they know about the project they're working on. Devs is an eight-part miniseries directed by veteran sci-fi filmmaker Alex Garland, and produced as part of the new FX on Hulu banner. The story follows computer engineer Lilly Chan as she dives into the seedy underbelly of the quantum computing company Amaya, a corporation which she believes is responsible for the mysterious death of her boyfriend.

In Devs episode 1, audiences are slowly keyed into the fact that Amaya is working on some kind of mysterious project, one that revolves around the deterministic De Broglie-Bohm theory of quantum mechanics. Throughout episodes 2 and 3, however, the project becomes clearer: the Devs team has created a quantum computer capable of projecting the past and predicting the future. Despite the major ethical and existential questions posed by the existence of the software, it hasn't been perfected just yet, and that's precisely what Forest, the CEO of Amaya, wants. But in Devs episode 4, the team takes a huge step towards perfecting the projection project, while also inadvertently making a massive discovery about the nature of their reality. And not only does their discovery change the nature of their work with the projection project, but it also might become distinctly important to Forest and his reasoning behind creating Amaya in the first place.

Related: Hulu's Devs Cast & Character Guide

So far, the projection project has simply been an abstract visual rendering of code, which occasionally coalesces to provide an image of a historical event, such as the crucifixion of Christ in the second episode, or a night of love-making between Marilyn Monroe and Arthur Miller. All the visuals are in black-and-white and presented like a matrix of binary code. This is because the basis of the program is modeled after the De Broglie-Bohm theory, a quantum mechanics theory that postulates our universe, and the sequence of events that take place within it, are entirely pre-determined as a result of cause and effect. Free will is a myth, and the reason the projection project can predict the future is because it's simply a matter of data.

This theory is preferred by Forest, and it's responsible for the progress that the Devs team has already made. However, Lyndon, an audio engineer for the project, decides to experiment and develop a new algorithm for sound waves, replacing the single-universe De Broglie-Bohm theory with Hugh Everett's many-worlds theory, which assumes that all possible outcomes of quantum measurements are physically realized in some world. This immediately creates tangible results, and Lyndon produces crisp and clear audio of Jesus Christ of Nazareth praying on the cross.

While it seems as if everyone would be ecstatic about this breakthrough, Forest is furious. In his opinion, swapping out for the many-worlds theory is cheating, because while the audio is clear and Jesus, indeed, can be heard, it's not the Jesus who died on the cross in their universe, thus making it invalid. When Lyndon tries to argue his point, Forest retaliates by firing him. It's clear that whatever Forest plans on using the projection project for, he doesn't want to cut any corners.

Devs episode 4 also continues dropping subtle hints at Forest's trauma and his reasoning for creating Amaya. When Katie, his closest confidant, accosts him for firing Lyndon, he fires back by telling her that an alternate universe means it "won't be his Amaya," referring to the deceased daughter he mentioned to Lilly back in episode 2. Katie then tells Forest that she's swapped the many-worlds algorithm into the project's light waves as well, and uses it to project a crystal clear image of Forest's daughter that leaves him in tears.

While all but confirmed at this point, it seems as if the projection project represents a crucial step in Forest's journey to get some closure regarding his daughter. And if the ending of the episode is any indication at all, we might see Forest compromise for an alternate universe version of his daughter. Devs has already been a uniquely mind-bending experience, but the introduction of the multiverse might take it even further.

More: 25 Best Movies On Hulu Right Now



Read more from the original source:
Hulu's Devs Just Confirmed The [SPOILER] Exists - Here's Why It Matters - Screen Rant

Honeywell Achieves Breakthrough That Will Enable The Worlds Most Powerful Quantum Computer #47655 – New Kerala

The company also announced it has made strategic investments in two leading quantum computing software providers and will work together to develop quantum computing algorithms with JPMorgan Chase. Together, these announcements demonstrate significant technological and commercial progress for quantum computing and change the dynamics in the quantum computing industry.

Within the next three months, Honeywell will bring to market the world's most powerful quantum computer in terms of quantum volume, a measure of quantum capability that goes beyond the number of qubits. Quantum volume measures computational ability, indicating the relative complexity of a problem that can be solved by a quantum computer. When released, Honeywell's quantum computer will have a quantum volume of at least 64, twice that of the next alternative in the industry.
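For intuition only, and not Honeywell's benchmarking code: quantum volume is commonly reported as 2 raised to the size of the largest "square" random circuit (equal qubit count and depth) a machine runs successfully, so a quantum volume of 64 corresponds to passing size 6. A simplified sketch, assuming the pass/fail results are already known and that passes must be consecutive from size 1:

```python
def quantum_volume(passed_sizes):
    """Return 2**m, where m is the largest circuit size n such that the
    machine passed the square (n qubits x depth n) benchmark at every
    size up to and including n. This is a simplification for illustration."""
    m = 0
    for n in sorted(passed_sizes):
        if n == m + 1:
            m = n
        else:
            break
    return 2 ** m

# Hypothetical benchmark results: square circuits of sizes 1..6 all passed.
print(quantum_volume({1, 2, 3, 4, 5, 6}))  # 64
```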

In a scientific paper that will be posted to the online repository arXiv later today and is available now on Honeywell's website, Honeywell has demonstrated its quantum charge coupled device (QCCD) architecture, a major technical breakthrough in accelerating quantum capability. The company also announced it is on a trajectory to increase its computer's quantum volume by an order of magnitude each year for the next five years.

This breakthrough in quantum volume results from Honeywell's solution having the highest-quality, fully-connected qubits with the lowest error rates.

"Building quantum computers capable of solving deeper, more complex problems is not just a simple matter of increasing the number of qubits," said Paul Smith-Goodson, analyst-in-residence for quantum computing, Moor Insights & Strategy. "Quantum volume is a powerful tool that should be adopted as an interim benchmarking tool by other gate-based quantum computer companies."

Honeywell Chairman and Chief Executive Officer Darius Adamczyk said companies should start now to determine their strategy to leverage or mitigate the many business changes that are likely to result from new quantum computing technology.

"Quantum computing will enable us to tackle complex scientific and business challenges, driving step-change improvements in computational power, operating costs and speed," Adamczyk said. "Materials companies will explore new molecular structures. Transportation companies will optimize logistics. Financial institutions will need faster and more precise software applications. Pharmaceutical companies will accelerate the discovery of new drugs. Honeywell is striving to influence how quantum computing evolves and to create opportunities for our customers to benefit from this powerful new technology."

To accelerate the development of quantum computing and explore practical applications for its customers, Honeywell Ventures, the strategic venture capital arm of Honeywell, has made investments in two leading quantum software and algorithm providers: Cambridge Quantum Computing (CQC) and Zapata Computing. Both Zapata and CQC complement Honeywell's own quantum computing capabilities by bringing a wealth of cross-vertical market algorithm and software expertise. CQC has strong expertise in quantum software, specifically a quantum development platform and enterprise applications in the areas of chemistry, machine learning and augmented cybersecurity. Zapata creates enterprise-grade, quantum-enabled software for a variety of industries and use cases, allowing users to build quantum workflows and execute them freely across a range of quantum and classical devices.

Honeywell also announced that it will collaborate with JPMorgan Chase, a global financial services firm, to develop quantum algorithms using Honeywell's computer.

"Honeywell's unique quantum computer, along with the ecosystem Honeywell has developed around it, will enable us to get closer to tackling major and growing business challenges in the financial services industry," said Dr. Marco Pistoia, managing director and research lead for Future Lab for Applied Research & Engineering (FLARE), JPMorgan Chase.

Honeywell first announced its quantum computing capabilities in late 2018, although the company had been working on the technical foundations for its quantum computer for a decade prior to that. In late 2019, Honeywell announced a partnership with Microsoft to provide cloud access to Honeywell's quantum computer through Microsoft Azure Quantum services.

Honeywell's quantum computer uses trapped-ion technology, which leverages numerous, individual, charged atoms (ions) to hold quantum information. Honeywell's system applies electromagnetic fields to hold (trap) each ion so it can be manipulated and encoded using laser pulses.

Honeywell's trapped-ion qubits can be uniformly generated with errors more well understood compared with alternative qubit technologies that do not directly use atoms. These high-performance operations require deep experience across multiple disciplines, including atomic physics, optics, cryogenics, lasers, magnetics, ultra-high vacuum, and precision control systems. Honeywell has a decades-long legacy of expertise in these technologies.

Today, Honeywell has a cross-disciplinary team of more than 100 scientists, engineers, and software developers dedicated to advancing quantum volume and addressing real enterprise problems across industries.

Honeywell (www.honeywell.com) is a Fortune 100 technology company that delivers industry-specific solutions that include aerospace products and services; control technologies for buildings and industry; and performance materials globally. Our technologies help aircraft, buildings, manufacturing plants, supply chains, and workers become more connected to make our world smarter, safer, and more sustainable. For more news and information on Honeywell, please visit http://www.honeywell.com/newsroom.

Originally posted here:
Honeywell Achieves Breakthrough That Will Enable The Worlds Most Powerful Quantum Computer #47655 - New Kerala

Flipping the switch on the ageing process – The Age

Sinclair was scheduled to discuss his ideas with Norman Swan, host of Radio National's Health Report, at Ageing is a Disease at Sydney Town Hall on April 4 as part of the Festival of Dangerous Ideas, which was cancelled last week.

David Sinclair says ageing fulfils every criteria for what medical textbooks define as a disease. Credit: Nic Walker

Sinclair says the coronavirus pandemic, which is causing a higher fatality rate among the elderly, gives urgency to his research into the ageing process. "Our research is aimed at delaying or reversing age-related diseases and providing the elderly with resilience," he says. "Other labs have shown in human clinical trials that immune responses in the elderly can be boosted by molecules that target ageing, such as low dose rapamycin."

But Sinclair's argument that ageing is a disease may cause some unease. He says ageing causes frailty, sickness and eventually death, fulfilling every criteria for what medical textbooks define as a disease. The only difference is that ageing happens to everyone, whereas cancer and heart disease do not. "We're very good at preventing heart disease but we haven't been successful at delaying ageing of the brain," he says. "So we've ended up with the worst nightmare - we're increasing lifespan but not healthspan as much."

"Its also not recognised that age is what causes those diseases in the first place," he adds.

Describing ageing as a disease also sounds to some ears like it is stigmatising older people. "I find that people over 50 tend to get upset when you call ageing a disease," he says. "But people under 50, particularly people in their 20s and 30s, they totally embrace this idea. They don't want to get sick. They think technology can solve everything."


Sinclair's book Lifespan, subtitled Why We Age, and Why We Don't Have To, outlines his 25 years of research into ageing, which he likens to corrupted software.

Getting older amounts to a loss of what he calls epigenetic information. "Essentially it's the information that tells the cells how to read the genes in the right way and stay young," he says. "In the same way that a genome is the computer, the epigenome is the software. And so I'm proposing that ageing is corrupted software."

Sinclair says his research has also revealed there is a back-up copy of our software. "If it's corrupted, we've figured out a way to reboot the cell and be young again."

In Lifespan, he describes experiments in which old mice have been given gene therapy that restores their eyesight to that of young mice. "The idea is we still have all the information to be young again in our bodies if we can just flip the switch," he says.

Sinclair's effort to reboot cells is undergoing pre-clinical trials to test the safety and efficacy of the procedure. Within the next three years, he aims to treat patients who have lost their vision as a result of glaucoma - a disease that can be controlled but cannot be cured.

"Do I know if it's going to work in people?" he says. "Of course not. Nobody knows until you do the clinical trials."

Sinclair's research into ageing is slowly gaining acceptance among scientists and medical professionals. British scientists Robert Faragher and Stuart Calimport said last year on academic website The Conversation that the World Health Organisation's International Classification of Diseases should be amended to classify ageing as a disease.

However, Sinclair says it is hard to change regulations and the habits of doctors who have been taught that ageing is inevitable and not something we can treat.

"It takes radical thinking to overcome what you've taken for granted your whole life," he says.

Sinclair traces his interest in ageing to childhood and a conversation with his grandmother Vera. "I remember very clearly that my grandmother told me everything is going to die," he says. "The cat was going to die, she was going to die and my parents and I would."


Sinclair describes his grandmother as a huge rebel who taught him to question dogma and authority. He recalls with pride how she was ejected from Bondi Beach in 1957 for wearing a bikini, one year after she fled Hungary following the Soviet invasion. "She taught me that humans can do evil and can do amazing good," he says. "And she said: David, that's what you should do with your life is leave the world better than you found it."

Sinclair says his aim is not to cheat death, but to allow older people to stay healthier for longer. "There are plenty of people who look at what it's like to be 100 and say Heaven forbid, I don't want to get old," he says. "But that's missing the point, which is you could be 80 or 90 and still play tennis and hang out with your grandkids and be productive like my father is at 80."

Sinclair says preventing illness is the key to improving quality of life and reducing the burden on family and society. "It's what we see in our animal studies," he says. "They live longer because they're healthy. That's the only way I know to keep something living longer is to prevent them getting sick."

However, Sinclair cautions that his research will not deliver the magic bullet that allows people to eat, drink and be merry without consequence. "If you live a healthy lifestyle and eat all the right foods are you going to live beyond 120? Probably not," he says. "On the other hand, the average lifespan for humans used to be about 40 years. And we've used technology to improve our health - that's what we do as human beings."

Andrew Taylor is a Senior Reporter for The Sydney Morning Herald.

Read more from the original source:
Flipping the switch on the ageing process - The Age

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know – The Register

Reader survey: We hear a lot these days about IT automation. Yet whether it's labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.

And that aim is: to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real-time, eliminating as much as possible of the manual drudgery associated with routine IT administration.

Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?

Should we go ahead and implement the technology now on offer?

This controversial topic is the subject of our latest reader survey, and we are eager to hear your views.

Please complete our short survey, here.

As always, your responses will be anonymous and your privacy assured.


The rest is here:
Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know - The Register

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers – TechCrunch


Sharing vital information across scientific and medical communities is key to accelerating our ability to respond to the coronavirus pandemic, Chan Zuckerberg Initiative Head of Science Cori Bargmann said of the project.

The Chan Zuckerberg Initiative hopes that the global machine learning community will be able to help the science community connect the dots on some of the enduring mysteries about the novel coronavirus as scientists pursue knowledge around prevention, treatment and a vaccine.

For updates to the CORD-19 data set, the Chan Zuckerberg Initiative will track new research on a dedicated page on Meta, the research search engine the organization acquired in 2017.

The CORD-19 data set announcement is certain to roll out more smoothly than the White House's last attempt at a coronavirus-related partnership with the tech industry. The White House came under criticism last week for President Trump's announcement that Google would build a dedicated website for COVID-19 screening. In fact, the site was in development by Verily, Alphabet's life science research group, and intended to serve California residents, beginning with San Mateo and Santa Clara County. (Alphabet is the parent company of Google.)

The site, now live, offers risk screening through an online questionnaire to direct high-risk individuals toward local mobile testing sites. At this time, the project has no plans for a nationwide rollout.

Google later clarified that the company is undertaking its own efforts to bring crucial COVID-19 information to users across its products, but that may have become conflated with Verily's much more limited screening site rollout. On Twitter, Google's comms team noted that Google is indeed working with the government on a website, but not one intended to screen potential COVID-19 patients or refer them to local testing sites.

In a partial clarification over the weekend, Vice President Pence, one of the Trump administration's designated point people on the pandemic, indicated that the White House is working with Google but also working with many other tech companies. It's not clear if that means a central site will indeed launch soon out of a White House collaboration with Silicon Valley, but Pence hinted that might be the case. Whether that centralized site will handle screening and testing location referral is not clear.

"Our best estimate is that some point early in the week we will have a website that goes up," Pence said.

More:
With launch of COVID-19 data hub, the White House issues a call to action for AI researchers - TechCrunch

The Top Machine Learning WR Prospect Will Surprise You – RotoExperts

What Can Machine Learning Tell Us About WR Prospects?

One of my favorite parts of draft season is trying to model the incoming prospects. This year, I wanted to try something new, so I dove into the world of machine learning models. Using machine learning to detail the value of a WR prospect is very useful for dynasty fantasy football.

Machine learning leverages artificial intelligence to identify patterns (learn) from the data, and build an appropriate model. I took over 60 different variables and 366 receiving prospects between the 2004 and 2016 NFL Drafts, and let the machine do its thing. As with any machine, some human intervention is necessary, and I fine-tuned everything down to a 24-model ensemble built upon different logistic regressions.
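To make the approach concrete, here is a rough sketch of a bagged ensemble of logistic regressions in scikit-learn; the 24-model count and nine-feature setup mirror the description above, but the data, labels, and feature layout are placeholders, not the author's actual inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(42)

# Placeholder data: rows are WR prospects, columns are scouting/production features.
X = rng.normal(size=(366, 9))              # e.g. draft rank, age, efficiency stats, ...
y = (rng.random(366) < 0.15).astype(int)   # 1 = scored 200+ PPR points in first 3 years

# Fit 24 logistic regressions on bootstrap resamples of the training data.
models = []
for seed in range(24):
    X_b, y_b = resample(X, y, random_state=seed)
    models.append(LogisticRegression(max_iter=1000).fit(X_b, y_b))

# Ensemble probability = average of the individual models' probabilities.
def hit_probability(prospect_features):
    probs = [m.predict_proba(prospect_features.reshape(1, -1))[0, 1] for m in models]
    return float(np.mean(probs))

print(f"Estimated hit probability: {hit_probability(X[0]):.1%}")
```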

Just like before, the model presents the likelihood of a WR hitting 200 or more PPR points in at least one of his first three seasons. Here are the nine different components featured, in order of significance:

This obviously represents a massive change from the original model, proving once again that machines are smarter than humans. I decided to move over to ESPN grades and ranks instead of NFL Draft Scout for a few reasons:

Those changes alone made strong improvements to the model, and it should be noted that the ESPN overall ranks have been very closely tied to actual NFL Draft position.

Having an idea of draft position will always help a model since draft position usually begets a bunch of opportunity at the NFL level.

Since the model is built on drafts up until 2016, I figured perhaps you'd want to see the results from the last three drafts before seeing the 2020 outputs.

It is encouraging to see some hits towards the top of the model, but there are obviously some misses as well. Your biggest takeaway here should be just how difficult it is to hit that 200 point threshold. Only two prospects the last three years have even a 40% chance of success. The model is telling us not to be over-confident, and that is a good thing.

Now that you've already seen some results, here are the 2020 model outputs.

Tee Higgins as the top WR is likely surprising for a lot of people, but it shouldn't be. Higgins had a fantastic career at Clemson, arguably the best school in the country over the course of his career. He is a proven touchdown scorer, and is just over 21 years old with a prototypical body-type.

Nobody is surprised that the second WR on this list is from Alabama, but they are likely shocked to see that a data-based model has Henry Ruggs over Jerry Jeudy. The pair is honestly a lot closer than many people think in a lot of the peripheral statistics. The major edge for Ruggs comes on the ground. He had a 75 yard rushing touchdown, which really underlines his special athleticism and play-making ability.

The name that likely stands out the most is Geraud Sanders, who comes in ahead of Jerry Jeudy despite being a relative unknown out of Air Force. You can mentally bump him down a good bit. The academy schools are a bit of a glitch in the system, as their offensive approach usually yields some outrageous efficiency. Since 2015, 12 of the top 15 seasons in adjusted receiving yards per pass attempt came from either an academy school or Georgia Tech's triple-option attack. Sanders isn't a total zero, his profile looks very impressive, but I would have him closer to a 10% chance of success given his likely Day 3 or undrafted outcome in the NFL Draft.

Read more here:
The Top Machine Learning WR Prospect Will Surprise You - RotoExperts

How Quantum Computers Work | HowStuffWorks

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.

Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
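As a back-of-the-envelope check on that claim (the starting year and transistor count below are illustrative assumptions, not figures from the article), an 18-month doubling compounds like this:

```python
# Assume roughly 10 million transistors on a high-end chip in the year 2000
# (an illustrative figure) and a doubling every 18 months per Moore's Law.
start_year = 2000
start_transistors = 10_000_000
doubling_period_years = 1.5

for year in (2010, 2020, 2030):
    doublings = (year - start_year) / doubling_period_years
    count = start_transistors * 2 ** doublings
    print(f"{year}: ~{count:.2e} transistors")
```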

Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing Theory. Learn what this is in the next section.

More:
How Quantum Computers Work | HowStuffWorks

Quantum Computing | Intel Newsroom

Quantum computing is an exciting new computing paradigm with unique problems to be solved and new physics to be discovered. Quantum computing, in essence, is the ultimate in parallel computing, with the potential to tackle problems conventional computers can't handle. For example, quantum computers may simulate nature to advance research in chemistry, materials science and molecular modeling. In 2015, Intel established a collaborative relationship with QuTech to accelerate advancements in quantum computing. The collaboration spans the entire quantum system, or "stack," from qubit devices to the hardware and software architecture required to control these devices, as well as quantum applications. All of these elements are essential to advancing quantum computing from research to reality.

Jim Clarke, Intel Corporation's director of quantum hardware, holds an Intel 49-qubit quantum test chip, called Tangle Lake, in front of a dilution refrigerator at QuTech's quantum computing lab inside Delft University of Technology in July 2018. QuTech at Delft University of Technology is Intel Corporation's quantum computing research partner in the Netherlands. (Credit: Tim Herman/Intel Corporation)

Florian Unseld (left) and Kian van der Enden, research assistants at QuTech, work on a readout tool for an Intel quantum test chip at Delft University in July 2018. QuTech at Delft University of Technology is Intel Corporation's quantum computing research partner in the Netherlands. (Credit: Tim Herman/Intel Corporation)

Dr. Leonardo DiCarlo, professor of superconducting quantum circuits, works on a dilution refrigerator for quantum computing at Delft University of Technology in July 2018. QuTech at Delft University of Technology is Intel Corporation's quantum computing research partner in the Netherlands. (Credit: Tim Herman/Intel Corporation)

Brian Tarasimski (left), post-doctoral researcher, and Dr. Leonardo DiCarlo, professor of superconducting quantum circuits, both of QuTech, work on a dilution refrigerator for quantum computing at Delft University of Technology in July 2018. QuTech at Delft University of Technology is Intel Corporation's quantum computing research partner in the Netherlands. (Credit: Tim Herman/Intel Corporation)

A July 2018 photo shows a dilution refrigerator at QuTech's quantum computing lab. QuTech at Delft University of Technology is Intel Corporation's quantum computing research partner in the Netherlands. (Credit: Tim Herman/Intel Corporation)


A July 2018 photo shows an Intel Corporation-manufactured wafer that contains working spin qubits. (Credit: Tim Herman/Intel Corporation)


Changing the World with Quantum Computing | Intel

Intel & Qutech Advance Quantum Computing Research (B-roll)

Download A Quantum Computing Primer

Intel Corporation has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers like this one. (Credit: Walden Kirsch/Intel Corporation)


Intel's director of quantum hardware, Jim Clarke, holds the new 17-qubit superconducting test chip. (Credit: Intel Corporation)

Intel's 17-qubit superconducting test chip for quantum computing has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)

Researchers work in the quantum computing lab at QuTech, Intel's quantum research partner in the Netherlands. Intel in October 2017 provided QuTech a 17-qubit superconducting test chip for quantum computing. (Credit: QuTech)

Professor Leo DiCarlo poses in the quantum computing lab at QuTech, Intel's quantum research partner in the Netherlands. Intel in October 2017 provided QuTech a 17-qubit superconducting test chip for quantum computing. (Credit: QuTech)

Intel is collaborating with QuTech in the Netherlands to advance quantum computing research. Intel in October 2017 provided QuTech a 17-qubit superconducting test chip for quantum computing. (Credit: Intel Corporation)

Intel's new 17-qubit superconducting test chip packaged for delivery to research partners at QuTech, Intel's quantum research partner in the Netherlands. Intel in October 2017 provided QuTech with the 17-qubit superconducting test chip for quantum computing. (Credit: Intel Corporation)

A 2018 photo shows Intel's new quantum computing chip balanced on a pencil eraser. Researchers started testing this spin qubit chip at the extremely low temperatures necessary for quantum computing: about 460 degrees below zero Fahrenheit. Intel projects that qubit-based quantum computers, which operate based on the behaviors of single electrons, could someday be more powerful than today's supercomputers. (Credit: Walden Kirsch/Intel Corporation)

Intel Corporation is making fast progress scaling superconducting quantum computing test chips to higher qubit counts -- from 7, to 17 and now 49 qubits (left to right). Multiple gold connectors are required to control and operate each qubit. (Credit: Walden Kirsch/Intel Corporation)

Intel Corporation's 49-qubit quantum computing test chip, code-named Tangle Lake, is unveiled at 2018 CES in Las Vegas. (Credit: Walden Kirsch/Intel Corporation)

Intel Corporation's self-learning neuromorphic research chip, code-named Loihi. (Credit: Intel Corporation)

Follow this link:
Quantum Computing | Intel Newsroom