Iridium Unveils the World’s First ML Algorithms That Decode the Value of Investor Relations – AiThority

Four Different Machine Learning Algorithms Were Deployed to Analyze 9 Million Data Points Across 673 Global Banks, Including 65 GCC Banks, to Explain up to 98% of What Drives Bank Valuations

Iridium Quant Lens Shows That Investor Relations Adds up to 24.2% of GCC Bank Valuations

IR Quality Is the 3rd Most Important Factor Impacting Price/Tangible Book Value of GCC Banks

The quality of investor relations can add up to 24.2 percent to a listed company's market capitalization, according to a new data science project by Iridium Advisors that uses four different Machine Learning algorithms to calculate the impact of 30 financial and non-financial valuation drivers.

Oliver Schutzmann, CEO of Iridium Advisors, said: "Many boards and management teams in emerging markets have not yet invested sufficiently in investor relations because they do not fully understand the value it adds. Against this background, we sought to take a scientific and systematic approach to show how the business value they create can be translated into market value, and thereby quantify the value of investor relations. With the insights gained from Iridium Quant Lens Machine Learning algorithms, we can now help business leaders understand what exactly drives their market value and show them how to unlock material valuation potential."

Recommended AI News: Zixi And Telstra Partner For Global Live Video Distribution

The Iridium Quant Lens machine learning (ML) platform was built on the foundations of classic finance theory: that a company's stock price is derived through an evaluation of risk relative to return factors by equity market participants. In order to identify the financial and non-financial drivers of bank valuations, four different machine learning algorithms were deployed to consider 30 risk and return metrics, compiled from over 9 million data points and covering 673 banks globally. The Quant Lens algorithms were run separately for all banks and for 65 GCC banks over different time horizons ranging from 1 to 10 years.

Iridium's algorithms proved successful in decomposing valuation drivers and, in aggregate, explained 86 percent of valuation variability for the test data set and 91 percent for the full data set. Furthermore, some individual models, such as the 3-year models for GCC banks, explained up to 95 percent of the test data set and 98 percent of the full data set.
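
To make the idea of "explaining X percent of valuation variability" concrete, here is a minimal sketch of this kind of analysis: several regression models are fit to bank risk/return metrics and the share of variance they explain (R-squared) is compared on held-out and full data. The features, data, and model choices are stand-ins, not Iridium's actual pipeline.

```python
# Illustrative only: synthetic stand-in for 30 risk/return metrics per bank.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=2000, n_features=30, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    r2_test = r2_score(y_test, model.predict(X_test))   # "test data set"
    r2_full = r2_score(y, model.predict(X))             # "full data set"
    print(f"{name}: R^2 test={r2_test:.2f}, full={r2_full:.2f}")
```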

Recommended AI News: Litmus and Oden Partner to Offer Complete IIoT Solution for Smart Manufacturing

A significant finding of this study was that the quality of investor relations based on the classification into IR-Agnostic, IR-Basic and IR-Emerging archetypes is a highly material factor consistently influencing valuations of GCC banks. In fact, for most models it was the third most important factor impacting price to tangible book value (P/TBV) and explained 6% of share price variability on average.

In addition, the impact of upgrading investor relations is significant, with each step in a two-stage upgrade path commanding a 12 percent valuation premium on average, and a complete move along the investor relations upgrade path adding 24 percent to market capitalization.

To illustrate the impact of IR Quality with real-world examples, one bank (Bank A) currently operates at an IR-Emerging level, which adds 0.16x to its P/TBV valuation. Given the bank's current market capitalization of USD 33 billion, this translates to almost USD 3 billion of its market value, or the equivalent of USD 220 million in net profits. Considering that the IR-Emerging level is typically achievable with a USD 1.0 million annual IR budget, this valuation uplift represents a compelling return on investment. The converse is true for low IR quality: Bank X currently operates at an IR-Agnostic level, which in fact subtracts 0.07x from its P/TBV valuation.
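
For readers who want to sanity-check the Bank A arithmetic, here is a back-of-the-envelope version. The total P/TBV multiple is an assumption (the article does not state it), so the figures are purely illustrative.

```python
# Rough check of the Bank A example; assumed_ptbv is a hypothetical input.
market_cap_usd_bn = 33.0      # Bank A market capitalization (from the article)
ir_uplift_ptbv = 0.16         # P/TBV added by operating at the IR-Emerging level
assumed_ptbv = 1.8            # assumption: Bank A's total P/TBV multiple

tangible_book_value_bn = market_cap_usd_bn / assumed_ptbv
ir_value_bn = ir_uplift_ptbv * tangible_book_value_bn
print(f"IR-attributable market value: ~USD {ir_value_bn:.1f} bn")  # ~USD 2.9 bn
```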

Recommended AI News: ZelaaPayAE Partners with Tron to Empower Users with a Dual Chain Solution

Go here to see the original:
Iridium Unveils the World's First ML Algorithms That Decode the Value of Investor Relations - AiThority

Machine Learning as a Service Market to Witness Astonishing Growth by 2026 | Amazon, Oracle Corporation, IBM and more – The News Brok

The report also tracks the latest Machine Learning as a Service Market dynamics, such as driving factors, restraining factors, and industry news like mergers, acquisitions, and investments. It provides market size (value and volume), market share, and growth rate by type and application, and combines both qualitative and quantitative methods to make micro and macro forecasts for different regions and countries.

Prominent players profiled in the study: Amazon, Oracle Corporation, IBM, Microsoft Corporation, Google Inc., Salesforce.Com, Tencent, Alibaba, UCloud, Baidu, Rackspace, SAP AG, Century Link Inc., CSC (Computer Science Corporation), Heroku, Clustrix, Xeround

Sample Report with Latest Industry Trends @ https://www.statsandreports.com/request-sample/314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025

Understand the Global Machine Learning as a Service market with the assistance of our expert analysts, who monitor worldwide fluctuations. This market report will answer all your queries regarding the growth of your business during the COVID-19 pandemic.

This report provides an overview of the Machine Learning as a Service market, containing global revenue, global production, sales, and CAGR. It also describes the Machine Learning as a Service product scope, market overview, market opportunities, market driving forces, and market risks. The forecast and analysis of the Machine Learning as a Service market by type, application, and region are also presented. The next part of the report provides a full-scale analysis of the Machine Learning as a Service competitive situation, and the sales, revenue and global market share of major players in the Machine Learning as a Service industry. Basic information, as well as the profiles, applications, and specifications of products and their market performance, along with a business overview, are offered.

Product Type: Private clouds, Public clouds, Hybrid cloud

Application: Personal, Business

Geographical Regions: North America, Europe, Central & South America, Asia-Pacific, and the Middle East & Africa, etc.

Get Reasonable Discount on this Premium Report @ https://www.statsandreports.com/check-discount/314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025

Machine Learning as a Service Market

Scope of the Machine Learning as a Service Report:

Worldwide Machine Learning as a Service Market 2020: market size, value CAGR (XX %) and revenue (USD Million) for the historical years (2016 to 2018) and forecast years (2020 to 2026), with SWOT analysis, industry analysis, demand, sales, market drivers, restraints, opportunities and forecast to 2026 are covered in this research report.

This report covers the current scenario and growth prospects of the Machine Learning as a Service Market for the period 2020-2026. The study is a professional and in-depth study with tables and figures, which provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the domain.

Finally, all aspects of the Global Machine Learning as a Service Market are assessed quantitatively as well as qualitatively to study the global as well as regional market comparatively. This market study presents critical information and factual data about the market, providing an overall statistical study of the market on the basis of market drivers, limitations and future prospects.

You can Buy This Report from Here: https://www.statsandreports.com/placeorder?report=314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025&type=SingleUser

Thank you for reading our report. For more information on customization, please reach out to us. Our team will ensure the report is tailored according to your needs.

About Us

Stats and Reports is a global market research and consulting service provider specialized in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis and other services. We understand how important data is in today's competitive environment and thus have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact:

Stats and Reports
Mangalam Chamber, Office No 16, Paud Road
Sankalp Society, Kothrud, Pune, Maharashtra 411038
Phone: +1 650-646-3808
Email: [emailprotected]
Website: https://www.statsandreports.com
Follow Us on: LinkedIn | Twitter

More:
Machine Learning as a Service Market to Witness Astonishing Growth by 2026 | Amazon, Oracle Corporation, IBM and more - The News Brok

Utilizing Machine Learning on Internet Search Activity to Support the Diagnostic Process and Relapse Detection in Young Individuals With Early…

Psychiatry is nearly entirely reliant on patient self-reporting, and there are few objective and reliable tests or sources of collateral information available to help diagnostic and assessment procedures. Technology offers opportunities to collect objective digital data to complement patient experience and facilitate more informed treatment decisions.

We aimed to develop computational algorithms based on internet search activity designed to support diagnostic procedures and relapse identification in individuals with schizophrenia spectrum disorders.

We extracted 32,733 time-stamped search queries across 42 participants with schizophrenia spectrum disorders and 74 healthy volunteers between the ages of 15 and 35 (mean 24.4 years, 44.0% male), and built machine-learning diagnostic and relapse classifiers utilizing the timing, frequency, and content of online search activity.

Classifiers predicted a diagnosis of schizophrenia spectrum disorders with an area under the curve value of 0.74 and predicted a psychotic relapse in individuals with schizophrenia spectrum disorders with an area under the curve of 0.71. Compared with healthy participants, those with schizophrenia spectrum disorders made fewer searches and their searches consisted of fewer words. Prior to a relapse hospitalization, participants with schizophrenia spectrum disorders were more likely to use words related to hearing, perception, and anger, and were less likely to use words related to health.

Online search activity holds promise for gathering objective and easily accessed indicators of psychiatric symptoms. Utilizing search activity as collateral behavioral health information would represent a major advancement in efforts to capitalize on objective digital data to improve mental health monitoring.

Michael Leo Birnbaum, Prathamesh Param Kulkarni, Anna Van Meter, Victor Chen, Asra F Rizvi, Elizabeth Arenare, Munmun De Choudhury, John M Kane. Originally published in JMIR Mental Health (http://mental.jmir.org), 01.09.2020.
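
The study's code is not reproduced here, but the following minimal sketch shows the general shape of such an analysis: engineered search-activity features feed a classifier whose performance is summarized as area under the ROC curve. The features and data below are placeholders, not the study's actual variables.

```python
# Placeholder features standing in for search timing/frequency/content measures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 116                                   # 42 participants + 74 healthy volunteers
X = rng.normal(size=(n, 5))               # e.g., searches/day, words/query, topic scores
y = np.concatenate([np.ones(42), np.zeros(74)])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, scores), 2))
```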

PubMed

Read more:
Utilizing Machine Learning on Internet Search Activity to Support the Diagnostic Process and Relapse Detection in Young Individuals With Early...

AI and Machine Learning Algorithms are Increasingly being Used to Identify Fraudulent Transactions, Cybersecurity Professional Explains – Crowdfund…

The retail banking sector has been hit with numerous scams during the past few years. Cybercriminals are now also beginning to increasingly go after much larger corporate accounts by launching sophisticated malware and phishing attacks, according to Beate Zwijnenberg, chief information security officer at ING Group.

Zwijnenberg recommends using advanced AI defense systems to identify potentially fraudulent transactions which may not be immediately recognizable by human analysts.

Financial institutions across the globe have been spending a lot of money to deal with serious cybersecurity threats.

They've been using static, rules-based verification processes to identify suspicious activity. They've also been using more advanced biometric authentication methods. Banks throughout the world keep looking for better or more efficient ways to ensure that their platforms remain secure, while trying to lower the costs involved with maintaining a high level of security.

Artificial intelligence (AI) and machine learning (ML) are now being used to analyze thousands of transactions in real-time. These advanced technologies allow security professionals to quickly and accurately check for potentially fraudulent activities. In many cases, cybersecurity experts are able to take action before bad actors can carry out fraudulent transactions.

As reported by PYMNTS, Amsterdam's ING Group, which manages nearly a trillion euros in assets, has been using AI/ML tech to protect its platform against attacks from cybercriminals.

Zwijnenberg told the news outlet:

"The real-time aspect of online fraud means that you need to intervene immediately because otherwise, the money is transferred and it's gone for good. So, the real-time element [of artificial intelligence] is quite important."

She added:

"Fraudsters are after the data or the money, but until recently, the techniques had not changed. If you have a traditional bank branch, they try to get into the safe and physically get the money out, and for digital banks, it's not much different. It is only the modus operandi that has changed."

Zwijnenberg revealed that cybercriminals are increasingly targeting wholesale banking and are consistently applying the same phishing techniques to different types of customers. She confirmed that phishing scams are the most common in both business banking and wholesale banking. Identity theft has also become a major problem, Zwijnenberg noted.

She explained that using machine learning algorithms is a good idea when the amount of data keeps growing over time. She added that it's like finding the needle in the haystack, and you benefit from applying AI and ML to make sure that you really only look into the specific areas that call for it.
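
As a hedged illustration of that "needle in the haystack" idea, the sketch below uses a generic unsupervised anomaly detector to flag a small fraction of transactions for human review. This is not ING's system; the feature set and thresholds are assumptions.

```python
# Columns: amount, hour of day, transactions in last 24h, new-beneficiary flag.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[50, 14, 3, 0], scale=[30, 4, 2, 0.1], size=(10000, 4))
fraud = rng.normal(loc=[900, 3, 15, 1], scale=[200, 1, 3, 0.1], size=(20, 4))
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.005, random_state=0).fit(X)
flags = detector.predict(X)                       # -1 = anomalous, 1 = normal
print("Transactions flagged for review:", int((flags == -1).sum()))
```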

Banks and government offices were recently targeted by malware (a P2P botnet) which covertly mined the privacy-oriented cryptocurrency Monero (XMR) by hogging the computing resources of targeted computers.

Cyberattacks in the UK and the US have increased as more consumers and businesses conduct financial transactions online.

Last month, over 300,000 potentially fraudulent sites with fake celebrity endorsements were identified by the UK's National Cyber Security Centre, with half of them being related to cryptocurrency.

Read the original post:
AI and Machine Learning Algorithms are Increasingly being Used to Identify Fraudulent Transactions, Cybersecurity Professional Explains - Crowdfund...

Machine Learning AI Casts Henry Cavill as the Next James Bond – Screen Rant

After considering a lengthy list of actors, the first ever AI-assisted casting process determined Henry Cavill is the best pick for James Bond.

The first ever AI-assisted casting process has determined Henry Cavill should be the next James Bond. Daniel Craig has played the iconic spy since 2006's Casino Royale and has since put in five performances in total. His final outing as 007, No Time to Die, was scheduled for release in April, but the coronavirus pandemic delayed it. Currently, No Time to Die is slated for November 20, and a new trailer will arrive online tomorrow. The film will pick up with Bond after he's left active service, though a request from an old friend brings him back into the fray.

Beyond No Time to Die, the biggest question on fans' minds is: Who will be the next Bond? As the James Bond franchise has existed for decades, it's inevitable that a new actor will be brought in for the next generation of films. However, it remains to be seen who will take the reins from Craig. There have been a number of names thrown around by fans, from Idris Elba to Richard Madden. One thing is certain though: The next Bond won't be a woman, as producers said earlier this year.

Related:All 8 Actors Who Have Played James Bond In A Movie

If AI casting had its way, however, Cavill would be James Bond. In a new study conducted by Largo.ai, AI software was used to compare an actor's attributes and Bond's attributes in order to best assess which performer would earn the most positive audience reactions. When it comes to British actors, Cavill won with a score of 92.3%, followed by Richard Armitage (The Hobbit films, 92%) and Elba (90.9%).

When expanding the study to international actors, The Boys star Karl Urban topped the list with a whopping 96.7%, which puts him firmly ahead of Cavill. Right behind Urban were Chris Evans (93.9%) and Will Smith (92.2%). For the sake of exploring all options, the study also considered actresses for a female Bond, with The Mandalorian's Gina Carano coming in at 97.3%, ahead of both Cavill and Urban. She was followed by Katee Sackhoff (94.4%) and Angelina Jolie (94.2%).

Interestingly enough, Cavill very nearly became Bond back in 2005. He and Craig were the final two contenders, with the role obviously going to Craig. Rumors even spread back in 2018 that Cavill was once again in consideration for the role, and there's definitely an argument to be made that he would still be an excellent pick. However, as he's currently starring in Netflix's The Witcher and might be reviving his Superman role, he could be too busy to take on another iconic part. As No Time to Die has yet to be released, it might be a while before the next James Bond is revealed, but as this study shows, there are a lot of viable options.

Original post:
Machine Learning AI Casts Henry Cavill as the Next James Bond - Screen Rant

Venga Global expands data annotation, collection, and validation for AI and Machine Learning services – Benzinga

SAN FRANCISCO, Sept. 2, 2020 /PRNewswire-PRWeb/ -- Venga Global, a global leader in translation and localization, has launched "Venga AI" to meet growing data transformation and machine learning needs.

"We started offering data services in 2016 focused around natural language processing and data translation," says Antoine Rey, CSMO at Venga. "We have learned, adapted, and developed technology with great success to bring quality clean data to top AI and data companies. We are excited to now publicly offer our expanded roster of services including data annotation and validation for text, image, video, and audio."

The need for clean data to feed into machine learning algorithms has grown exponentially over the past few years with applications in sectors ranging from medical diagnostics to autonomous vehicles, to voice search.

As the world moves towards more localized approaches, the need for clean data in a variety of languages other than English climbs. Venga has its roots in the translation industry with resources all over the world so it is a natural step to provide data services leveraging those local connections. Whether in English or another language, culture and sentiment are expressed differently depending on location so having trained people in location creates the most accurate data sets.

"Clients continuously recognize Venga for delivering quality at scale - even for low-resource languages," says Chris Phillips, COO at Venga. " Our ability to ramp up from zero to thousands of trained resources in very short time periods has proven key to our success. We achieve this through stringent vetting, testing, and training of quality resources and optimize our technology stack project by project to create efficient and controlled NLP data collection."

Venga will be exhibiting at the TechXLR8 & The Virtual AI Summit London on September 2-3.

About Venga

With expertise in Natural Language Processing (NLP), Venga builds custom programs for enterprise clients to provide human-assisted clean data collection, annotation, and validation for machine learning. These programs are supported by an agile production team, innovative tools and technology, a specialized supply chain, and an ISO-certified quality assurance team.

Venga is committed to continuous improvement and to supporting our clients' accelerated growth and localization maturity.

To learn more about Venga AI, visit our website at https://venga.ai

SOURCE Venga Global

Follow this link:
Venga Global expands data annotation, collection, and validation for AI and Machine Learning services - Benzinga

Global machine learning market is expected to grow with a healthy CAGR over the forecast period from 2020-2026 – GlobeNewswire

New York, Aug. 28, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Machine Learning Market: Global Industry Analysis, Trends, Market Size, and Forecasts up to 2026" - https://www.reportlinker.com/p05751673/?utm_source=GNW

The study on the machine learning market covers the analysis of the leading geographies such as North America, Europe, Asia-Pacific, and RoW for the period of 2018 to 2026.

The report on machine learning market is a comprehensive study and presentation of drivers, restraints, opportunities, demand factors, market size, forecasts, and trends in the global machine learning market over the period of 2018 to 2026. Moreover, the report is a collective presentation of primary and secondary research findings.

Porter's five forces model in the report provides insights into the competitive rivalry, supplier and buyer positions in the market and opportunities for the new entrants in the global machine learning market over the period of 2018 to 2026. Further, the IGR-Growth Matrix given in the report brings an insight into the investment areas that existing or new market players can consider.

Report Findings
1) Drivers: The increasing adoption of cloud-based services and the upsurge in unstructured data lead to growing demand for machine learning solutions; there is a growing need to improve computing power and declining hardware costs, owing to machine learning algorithms' capability to run or execute faster.
2) Restraints: The absence of technical expertise is anticipated to restrain the machine learning market.
3) Opportunities: The increasing rate of adoption of IoT and automation systems in industries is projected to drive growth.

Research Methodology

A) Primary Research
Our primary research involves extensive interviews and analysis of the opinions provided by the primary respondents. The primary research starts with identifying and approaching the primary respondents; those approached include:
1. Key Opinion Leaders
2. Internal and external subject matter experts
3. Professionals and participants from the industry

Our primary research respondents typically include:
1. Executives working with leading companies in the market under review
2. Product/brand/marketing managers
3. CXO-level executives
4. Regional/zonal/country managers
5. Vice President-level executives

B) Secondary Research
Secondary research involves extensive exploration of the secondary sources of information available in both the public domain and paid sources. Each research study is based on over 500 hours of secondary research accompanied by primary research. The information obtained through the secondary sources is validated through cross-checks against various data sources.

The secondary sources of data typically include:
1. Company reports and publications
2. Government/institutional publications
3. Trade and association journals
4. Databases such as WTO, OECD, and World Bank, among others
5. Websites and publications by research agencies

Segments Covered: The global machine learning market is segmented on the basis of component, enterprise size, service, deployment model, and end-user.

Global Machine Learning Market by Component: Hardware, Software, Services

Global Machine Learning Market by Enterprise Size: Large Enterprises, SMEs

Global Machine Learning Market by Service: Professional Services, Managed Services

Global Machine Learning Market by Deployment Model: Cloud, On-premises

Global Machine Learning Market by End-user: Healthcare, BFSI, Government and Defense, Retail, Advertising & Media, Automotive & Transportation, Agriculture, Others

Company Profiles: Amazon Web Services, Inc., Baidu Inc., Google Inc., RapidMiner, Inc., Intel Corporation, International Business Machines Corporation, Hewlett Packard Enterprise Development LP, Microsoft Corporation, SAS Institute Inc., SAP SE

What Does This Report Deliver?
1. Comprehensive analysis of the global as well as regional markets of the machine learning market.
2. Complete coverage of all the segments in the machine learning market to analyze the trends and developments in the global market, and a forecast of market size up to 2026.
3. Comprehensive analysis of the companies operating in the global machine learning market. The company profile includes analysis of product portfolio, revenue, SWOT analysis and latest developments of the company.
4. The IGR-Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand and/or diversify.
Read the full report: https://www.reportlinker.com/p05751673/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________

Originally posted here:
Global machine learning market is expected to grow with a healthy CAGR over the forecast period from 2020-2026 - GlobeNewswire

How to Measure the Performance of Your AI/Machine Learning Platform? – Analytics Insight

With each passing day, new technologies are emerging across the world. They are not just bringing innovation to industries but also radically transforming entire societies. Be it artificial intelligence, machine learning, the Internet of Things, or the cloud, all of these have found a plethora of applications in the world and are implemented through their specialized platforms. Organizations choose a suitable platform that has the power to uncover the complete benefits of the respective technology and obtain the desired results.

But choosing a platform isn't as easy as it seems. It has to be of high caliber, fast, independent, etc. In other words, it should be worth your investment. Let's say that you want to know the performance of a CPU in comparison to others. It's easy, because you know you have PassMark for the job. Similarly, when you want to check the performance of a graphics processing unit, you have Unigine's Superposition. But when it comes to machine learning, how do you figure out how fast a platform is? Alternatively, as an organization, if you have to invest in a single machine learning platform, how do you decide which one is the best?

For a long time, there has been no benchmark to decide the worthiness of machine learning platforms. Put differently, the artificial intelligence and machine learning industry has lacked reliable, transparent, standard, and vendor-neutral benchmarks that help in flagging performance differences between the different parameters used for handling a workload. Some of these parameters include hardware, software, algorithms, and cloud configurations, among others.

Even though it has never been a roadblock when designing applications, the choice of platform determines the efficiency of the ultimate product in one way or the other. Technologies like artificial intelligence and machine learning are growing to be extremely resource-sensitive as research progresses. For this reason, practitioners of AI and ML are seeking the fastest, most scalable, power-efficient, and low-cost hardware and software platforms to run their workloads.

This need has emerged because machine learning is moving towards a workload-optimized structure. As a result, there is a greater need than ever for standard benchmarking tools that help machine learning developers assess and analyze the target environments best suited for the required job. Not just developers but enterprise information technology professionals also need a benchmarking tool for a specific training or inference job. Andrew Ng, CEO of Landing AI, points out that there is no doubt that AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software. Therefore, unless we have something to measure the efficiency of hardware and software specifically for the needs of ML, there is no way that we can design more advanced ones for our requirements.

David Patterson, author of Computer Architecture: A Quantitative Approach, highlights the fact that good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate. Having said this, the need for a standard benchmarking tool for ML is greater than ever.

To solve the underlying problem of an unbiased benchmarking tool, machine learning expert David Kanter, along with scientists and engineers from reputed organizations such as Google, Intel, and Microsoft, has come up with a new solution. Welcome MLPerf: a machine learning benchmark suite that measures how fast a system can perform ML inference using a trained model.

Measuring the speed of a machine learning problem is already a complex task, and it becomes even more tangled the longer it is observed, simply because of the varying nature of problem sets and architectures in machine learning services. Having said this, MLPerf measures the accuracy of a platform in addition to its performance. It is intended for the widest range of systems, from mobile devices to servers.

Training is the process in machine learning where a network is fed with large datasets and let loose to find any underlying patterns in them. The more data, the greater the efficiency of the system. It is called training because the network learns from the datasets and trains itself to recognize a particular pattern. For example, Gmail's Smart Reply is trained on 238,000,000 sample emails. Similarly, Google Translate is trained on a trillion datasets. This makes the computational cost of training quite expensive. Systems that are designed for training have large and powerful hardware, since their job is to chew up the data as fast as possible. Once the system is trained, the output received from it is called the inference.

Therefore, performance certainly matters when running inference workloads. On the one hand, the training phase demands as many operations per second as possible, without much concern for latency. On the other hand, latency is a big issue during inference, since a human is waiting on the other end to receive the results of the inference query.
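
The distinction between throughput-oriented and latency-oriented measurement can be illustrated with a toy timing harness like the one below. This is not MLPerf itself; the "model" is a stand-in function and the batch sizes are arbitrary.

```python
# Throughput (samples/second) vs. per-query latency, using a fake model.
import time
import statistics

def model_infer(batch):
    time.sleep(0.002 * len(batch))     # pretend each sample costs ~2 ms
    return [0] * len(batch)

# Throughput: push large batches as fast as possible.
start = time.perf_counter()
for _ in range(20):
    model_infer(list(range(64)))
throughput = 20 * 64 / (time.perf_counter() - start)

# Latency: single queries, reported as a high percentile, as benchmarks typically do.
latencies = []
for _ in range(50):
    t0 = time.perf_counter()
    model_infer([0])
    latencies.append((time.perf_counter() - t0) * 1000)

p90 = statistics.quantiles(latencies, n=10)[-1]
print(f"throughput ~{throughput:.0f} samples/s, p90 latency ~{p90:.1f} ms")
```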

Due to the complex nature of the architectures and metrics, MLPerf does not produce a single perfect score. Because MLPerf is valid across a range of workloads and very different architectures, one cannot make assumptions about a single headline score as in the case of CPUs or GPUs. In MLPerf, scores are broken down into training workloads and inference workloads before being divided into tasks, models, datasets, and scenarios. The result obtained from MLPerf is not a single score but a wide spreadsheet. Each task is measured under the following four parameters:

Finally, MLPerf separates the benchmark into Open and Closed divisions, with stricter requirements for the Closed division. Similarly, the hardware for an ML workload is also separated into categories such as Available, Preview, Research, Development, and Others. All these factors give ML experts and practitioners an idea of how close a given system is to real production.

Read the original post:
How to Measure the Performance of Your AI/Machine Learning Platform? - Analytics Insight

Another Dimension of Apple’s Eye Tracking Technology reveals the use of Biometrics and Machine Learning – Patently Apple

Today the US Patent & Trademark Office published Apple's fourth patent application relating to their eye tracking system for 2020 alone. The patent relates to yet another dimension of their advanced eye tracking/eye gazing technology for their future Head Mounted Display (HMD) device. The other three patents covering this technology could be reviewed here: 01, 02 and 03. Today's patent introduces us to how an eye tracking system is able to obtain biometrics of a user using event camera data and then adjust the brightness of the imagery generated onto the HMD display and more.

Apple's invention covers a head-mounted device that includes an eye tracking system which determines the gaze direction of the user of the head-mounted device. The eye tracking system often includes a camera that transmits images of the eyes of the user to a processor that performs eye tracking. Transmission of the images at a sufficient frame rate to enable eye tracking requires a communication link with substantial bandwidth.

Various implementations include devices, systems, and methods for determining an eye tracking characteristic using intensity-modulated light sources. The method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data.

Apple's eye tracking system using intensity-modulated light sources uses machine learning. This system can perform some unique functions. For instance, in one case, the one or more light sources modulate the intensity of emitted light according to user biometrics.

For instance, if the user is blinking more than normal, has an elevated heart rate, or is registered as a child, the one or more light sources decreases the intensity of the emitted light (or the total intensity of all light emitted by the plurality of light sources) to reduce stress upon the eye.

As another example, the one or more light sources modulate the intensity of emitted light based on an eye color of the user, as spectral reflectivity may differ for blue eyes as compared to brown eyes.

In various implementations, eye tracking, or particularly a determined gaze direction, is used to enable user interaction such as allowing the user to gaze at a pop-up menu on the HMD display and then choose one specific option on that menu by simply gauging the user's gaze position in order to perform an action.

Apple's patent FIG. 1 below is a block diagram of an example operating environment #100 wherein the controller (#110) is configured to manage and coordinate an augmented reality/virtual reality (AR/VR) experience for the user.

Apple's patent FIG. 4 illustrates a block diagram of a head-mounted device (#400). The housing (#401) also houses an eye tracking system including one or more light sources #422, a camera 424, and a controller 480. The one or more light sources 422 emit light onto the eye of the user 10 that reflects as a light pattern (e.g., a circle of glints) that can be detected by the camera 424. Based on the light pattern, the controller 480 can determine an eye tracking characteristic of the user 10. For example, the controller 480 can determine a gaze direction and/or a blinking state (eyes open or eyes closed) of the user 10. As another example, the controller 480 can determine a pupil center, a pupil size, or a point of regard. Thus, in various implementations, the light is emitted by the one or more light sources 422, reflects off the eye of the user 10, and is detected by the camera 424. In various implementations, the light from the eye of the user 10 is reflected off a hot mirror or passed through an eyepiece before reaching the camera 424.

In patent FIG. 5A above we see an eye of a user having a first gaze direction; FIG. 5B illustrates the eye of the user having a second gaze direction.

In various implementations, the one or more light sources emit light towards the eye of the user, which reflects in the form of a plurality of glints that form a pattern. Based on the reflected pattern (and, potentially, other features, such as the pupil size, pupil shape, and pupil center), an eye tracking characteristic of the user can be determined.

The eye includes a pupil surrounded by an iris, both covered by a cornea. The eye also includes a sclera (also known as the white of the eye).

Apple's patent FIG. 9A below illustrates a functional block diagram of an eye tracking system (#900) including an event camera (#910). The eye tracking system outputs a gaze direction of a user based on event messages received from the event camera.

The geometric analyzer #970 receives data regarding detected glints from the glint detector (#940) and data regarding the pupil of the eye of the user from the pupil detector (#960). Based on this received information, the geometric analyzer determines an eye tracking characteristic of a user, such as a gaze direction and/or a blinking state of the user.
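
Apple does not publish runnable details of the geometric analyzer, but the general pupil-versus-glint idea can be sketched as follows. The calibration mapping and all numbers are hypothetical, for illustration only.

```python
# Highly simplified, hypothetical gaze estimate from glint positions and pupil center.
import numpy as np

def estimate_gaze(glints_px, pupil_center_px, calib_matrix, calib_offset):
    """Map the pupil-to-glint-centroid offset to gaze angles via an assumed
    per-user affine calibration (not Apple's actual method)."""
    glint_centroid = np.mean(glints_px, axis=0)
    offset = pupil_center_px - glint_centroid          # vector in image coordinates
    return calib_matrix @ offset + calib_offset        # gaze angles (yaw, pitch)

glints = np.array([[310, 242], [330, 240], [332, 260], [312, 262]], float)  # 4 glints
pupil = np.array([316.0, 255.0])
A = np.array([[0.08, 0.0], [0.0, 0.08]])   # assumed calibration gain
b = np.zeros(2)
print("gaze (yaw, pitch) in degrees:", estimate_gaze(glints, pupil, A, b))
```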

Apple's patent FIG. 9B below illustrates a functional block diagram of an eye tracking system (#902) including a machine-learning regressor (#980). Here the glint detector (#940), pupil detector (#960), and geometric analyzer (#970) use a machine-learning regressor that determines the eye tracking characteristic based on the target feature and the off-target feature.

Lastly, in patent FIG. 9C below, we're able to see a functional block diagram of an eye tracking system (#904) including a gaze estimator (#990). The eye tracking system here includes an event camera (#910). The event messages are fed into a probability tagger (#925) that tags each event message with a probability that the event message is a target-frequency event message.

The probability-tagged event messages are fed into a feature generator (#935) that generates one or more features that are fed into a gaze estimator (#990) that determines an eye tracking characteristic (e.g., a gaze direction) based on the one or more features.

Apple's patent application 20200278539, published today by the U.S. Patent Office, is shown as having been filed back in Q1 2020, though the patent shows that some of the work dates back to a 2017 filing that has been incorporated into this latest filing.

Considering that this is a patent application, the timing of such a product to market is unknown at this time.

Apple Inventors

Daniel Kurz: Senior Engineering Manager (Computer Vision, Machine Learning) who came to Apple via the acquisition of Metaio. Some of the earlier work on this patent likely came from the Metaio acquisition and was revised with Apple team members.

Li Jia: Computer Vision and Machine Learning Engineering Manager who resides in Beijing, China. Jia leads a team to develop CVML algorithms for mobile camera applications. Jia also organizes collaboration with Tsinghua University on research projects on computer vision and machine learning.

Raffi Bedikian: Computer Vision Engineer. He worked 5 years over at Leap Motion.

Branko Petljanski: Engineering Manager, Incubation (Cameras)

Read this article:
Another Dimension of Apple's Eye Tracking Technology reveals the use of Biometrics and Machine Learning - Patently Apple

13 Algorithms and 4 Learning Methods of Machine Learning – TechBullion

Algorithms can be classified according to similarities in their function and form, for example tree-based algorithms, neural network-based algorithms, and so on. Of course, the scope of machine learning is very large, and some algorithms are difficult to classify clearly into any one category.

Regression algorithms are a class of algorithms that try to explore the relationship between variables using a measure of error. Regression is a powerful tool for statistical machine learning. In the field of machine learning, when people talk about regression, they sometimes mean a type of problem and sometimes a type of algorithm, which often confuses beginners.

Common regression algorithms include: Ordinary Least Squares, Logistic Regression, Stepwise Regression, Multivariate Adaptive Regression Splines, and Locally Estimated Scatterplot Smoothing (LOESS).
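
A minimal scikit-learn example of two of the algorithms named above, on synthetic data, is shown below; it is only meant to illustrate the API shape.

```python
# Ordinary least squares and logistic regression on a toy dataset.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.random.RandomState(0).normal(size=(200, 3))
y_continuous = X @ np.array([1.5, -2.0, 0.5]) + 0.1      # regression target
y_binary = (y_continuous > 0).astype(int)                 # classification target

print(LinearRegression().fit(X, y_continuous).coef_)      # ordinary least squares
print(LogisticRegression().fit(X, y_binary).coef_)        # logistic regression
```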

Regularization methods are extensions of other algorithms (usually regression algorithms) that adjust the model according to its complexity. They usually reward simple models and penalize complex ones.

Common algorithms include: Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), and Elastic Net.
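
A short sketch of how these three penalties behave on a toy problem (only the first feature is informative) follows; note how the L1-based penalties drive most coefficients to zero.

```python
# Ridge, LASSO, and Elastic Net on synthetic data with one informative feature.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=100)   # only the first feature matters

for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X, y)
    nonzero = int(np.sum(np.abs(model.coef_) > 1e-6))
    print(type(model).__name__, "non-zero coefficients:", nonzero)
```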

The decision tree algorithm uses a tree structure to establish a decision model based on the attributes of the data. The decision tree model is often used to solve classification and regression problems.

Common algorithms include: Classification and Regression Tree (CART), ID3 (Iterative Dichotomiser 3), C4.5, Chi-squared Automatic Interaction Detection (CHAID), Decision Stump, Random Forest, Multivariate Adaptive Regression Splines (MARS), and Gradient Boosting Machine (GBM).
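
A small CART-style example using scikit-learn is shown below; the printed output is the learned tree of attribute tests.

```python
# Fit a shallow decision tree and print its structure.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))          # the learned attribute tests, one branch per line
```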

Case-based algorithms are often used to model decision-making problems. Such models often first select a batch of sample data, and then compare the new data with the sample data based on some similarity. In this way, the best match is found. Therefore, instance-based algorithms are often referred to as winner-takes-all learning or memory-based learning.

Common algorithms include k-Nearest Neighbor (KNN), Learning Vector Quantization (LVQ), and Self-Organizing Map (SOM).
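
The memory-based behaviour of k-nearest neighbors can be seen in a few lines; the dataset and parameters below are illustrative.

```python
# k-NN stores the training samples and classifies new points by similarity to them.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("held-out accuracy:", knn.score(X_test, y_test))
```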

Bayesian algorithms are a family of algorithms based on Bayes' theorem, mainly used to solve classification and regression problems.

Common algorithms include: Naive Bayes algorithm, Averaged One-Dependence Estimators (AODE), and Bayesian Belief Network (BBN).

Clustering, like regression, sometimes describes a type of problem and sometimes a type of algorithm. Clustering algorithms usually merge the input data around central points or in a hierarchical manner. All clustering algorithms try to find the internal structure of the data in order to group it according to its greatest common features.

Common clustering algorithms include k-Means algorithm and Expectation Maximization (EM).
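
A minimal k-Means example on two synthetic blobs follows; the chosen number of clusters is a user decision.

```python
# k-Means groups the inputs around learned centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(loc=0, size=(100, 2)), rng.normal(loc=5, size=(100, 2))])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("centroids:\n", km.cluster_centers_)
```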

Like clustering algorithms, dimensionality reduction algorithms try to analyze the internal structure of the data, but dimensionality reduction algorithms try to use less information to summarize or interpret data in an unsupervised learning manner. This type of algorithm can be used to visualize high-dimensional data or to simplify data for supervised learning.

Common algorithms include: Principal Component Analysis (PCA), Partial Least Squares Regression (PLS), Sammon mapping, Multidimensional Scaling (MDS), and Projection Pursuit, among others.
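
As a brief example of dimensionality reduction, the sketch below projects 64-dimensional digit images onto two principal components and reports the variance retained.

```python
# PCA keeps as much variance as possible in fewer dimensions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)          # 64-dimensional images
pca = PCA(n_components=2).fit(X)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())
```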

Association rule learning finds useful association rules in large multivariate data sets by identifying the rules that best explain the relationships between data variables.

Common algorithms include Apriori algorithm and Eclat algorithm.

The genetic algorithm simulates the mutation, exchange and Darwinian natural selection of biological reproduction (the survival of the fittest in every ecological environment).

It encodes the possible solutions of the problem as vectors, called individuals; each element of the vector is called a gene. An objective function (corresponding to the natural selection criterion) is used to evaluate each individual in the population (a collection of individuals).

According to the evaluation value (fitness), genetic operations such as selection, exchange and mutation are performed on individuals to obtain a new population.

Genetic algorithms are suitable for very complex and difficult environments, for example where there is a lot of noise and irrelevant data, where conditions are constantly changing, where problem goals cannot be clearly and accurately defined, and where the value of current behavior can only be determined through a long execution process.
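
The loop described above (encode, evaluate fitness, select, cross over, mutate) can be written in a few dozen lines of plain Python; the objective function below is arbitrary and chosen only for illustration.

```python
# Bare-bones genetic algorithm: individuals are vectors of genes, fitness is an
# objective function, and selection/crossover/mutation produce the next population.
import random

def fitness(ind):                       # objective: maximize -sum(x^2), optimum at 0
    return -sum(x * x for x in ind)

def evolve(pop_size=50, genes=5, generations=100):
    pop = [[random.uniform(-5, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                     # crossover (exchange)
            if random.random() < 0.2:                     # mutation
                child[random.randrange(genes)] += random.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # approaches the all-zero vector
```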

Artificial neural network algorithms simulate biological neural networks and are a type of pattern matching algorithm, usually used to solve classification and regression problems. Artificial neural networks are a huge branch of machine learning, with hundreds of different algorithms.

(Deep learning is one of them, and we discuss it separately.) Important artificial neural network algorithms include: the Perceptron Neural Network, Back Propagation, the Hopfield Network, and the Self-Organizing Map (SOM).

Deep learning algorithms are a further development of artificial neural networks. They have won a lot of attention recently, especially after Baidu also began to invest heavily in deep learning. With computing power becoming increasingly cheap today, deep learning tries to build much larger and more complex neural networks.

Many deep learning algorithms are semi-supervised learning algorithms, used to process large data sets in which only a small amount of the data is labeled.

Common deep learning algorithms include: the Restricted Boltzmann Machine (RBM), Deep Belief Networks (DBN), Convolutional Networks, and Stacked Auto-encoders.

The most famous of kernel-based algorithms is the support vector machine (SVM). The kernel-based algorithm maps the input data to a high-order vector space. In these high-order vector spaces, some classification or regression problems can be solved more easily.

Common kernel-based algorithms include: the Support Vector Machine (SVM), Radial Basis Function (RBF), and Linear Discriminant Analysis (LDA), among others.
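
The effect of the kernel trick can be seen on a toy problem whose true decision boundary is a circle: a linear kernel struggles while an RBF kernel fits it easily. The data below is synthetic.

```python
# Linear vs. RBF kernel SVM on circularly separable data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)   # a circular decision boundary

print("linear kernel accuracy:", SVC(kernel="linear").fit(X, y).score(X, y))
print("RBF kernel accuracy:   ", SVC(kernel="rbf").fit(X, y).score(X, y))
```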

Ensemble algorithms train several relatively weak learning models independently on the same samples and then integrate their results to make an overall prediction. The main difficulty with ensemble methods is deciding which weaker models to integrate and how to combine their results. This is a very powerful and very popular class of algorithms.

Common algorithms include: Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (Blending), Gradient Boosting Machine (GBM), Random Forest, and GBDT (Gradient Boosting Decision Tree).
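
A short comparison of three ensemble strategies on synthetic data follows; the models and parameters are scikit-learn defaults, chosen only to illustrate the idea.

```python
# Bagging and two boosting variants, each combining many weak tree learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
for model in (BaggingClassifier(random_state=0),
              AdaBoostClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(type(model).__name__, round(score, 3))
```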

There are many machine learning algorithms, and people are often confused: many algorithms belong to a broader family, and some are extensions of other algorithms. Here, we introduce them from two aspects: the first is the way of learning, and the second is the classification of algorithms.

Under supervised learning, the input data is called training data, and each set of training data has a clear identification or result, such as spam and non-spam in the anti-spam system, and recognition of handwritten numbers 1, 2, 3, 4 and so on.

When building a predictive model, supervised learning establishes a learning process that compares the predictive results with the actual results of the training data, and continuously adjusts the predictive model until the predictive result of the model reaches an expected accuracy rate.

Common application scenarios of supervised learning are classification problems and regression problems. Common algorithms are Logistic Regression and Back Propagation Neural Network.

In this learning mode, the input data is used as feedback to the model. Unlike in the supervised model, where the input data is only used to check whether the model is right or wrong, under reinforcement learning the input data is fed directly back to the model, which must make adjustments immediately.

Common application scenarios include dynamic systems and robot control. Common algorithms include Q-Learning and Temporal difference learning.

In unsupervised learning, the data is not specifically identified, and the learning model infers some internal structure of the data. Popular application scenarios involve association rules and clustering. Common algorithms include the Apriori algorithm and the k-Means algorithm.

In this learning mode, part of the input data is identified and part is not. This learning model can be used to make predictions, but the model first needs to learn the internal structure of the data in order to organize the data reasonably to make predictions.

Application scenarios include classification and regression. Algorithms include extensions of commonly used supervised learning algorithms: these algorithms first try to model the unidentified data, and then predict the identified data on that basis. Examples include graph inference algorithms and the Laplacian support vector machine (Laplacian SVM).

Read this article:
13 Algorithms and 4 Learning Methods of Machine Learning - TechBullion

Quantiphi Renews the ML Partner Specialization in the Google Cloud – AiThority

Google Cloud Recognizes Quantiphi's Technical Proficiency and Proven Success In Machine Learning

Quantiphi, an applied artificial intelligence and data science software and services company, announced that it has successfully renewed its specialization status in Machine Learning for the third time as part of Google Cloud's Partner Advantage Program. By renewing the Partner Specialization, Quantiphi has proven its expertise and success in building customer solutions in the Machine Learning field using Google Cloud technology.

Specializations in the Google Cloud Partner Advantage Program are designed to provide Google Cloud customers with qualified partners that have demonstrated technical proficiency and proven success in specialized solution and service areas.

Recommended AI News: Ribbons 5G Perspectives Highlights New Revenue Opportunities for Service Providers

Partners achieving this specialization have demonstrated success with data exploration, preprocessing, model training, model evaluation, model deployment, online prediction, and Google Cloud pre-trained Machine Learning APIs.

"Artificial intelligence and machine learning have become essential building blocks of digital transformation, and Google Cloud has built an impressive set of tools to democratize access to these technologies," said Asif Hasan, Co-founder, Quantiphi. "Recent business conditions have accelerated digital adoption trends in unprecedented ways, and Quantiphi is committed to combining our industry expertise with the power of Google Cloud to help clients accelerate their digital transformation programs."

Recommended AI News: CognitiveScale Receives Double Industry Recognition for Its Trust as a Service AI Solution

As a Premier Google Cloud Services Partner and one of the first Machine Learning Specialization launch partners in 2017, Quantiphi has previously earned Google Cloud's Machine Learning Partner of the Year award twice in a row, for 2017 and 2018. Quantiphi was also recently awarded Google Cloud Social Impact Partner of the Year 2019 for leveraging AI for social good.

In the year 2020 alone, Quantiphi successfully completed over 70 machine learning projects for customers across industries, including Retail, Healthcare, Insurance, Financial Services, Education and the Public Sector. Quantiphi has delivered excellent customer experiences by leveraging Google Cloud to help create competitive and compliant solutions in rapidly changing global markets with powerful, scalable technology, developing end-to-end machine learning platforms with AI building blocks, templates and services to build, train, serve, and manage models on Google Cloud.

Recommended AI News: Genesys Names New CFO to Drive Next Phase of Rapid Growth

Read this article:
Quantiphi Renews the ML Partner Specialization in the Google Cloud - AiThority

Machine Learning Artificial intelligence Market 2020 | Know the Latest COVID19 Impact Analysis And Strategies of Key Players: AIBrain, Amazon, Anki,…

Machine Learning Artificial intelligence Market report analyses the market potential for each geographical region based on the growth rate, macroeconomic parameters, consumer buying patterns, and market demand and supply scenarios. The report covers the present scenario and the growth prospects of the global Machine Learning Artificial intelligence market for 2020-2025.

The Machine Learning Artificial intelligence Market Report further describes detailed information about tactics and strategies used by leading key companies in the Machine Learning Artificial intelligence industry. It also gives an extensive study of different market segments and regions.

Request For Exclusive Sample PDF along with a few company profiles: https://inforgrowth.com/sample-request/6231151/machine-learning-artificial-intelligence-market

The Top players are

Market Segmentation:

By Product Type:

On the basis of the end users/applications,

Get a Chance of 20% Extra Discount, If your Company is Listed in the Above Key Players List: https://inforgrowth.com/discount/6231151/machine-learning-artificial-intelligence-market

Impact of COVID-19:

Machine Learning Artificial intelligence Market report analyses the impact of Coronavirus (COVID-19) on the Machine Learning Artificial intelligence industry. Since the COVID-19 outbreak in December 2019, the disease has spread to 180+ countries around the globe, with the World Health Organization declaring it a public health emergency. The global impacts of the coronavirus disease 2019 (COVID-19) are already starting to be felt, and will significantly affect the Machine Learning Artificial intelligence market in 2020.

The outbreak of COVID-19 has brought effects on many aspects, like flight cancellations; travel bans and quarantines; restaurants closed; all indoor events restricted; emergencies declared in many countries; massive slowing of the supply chain; stock market unpredictability; falling business confidence; growing panic among the population; and uncertainty about the future.

COVID-19 can affect the global economy in 3 main ways: by directly affecting production and demand, by creating supply chain and market disturbance, and by its financial impact on firms and financial markets.

Get Sample ToC to understand the CORONA Virus/COVID19 impact and be smart in redefining business strategies. https://inforgrowth.com/CovidImpact-Request/6231151/machine-learning-artificial-intelligence-market

Reasons to Get this Report:

Study on Table of Contents:

ENQUIRE MORE ABOUT THIS REPORT AT https://inforgrowth.com/enquiry/6231151/machine-learning-artificial-intelligence-market

FOR ALL YOUR RESEARCH NEEDS, REACH OUT TO US AT:
Address: 6400 Village Pkwy suite # 104, Dublin, CA 94568, USA
Contact Name: Rohan S.
Email: [emailprotected]
Phone: +1-909-329-2808
UK: +44 (203) 743 1898
Website:

Originally posted here:
Machine Learning Artificial intelligence Market 2020 | Know the Latest COVID19 Impact Analysis And Strategies of Key Players: AIBrain, Amazon, Anki,...

The factors that’ll make or break your relationship, according to AI – World Economic Forum

Swipe left? Or swipe right? AI might have the answer.

The reasons that some relationships blossom while others fail could be less to do with the people involved and more about the connection they build with each other, data from more than 11,000 couples indicates.

Scientists using machine learning have found that the characteristics of a relationship might be a far greater predictor of couples' satisfaction than their own or their partners' personalities.

"Really, it suggests that the person we choose is not nearly as important as the relationship we build The dynamic that you build with someone the shared norms, the in-jokes, the shared experiences is so much more than the separate individuals who make up that relationship, said Samatha Joel, the study author and director of the Relationship Decisions Lab at Canadas Western University.

The recipe for relationship success

The study looked at data from thousands of romantic relationships, grouping together characteristics of the relationship itself and individual characteristics of each partner. Although some traits will influence others, they don't all have equal weighting.

The top five individual variables that explained differences in relationship satisfaction were:

The five main relationship characteristics that influenced satisfaction were:

And although the individual characteristics have an important role to play, they are far less important than the relationship characteristics, the study says.
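
The study's models and variables are not reproduced here, but the sketch below shows, with invented placeholder features, how the aggregate importance of individual versus relationship characteristics might be compared using a generic model and permutation importance.

```python
# Placeholder features; the grouping and data are assumptions, not the study's variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
individual = rng.normal(size=(n, 5))       # e.g., life satisfaction, anxiety, ...
relationship = rng.normal(size=(n, 5))     # e.g., perceived commitment, appreciation, ...
satisfaction = relationship @ rng.uniform(0.5, 1.0, 5) + individual @ rng.uniform(0.0, 0.3, 5)

X = np.hstack([individual, relationship])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, satisfaction)
imp = permutation_importance(model, X, satisfaction, n_repeats=5, random_state=0)
print("individual-feature importance:  ", imp.importances_mean[:5].sum().round(3))
print("relationship-feature importance:", imp.importances_mean[5:].sum().round(3))
```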

Chart: The most popular online dating apps in the US, as of September 2019 (Image: Statista).

Love in the time of corona

The growth in the online dating market has been impressive and sustained. Around 276.9 million people are expected to use apps in their search for love by 2024, with revenues reaching $2.5 billion. And while the pandemic may have hampered short-term dating prospects, research by data company Statista suggests that more people have signed up to dating services over the past few months.

Chart: Percentage of adults in the United States who have used a dating website or app, as of April 2020 (Image: Statista).

In the United States, one of the biggest online dating markets in the world, heterosexual couples are now more likely to meet online than in any other way.

Chart: The percentage of couples meeting online has skyrocketed over the last few years (Image: Stanford University).

Read more from the original source:
The factors that'll make or break your relationship, according to AI - World Economic Forum

Why quantum computing matters – Axios

A new government initiative will direct hundreds of millions of dollars to support new centers for quantum computing research.

Why it matters: Quantum information science represents the next leap forward for computing, opening the door to powerful machines that can help provide answers to some of our most pressing questions. The nation that takes the lead in quantum will claim pole position for the future.

Details: The five new quantum research centers established in national labs across the country are part of a $1 billion White House program announced Wednesday morning that includes seven institutes that will explore different facets of AI, including precision agriculture and forecast prediction.

How it works: While AI is better known and increasingly integrated into our daily lives ("Hey, Siri"), quantum computing is just as important, promising huge leaps forward in computer processing power.

Of note: Albert Einstein famously hated the concept of entanglement, describing it as "spooky action at a distance." But the idea has held up over decades of research in quantum science.

Quantum computers won't replace classical ones wholesale, in part because the process of manipulating quantum particles is still highly tricky, but as they develop, they'll open up new frontiers in computing.

What they're saying: "Quantum is the biggest revolution in computers since the advent of computers," says Dario Gil, director of IBM Research. "With the quantum bit, you can actually rethink the nature of information."

The catch: While the underlying science behind quantum computers is decades old, quantum computers are only just now beginning to be used commercially.

What to watch: Who ultimately wins the race to quantum supremacy, the act of demonstrating that a quantum computer can solve a problem that even the fastest classical computer would be unable to solve in a feasible time frame.

The bottom line: The age of quantum computers isn't quite here yet, but it promises to be one of the major technological drivers of the 21st century.

Go here to read the rest:
Why quantum computing matters - Axios

Fermilab to lead $115 million National Quantum Information Science Research Center to build revolutionary quantum computer with Rigetti Computing,…

One of the goals of the Superconducting Quantum Materials and Systems Center is to build a beyond-state-of-the-art quantum computer based on superconducting technologies. The center also will develop new quantum sensors, which could lead to the discovery of the nature of dark matter and other elusive subatomic particles.

The U.S. Department of Energy's Fermilab has been selected to lead one of five national centers to bring about transformational advances in quantum information science as part of the U.S. National Quantum Initiative, the White House Office of Science and Technology Policy, the National Science Foundation and the U.S. Department of Energy announced today.

The initiative provides the new Superconducting Quantum Materials and Systems Center with funding toward the goal of building and deploying a beyond-state-of-the-art quantum computer based on superconducting technologies. The center also will develop new quantum sensors, which could lead to the discovery of the nature of dark matter and other elusive subatomic particles. Total planned DOE funding for the center is $115 million over five years, with $15 million in fiscal year 2020 dollars and outyear funding contingent on congressional appropriations. SQMS will also receive an additional $8 million in matching contributions from center partners.

The SQMS Center is part of a $625 million federal program to facilitate and foster quantum innovation in the United States. The 2018 National Quantum Initiative Act called for a long-term, large-scale commitment of U.S. scientific and technological resources to quantum science.

The revolutionary leaps in quantum computing and sensing that SQMS aims for will be enabled by a unique multidisciplinary collaboration that includes 20 partners: national laboratories, academic institutions and industry. The collaboration brings together world-leading expertise in all key aspects, from identifying qubits' quality limitations at the nanometer scale, to fabrication and scale-up capabilities for multiqubit quantum computers, to the exploration of new applications enabled by quantum computers and sensors.

The breadth of the SQMS physics, materials science, device fabrication and characterization technology combined with the expertise in large-scale integration capabilities by the SQMS Center is unprecedented for superconducting quantum science and technology, said SQMS Deputy Director James Sauls of Northwestern University. As part of the network of National QIS Research centers, SQMS will contribute to U.S. leadership in quantum science for the years to come.

Image: SQMS researchers are developing long-coherence-time qubits based on Rigetti Computing's state-of-the-art quantum processors (Rigetti Computing).

At the heart of SQMS research will be solving one of the most pressing problems in quantum information science: the length of time that a qubit, the basic element of a quantum computer, can maintain information, also called quantum coherence. Understanding and mitigating the sources of decoherence that limit the performance of quantum devices is critical to engineering next-generation quantum computers and sensors.
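As a rough illustration of why coherence time matters, the snippet below uses a simple exponential-decay model of coherence loss. The exp(-t/T2) form and the numbers are textbook simplifications assumed here for illustration, not SQMS results.

```python
import math

# Simplified model (an assumption for illustration): the probability that a qubit
# still holds its quantum information after time t decays roughly as exp(-t / T2).
def coherence_remaining(t_us: float, t2_us: float) -> float:
    """Fraction of coherence left after t_us microseconds for a qubit with the given T2."""
    return math.exp(-t_us / t2_us)

# A longer T2 directly stretches the window available for running quantum gates
# before the stored information degrades.
for t2_us in (100.0, 10_000.0):  # hypothetical coherence times in microseconds
    half_life = t2_us * math.log(2)
    print(f"T2 = {t2_us:>8.0f} us -> half the coherence is gone after {half_life:.0f} us")
```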

Unless we address and overcome the issue of quantum system decoherence, we will not be able to build quantum computers that solve new complex and important problems. The same applies to quantum sensors with the range of sensitivity needed to address long-standing questions in many fields of science, said SQMS Center Director Anna Grassellino of Fermilab. Overcoming this crucial limitation would allow us to have a great impact in the life sciences, biology, medicine, and national security, and enable measurements of incomparable precision and sensitivity in basic science.

The SQMS Center's ambitious goals in computing and sensing are driven by Fermilab's achievement of world-leading coherence times in components called superconducting cavities, which were developed for particle accelerators used in Fermilab's particle physics experiments. Researchers have expanded the use of Fermilab cavities into the quantum regime.

"We have the most coherent 3-D superconducting cavities in the world, by a factor of more than 200, which will be turned into quantum processors with unprecedented performance by combining them with Rigetti's state-of-the-art planar structures," said Fermilab scientist Alexander Romanenko, SQMS technology thrust leader and Fermilab SRF program manager. "This long coherence would not only enable qubits to be long-lived, but it would also allow them to be all connected to each other, opening qualitatively new opportunities for applications."

Photo: The SQMS Center's goals in computing and sensing are driven by Fermilab's achievement of world-leading coherence times in components called superconducting cavities, which were developed for particle accelerators used in Fermilab's particle physics experiments (Reidar Hahn, Fermilab).

To advance the coherence even further, SQMS collaborators will launch a materials-science investigation of unprecedented scale to gain insights into the fundamental limiting mechanisms of cavities and qubits, working to understand the quantum properties of superconductors and other materials used at the nanoscale and in the microwave regime.

Now is the time to harness the strengths of the DOE laboratories and partners to identify the underlying mechanisms limiting quantum devices in order to push their performance to the next level for quantum computing and sensing applications, said SQMS Chief Engineer Matt Kramer, Ames Laboratory.

Northwestern University, Ames Laboratory, Fermilab, Rigetti Computing, the National Institute of Standards and Technology, the Italian National Institute for Nuclear Physics and several universities are partnering to contribute world-class materials science and superconductivity expertise to target sources of decoherence.

SQMS partner Rigetti Computing will provide crucial state-of-the-art qubit fabrication and full stack quantum computing capabilities required for building the SQMS quantum computer.

By partnering with world-class experts, our work will translate ground-breaking science into scalable superconducting quantum computing systems and commercialize capabilities that will further the energy, economic and national security interests of the United States, said Rigetti Computing CEO Chad Rigetti.

SQMS will also partner with the NASA Ames Research Center quantum group, led by SQMS Chief Scientist Eleanor Rieffel. Their strengths in quantum algorithms, programming and simulation will be crucial for using the quantum processors developed by the SQMS Center.

The Italian National Institute for Nuclear Physics has been successfully collaborating with Fermilab for more than 40 years and is excited to be a member of the extraordinary SQMS team, said INFN President Antonio Zoccoli. With its strong know-how in detector development, cryogenics and environmental measurements, including the Gran Sasso national laboratories, the largest underground laboratory in the world devoted to fundamental physics, INFN looks forward to exciting joint progress in fundamental physics and in quantum science and technology.

Fermilab is excited to host this National Quantum Information Science Research Center and work with this extraordinary network of collaborators, said Fermilab Director Nigel Lockyer. This initiative aligns with Fermilab and its mission. It will help us answer important particle physics questions, and, at the same time, we will contribute to advancements in quantum information science with our strengths in particle accelerator technologies, such as superconducting radio-frequency devices and cryogenics.

We are thankful and honored to have this unique opportunity to be a national center for advancing quantum science and technology, Grassellino said. We have a focused mission: build something revolutionary. This center brings together the right expertise and motivation to accomplish that mission.

The Superconducting Quantum Materials and Systems Center at Fermilab is supported by the DOE Office of Science.

Fermilab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Read more here:
Fermilab to lead $115 million National Quantum Information Science Research Center to build revolutionary quantum computer with Rigetti Computing,...

A continent works to grow its stake in quantum computing – University World News

AFRICA

South Africa is a few steps ahead in the advancement of quantum computing and quantum technologies in general, said Mark Tame, professor in photonics at Stellenbosch University in the Western Cape.

South Africa's University of KwaZulu-Natal has also been working on quantum computing for more than a decade, gradually building up a community around the field.

"The buzz about quantum computing in South Africa just started recently due to the agreement between [Johannesburg's] University of the Witwatersrand and IBM," said Professor Francesco Petruccione, interim director, National Institute for Theoretical and Computational Science, and South African Research Chair in Quantum Information Processing and Communication at the School of Chemistry and Physics Quantum Research Group, University of KwaZulu-Natal.

Interest was intensified by Google's announcement last October that it had developed a 53-qubit device which it claimed took 200 seconds to sample one instance of a quantum circuit a million times. The IT company claimed it would take a state-of-the-art digital supercomputer 10,000 years to achieve this.

A University of Waterloo Institute for Quantum Computing paper stresses quantum computers' ability to express a signal (a qubit) of more than one value at the same time (the superposition ability), with that signal being manifested in another device independently, but in exactly the same way (the entanglement ability). This enables quantum computers to handle much more complex questions and problems than standard computers using binary codes of ones and zeros.
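A small sketch of these two ideas, using nothing more than NumPy linear algebra (an illustrative toy, not the Waterloo paper's code): a Hadamard gate puts one qubit into superposition, and a CNOT gate entangles it with a second qubit, so that a measurement can only yield 00 or 11.

```python
import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)          # two qubits in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)           # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                         # CNOT: entangles the two qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = CNOT @ np.kron(H, I) @ ket00                   # Bell state (|00> + |11>) / sqrt(2)
probabilities = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{basis}>) = {p:.2f}")                   # 0.50, 0.00, 0.00, 0.50
```

Measuring either qubit instantly determines the other, which is the entanglement property the paper highlights.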

The IBM Research Laboratory in Johannesburg offers African researchers the potential to harness such computing power. It was established in 2015 as part of a 10-year investment programme through the South African government's Department of Trade and Industry.

It is a portal to the IBM Quantum Experience, a cloud-based quantum computing platform accessible to other African universities that are part of the African Research Universities Alliance (ARUA), which involves 16 of the continent's leading universities (in Ethiopia, Ghana, Kenya, Nigeria, Rwanda, Senegal, Tanzania, Uganda and South Africa).

Levelling of the playing field

The IBM development has levelled the playing field for students, [giving them] access to the same hardware as students elsewhere in the world. There is nothing to hold them back to develop quantum applications and code. This has been really helpful for us at Stellenbosch to work on projects which need access to quantum processors not available to the general public, said Tame.

While IBM has another centre on the continent, at the Catholic University of Eastern Africa in Nairobi, Kenya, in 2018 the University of the Witwatersrand became the first African university to join the American computing giant's Quantum Computing Network. "They are starting to increase the network to have an army of quantum experts," said Professor Zeblon Vilakazi, a nuclear physicist, and vice-chancellor and principal of the University of the Witwatersrand.

At a continental level, Vilakazi said Africa is still in a learning phase regarding quantum computing. "At this early stage we are still developing the skills and building a network of young students," he said. The university has sent students to IBM's Zurich facility to learn about quantum computing, he added.

To spur cooperation in the field, a Quantum Africa conference has been held every year since 2010, with the first three in South Africa, and others in Algeria and Morocco. Last year's event was in Stellenbosch, while this year's event, to be hosted at the University of Rwanda, was postponed until 2021 due to the COVID-19 pandemic.

Growing African involvement

Rwanda is making big efforts to set up quantum technology centres, and I have former students now working in Botswana and the Gambia. It is slowly diffusing around the continent, said Petruccione.

Academics participating at the Stellenbosch event included Yassine Hassouni of Mohammed V University, Rabat; Nigerian academic Dr Obinna Abah of Queen's University Belfast; and Haikel Jelassi of the National Centre for Nuclear Sciences and Technologies, Tunisia.

In South Africa, experimental and theoretical work is also being carried out into quantum communications the use of quantum physics to carry messages via fibre optic cable.

A lot of work is being done on the hardware side of quantum technologies by various groups, but funding for these things is not the same order of magnitude as in, say, North America, Australia or the UK. We have to do more with less, said Tame.

Stellenbosch, near Cape Town, is carrying out research into quantum computing, quantum communication and quantum sensing (the ability to detect if a quantum-sent message is being read).

I would like it to grow over the next few years by bringing in more expertise and help the development of quantum computing and technologies for South Africa, said Tame.

Witwatersrand is focusing on quantum optics, as is Petruccione's team, while there is collaboration in quantum computing with the University of Johannesburg and the University of Pretoria.

University programmes

Building up and retaining talent is a key challenge as the field expands in Africa, as is expanding courses in quantum computing.

"South Africa doesn't offer a master's in quantum computing, or an honours programme, which we need to develop," said Petruccione.

This is set to change at the University of the Witwatersrand.

"We will launch a syllabus in quantum computing, and we're in the process of developing courses at the graduate level in physics, natural sciences and engineering. But such academic developments are very slow," said Vilakazi.

Further development will hinge on governmental support, with a framework programme for quantum computing being developed by Petruccione. There is interest from the [South African] Department of Science and Innovation. Because of [the economic impact of] COVID-19, I hope some money is left for quantum technology, but at least the government is willing to listen to the community, he said.

Universities are certainly trying to tap non-governmental support to expand quantum computing, engaging local industries, banks and pharmaceutical companies to get involved in supporting research.

We have had some interesting interactions with local banks, but it needs to be scaled up, said Petruccione.

Applications

While African universities are working on quantum computing questions that could be applicable anywhere in the world, there are plans to look into more localised issues. One is drug development for tuberculosis, malaria and HIV, diseases that have afflicted Southern Africa for decades, with quantum computing's ability to handle complex modelling of natural structures a potential boon.

There is potential there for helping in drug development through quantum simulations. It could also help develop quantum computing networks in South Africa and more broadly across the continent, said Vilakazi.

Agriculture is a further area of application. "The production of fertilisers is very expensive as it requires high temperatures, but bacteria in the soil do it for free. The reason we can't do what bacteria do is because we don't understand it. The hope is that, as quantum computing is good at chemical reactions, maybe we can model it and that would lead to cheaper fertilisers," said Petruccione.

With the world in a quantum computing race, with the US and China at the forefront, Africa is well positioned to take advantage of developments. We can pick the best technology coming out of either country, and that is how Africa should position itself, said Vilakazi.

Petruccione's group currently has collaborations with Russia, India and China. "We want to do satellite quantum communication. The first step is to have a ground station, but that requires investment," he said.

Link:
A continent works to grow its stake in quantum computing - University World News

New Microsoft program to help develop the quantum computing workforce of the future in India – Microsoft News Center India – Microsoft

900 faculty from top Indian institutes to be trained

New Delhi, August 24, 2020: Microsoft is creating a new program to build quantum computing skills and capabilities in the academic community in India. As part of this initiative, Microsoft Garage is organizing a Train the Trainer program in collaboration with Electronics and ICT Academies at Malaviya National Institute of Technology (MNIT), Jaipur and National Institute of Technology, Patna.

This program will train 900 faculty from Universities and Institutes across India through E & ICT Academies at Institutes of National Importance such as IIT Kanpur, IIT Guwahati, IIT Roorkee, MNIT Jaipur, NIT Patna, IIIT-D Jabalpur, and NIT Warangal, equipping academics with the required skills to start building their quantum future.

Quantum computing applies the properties of quantum physics to process information. Quantum computers will enable new discoveries in the areas of healthcare, energy, environmental systems, smart materials, and beyond. Microsoft is bringing the capabilities to develop for this quantum future, to the cloud with Azure Quantum.

Azure Quantum is an open cloud ecosystem enabling developers to access diverse quantum software, hardware, and solutions from Microsoft and its partners. It is built on Azure, a trusted, scalable and secure platform, and will continue to adapt to Microsoft's rapidly evolving cloud future. Moreover, it delivers the ability to have impact today through quantum-inspired solvers running on classical hardware and through explorations on classical hardware using the open source Quantum Development Kit and the Q# programming language.

The Quantum training program through the E & ICT Academies supports an initiative by the Ministry of Electronics & Information Technology (MeitY) to enhance the skills of academics in imparting next-level technological skills to future generations. Key themes that will be covered include an introduction to quantum information, quantum concepts such as superposition and entanglement, the processing of information using qubits and quantum gates, as well as an introduction to quantum machine learning and quantum programming.
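By way of illustration only, a circuit like the one sketched below touches most of those themes (superposition, entanglement, gates and measurement probabilities). The program itself teaches Q# and the Quantum Development Kit; Qiskit in Python is used here purely as a stand-in, and the three-qubit example is an assumption, not part of the published syllabus.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Three-qubit GHZ circuit: one Hadamard for superposition, two CNOTs for entanglement.
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)

print(qc.draw())                              # text drawing of the circuit
print(Statevector(qc).probabilities_dict())   # roughly {'000': 0.5, '111': 0.5}
```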

Rajiv Kumar, Managing Director, Microsoft India Development Center, and Corporate Vice President, Enterprise+Devices India, said, "India is renowned across the world for its science, technology, engineering, mathematics and computing (STEM+C) workforce, and a tech-capable citizenry. Through this initiative in India, we aim to develop skills in quantum at scale, which has the potential to trigger the new frontier of innovation, shaping the future of the IT industry in this part of the world."

Inaugurating the program, Ms. Reena Dayal, Director, Microsoft Garage India & Chair for IEEE Quantum SIG, said, "Quantum computing holds the potential to solve some of the most pressing issues our world faces today. Through this program, we aim to equip academia in India with the requisite knowledge to develop a comprehensive Quantum learning curriculum in their institutions and help develop these skills among some of the brightest minds in the country."

The training program will be conducted virtually, from August 24 to August 29, 2020. The program will also cover practical coding for participants using Microsoft Q# and the Quantum Development Kit.

Speaking on the collaboration, Prof. Udaykumar R Yaragatti, Director, MNIT Jaipur, said, "The institute is committed to providing state-of-the-art technologies to students and this collaboration with Microsoft will provide further encouragement to faculty members to explore the different aspects of Quantum Computing."

Prof. Pradip K Jain, Director, NIT Patna, said, "The COVID situation has given an opportunity for going digital with this program. This partnership will ignite the passion in faculty members who will in turn share the knowledge with their students."

About The Microsoft Garage

The Microsoft Garage is a program that drives a culture of experimentation and innovation at Microsoft. It delivers programs and experiences to employees, customers, and the broader ecosystem that drive collaboration and creativity. Its motto, "doers, not talkers," continues to be at the core. The Garage attracts people who are passionate about making a difference in the world. Garage India works on cutting-edge technologies and actively engages with the ecosystem in India.

About Microsoft

Microsoft (Nasdaq: MSFT; @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more. Microsoft set up its India operations in 1990. Today, Microsoft entities in India have over 11,000 employees, engaged in sales and marketing, research, development and customer services and support, across 11 Indian cities: Ahmedabad, Bengaluru, Chennai, New Delhi, Gurugram, Noida, Hyderabad, Kochi, Kolkata, Mumbai, and Pune. Microsoft offers its global cloud services from local data centers to accelerate digital transformation across Indian startups, businesses, and government organizations.

Read more:
New Microsoft program to help develop the quantum computing workforce of the future in India - Microsoft News Center India - Microsoft

Tufts Joins Major Effort to Build the Next Generation of Quantum Computers – Tufts Now

Tufts is joining a major U.S. Department of Energy (DOE) funded center called the Quantum Systems Accelerator (QSA), led by Lawrence Berkeley National Laboratory. The center hopes to create the next generation of quantum computers and apply them to the study of some of the most challenging problems in physics, chemistry, materials science, and more.

The QSA is one of five new DOE Quantum Information Science research centers announced on Aug. 26, and will be funded with $115 million over five years, supporting dozens of scientists at 15 institutions.

Peter Love, an associate professor of physics, will lead Tufts' participation in the project. "We have long been interested in using quantum computers for calculations in physics and chemistry," said Love.

"A large-scale quantum computer would be a very powerful instrument for studying everything from the structure of large molecules to the nature and behavior of subatomic particles," he said. "The only difficulty is that the quantum computers we need don't exist yet."

Quantum computers employ a fundamentally different approach to computing than those existing now, using quantum states of atoms, ions, light, quantum dots or superconducting circuits to store information.

The QSA will bring together world-class researchers and facilities to develop quantum systems that could significantly exceed the capability of today's computers. Multidisciplinary teams across all the institutions will work toward advancing qubit technology (the manner and materials in which information is stored in a quantum state) and other components of quantum computers.

Love's research will focus on developing simulation algorithms in areas such as particle and nuclear physics, which will be run on the new quantum computers. "It is important to work hard on the algorithms now, so we are ready when the hardware appears," he said. Love is also part of a National Science Foundation-funded effort to develop a quantum computer and applications to run on it.

Quantum computing is an important and growing area of research at Tufts. Tom Vandervelde, an associate professor in electrical and computer engineering, Luke Davis, an assistant professor of chemistry, and Cristian Staii, an associate professor of physics, are exploring new materials capable of storing qubits.

Philip Shushkov, Charles W. Fotis Assistant Professor of Chemistry, has research focused on theoretical modeling of qubit materials, while Misha Kilmer, William Walker Professor of Mathematics, and Xiaozhe Hu, associate professor of mathematics, study quantum-inspired algorithms relevant to their research in linear algebra. Bruce Boghosian, professor of mathematics, also made some fundamental contributions to quantum simulation in the late 1990s.

Mike Silver can be reached at mike.silver@tufts.edu.

Continued here:
Tufts Joins Major Effort to Build the Next Generation of Quantum Computers - Tufts Now

BBVA Uncovers The Promise Of Quantum Computing For Banking And Financial Services – Forbes

Computers have underpinned the digital transformation of the banking and financial services sector, and quantum computing promises to elevate this transformation to a radically new level. BBVA, the digital bank for the 21st century (established in 1857 and today the second largest bank in Spain), is at the forefront of investigating the benefits of quantum computing.

Will quantum computing move banking to a new level of digital transformation?

"We are trying to understand the potential impact of quantum computing over the next 5 years," says Carlos Kuchkovsky, global head of research and patents at BBVA. Last month, BBVA announced initial results from its recent exploration of quantum computing's advantage over traditional computing methods. Kuchkovsky's team looked at complex financial problems with many dimensions or variables that require computational calculations that sometimes take days to complete. In the case of investment portfolio optimization, for example, they found that the use of quantum and quantum-inspired algorithms could represent a significant speed-up compared to traditional techniques when there are more than 100 variables.
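The scaling problem Kuchkovsky describes is easy to see in a toy version of the task. The sketch below (entirely hypothetical data and a brute-force method, not BBVA's models) treats portfolio selection as a binary include/exclude choice per asset; the search space doubles with every added asset, which is exactly the combinatorial growth that quantum and quantum-inspired optimizers aim to tame.

```python
# Hedged sketch: exhaustive binary portfolio selection on toy data, to show why
# the classical search space explodes as the number of assets grows.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
n_assets = 12                                       # 2**12 candidate portfolios already
mu = rng.normal(0.05, 0.02, n_assets)               # hypothetical expected returns
cov = np.diag(rng.uniform(0.01, 0.05, n_assets))    # hypothetical (diagonal) risk
risk_aversion = 0.5

best_score, best_choice = -np.inf, None
for bits in product([0, 1], repeat=n_assets):       # 2**n_assets combinations
    x = np.array(bits)
    if x.sum() == 0:
        continue
    w = x / x.sum()                                 # equal weights over chosen assets
    score = mu @ w - risk_aversion * (w @ cov @ w)  # return minus risk penalty
    if score > best_score:
        best_score, best_choice = score, x

print("chosen assets:", np.flatnonzero(best_choice), "score:", round(best_score, 4))
print("combinations checked:", 2 ** n_assets, "- at 100+ assets this is infeasible")
```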

Carlos Kuchkovsky, Global Head of Research and Patents, BBVA

After hiring researchers with expertise in quantum computing, BBVA identified fifteen challenges that could be solved better with quantum computing, "faster and with greater accuracy," says Kuchkovsky. The results released last month were for six of these challenges, serving as proofs of concept for, first and foremost, the development of quantum algorithms, and also for their application in the following five financial services tasks: static and dynamic portfolio optimization, credit scoring process optimization, currency arbitrage optimization, and derivative valuations and adjustments.

Another important dimension of BBVA's quantum computing journey is developing an external network. The above six proofs of concept were pursued in collaboration with external partners, each bringing their own set of skills and expertise to the various investigations: the Spanish National Research Council (CSIC), the startups Zapata Computing and Multiverse, the technology firm Fujitsu, and the consulting firm Accenture.

Kuchkovsky advises technology and business executives at other companies, in any industry, to follow BBVA's initial steps: surveying the current state of the technology and the major players, developing internal expertise and experience with quantum computing and consolidating the internal team, identifying specific business problems, activities and opportunities where quantum computing could provide an advantage over today's computers, and developing an external network by connecting to and collaborating with relevant research centers and companies.

As for how to organize internally for quantum computing explorations, Kuchkovsky thinks there could be different possibilities, depending on the maturity of the research and technology functions of the business. In BBVA's case, the effort started in the research function, and he thinks it will evolve in a year or two into a full-fledged quantum computing center of excellence.

Quantum computing is evolving rapidly and Kuchkovsky predicts that in five years, companies around the world will enjoy full access to quantum computing as a service and will benefit from the application of quantum algorithms, also provided as a service. Specifically, he thinks we will see the successful application of quantum computing to machine learning (e.g., improving fraud detection in the banking sector). With the growing interest in quantum computing, Kuchkovsky believes that in five years there will be a sufficient supply of quantum computing talent to satisfy the demand for quantum computing expertise.

The development of a talent pool of experienced and knowledgeable quantum computing professionals depends, among other things, on close working relationships between academia and industry. These relationships tend to steer researchers towards practical problems and specific business challenges and, in turn, help upgrade the skills of engineers working in large corporations and orient them toward quantum computing.

In Kuchkovsky's estimation, the connection between academia and industry is relatively weaker in Europe compared to the United States. But there are examples of such collaboration, such as BBVA's work with CSIC and the European Union's Quantum Technologies Flagship, which brings together research centers, industry, and public funding agencies.

On July 29, Fujitsu announced a new collaboration with BBVA to test whether a quantum computer could outperform traditional computing techniques in optimizing asset portfolios, helping minimize risk while maximizing returns, based on a decade's worth of historical data. In the release, Kuchkovsky summarized BBVA's motivation for exploring quantum computing: "Our research is helping us identify the areas where quantum computing could represent a greater competitive advantage, once the tools have sufficiently matured. At BBVA, we believe that quantum technology will be key to solving some of the major challenges facing society this decade." Addressing these challenges dovetails with BBVA's strategic priorities, such as fostering the more efficient use of increasingly greater volumes of data for better decision-making as well as supporting the transition to a more sustainable future.

Continued here:
BBVA Uncovers The Promise Of Quantum Computing For Banking And Financial Services - Forbes

US to spend US$625m on super-computing research centres – The Business Times

Thu, Aug 27, 2020 - 6:50 AM

[SAN FRANCISCO] The US on Wednesday said it will spend US$625 million over the next five years on centers to research artificial intelligence and quantum computing.

An additional US$340 million will be contributed by the private sector and academic institutions, bringing the total planned investment close to US$1 billion, according to a release by the Department of Energy.

The money will go to establishing a dozen research institutes focused on artificial intelligence and quantum computing, the DOE said.

"These institutes will be world-class hubs for accelerating American innovation and building the 21st century American workforce," said US Chief Technology Officer Michael Kratsios.

The US invests more than US$500 million annually in AI research and is building on that effort to "advance American competitiveness," according to National Science Foundation director Sethuraman Panchanathan.

A Google official warned in January that in a technological race to the future, China could pour "enormous resources" into developing super-computers with quantum technology.

US officials and scientists in July began laying the groundwork for a more secure "virtually unhackable" internet based on quantum computing technology.

During a presentation at that time, DOE officials issued a report laying out a strategy for the development of a national quantum internet, using laws of quantum mechanics to transmit information more securely than on existing networks.

The agency is working with universities and industry researchers with the aim of creating a prototype within a decade.

"The foundation of quantum networks rests on our ability to precisely synthesize and manipulate matter at the atomic scale, including the control of single photons," David Awschalom, a University of Chicago professor and senior scientist at Argonne National Laboratory, said at the time.

Not included in the US announcement Wednesday were Google and Honeywell, which have claimed strides in quantum computing research.

US manufacturing and technology group Honeywell earlier this year said it would bring to market "the world's most powerful quantum computer" aimed at tackling complex scientific and business challenges.

The company said it had achieved a breakthrough in quantum computing, which uses subatomic particles to speed up processing.

Quantum computing is based on the use of quantum bits, or qubits, which can represent multiple states simultaneously and in some cases allow quantum machines to outperform the fastest traditional supercomputers.

The Honeywell announcement came after Google claimed last year to have achieved "quantum supremacy" by developing a machine outperforming the world's fastest supercomputers.

Google said that its Sycamore quantum processor solved a computing problem within 200 seconds which would have taken 10,000 years on a traditional computer.

IBM runs its own quantum computing program.

AFP

Read more from the original source:
US to spend US$625m on super-computing research centres - The Business Times